ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT
Frontiers in Artificial Intelligence and Applications (FAIA) covers all aspects of theoretical and applied artificial intelligence research in the form of monographs, doctoral dissertations, textbooks, handbooks and proceedings volumes. The FAIA series contains several sub-series, including “Information Modelling and Knowledge Bases” and “Knowledge-Based Intelligent Engineering Systems”. It also includes the proceedings volumes of the biennial ECAI, the European Conference on Artificial Intelligence, and other publications sponsored by ECCAI, the European Coordinating Committee on Artificial Intelligence. An editorial panel of internationally well-known scholars is appointed to provide a high-quality selection.
Series Editors: J. Breuker, R. Dieng, N. Guarino, J.N. Kok, J. Liu, R. López de Mántaras, R. Mizoguchi, M. Musen and N. Zhong
Volume 146
Recently published in this series:
Vol. 145. A.J. Knobbe, Multi-Relational Data Mining
Vol. 144. P.E. Dunne and T.J.M. Bench-Capon (Eds.), Computational Models of Argument – Proceedings of COMMA 2006
Vol. 143. P. Ghodous et al. (Eds.), Leading the Web in Concurrent Engineering – Next Generation Concurrent Engineering
Vol. 142. L. Penserini et al. (Eds.), STAIRS 2006 – Proceedings of the Third Starting AI Researchers’ Symposium
Vol. 141. G. Brewka et al. (Eds.), ECAI 2006 – 17th European Conference on Artificial Intelligence
Vol. 140. E. Tyugu and T. Yamaguchi (Eds.), Knowledge-Based Software Engineering – Proceedings of the Seventh Joint Conference on Knowledge-Based Software Engineering
Vol. 139. A. Bundy and S. Wilson (Eds.), Rob Milne: A Tribute to a Pioneering AI Scientist, Entrepreneur and Mountaineer
Vol. 138. Y. Li et al. (Eds.), Advances in Intelligent IT – Active Media Technology 2006
Vol. 137. P. Hassanaly et al. (Eds.), Cooperative Systems Design – Seamless Integration of Artifacts and Conversations – Enhanced Concepts of Infrastructure for Communication
Vol. 136. Y. Kiyoki et al. (Eds.), Information Modelling and Knowledge Bases XVII
Vol. 135. H. Czap et al. (Eds.), Self-Organization and Autonomic Informatics (I)
Vol. 134. M.-F. Moens and P. Spyns (Eds.), Legal Knowledge and Information Systems – JURIX 2005: The Eighteenth Annual Conference
ISSN 0922-6389
Artificial Intelligence Research and Development
Edited by
Monique Polit and Thierry Talbert Laboratoire de Physique Appliquée et d’Automatique, University of Perpignan Via Domitia, France
Beatriz López and Joaquim Meléndez Institute of Informatics and Applications, University of Girona, Spain
Amsterdam • Berlin • Oxford • Tokyo • Washington, DC
© 2006 The authors and IOS Press. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without prior written permission from the publisher.
ISBN 1-58603-663-7
Library of Congress Control Number: 2006932182
Publisher: IOS Press, Nieuwe Hemweg 6B, 1013 BG Amsterdam, Netherlands
fax: +31 20 687 0019, e-mail: [email protected]
Distributor in the UK and Ireland: Gazelle Books Services Ltd., White Cross Mills, Hightown, Lancaster LA1 4XS, United Kingdom
fax: +44 1524 63232, e-mail: [email protected]
Distributor in the USA and Canada: IOS Press, Inc., 4502 Rachael Manor Drive, Fairfax, VA 22032, USA
fax: +1 703 323 3668, e-mail: [email protected]
LEGAL NOTICE
The publisher is not responsible for the use which might be made of the following information.
PRINTED IN THE NETHERLANDS
Preface

Artificial Intelligence (AI) is an essential branch of computer science. The field covered by AI is multiform and gathers subjects as varied as knowledge engineering, natural language processing, machine learning and multi-agent systems, to name only a few. The history of AI has gone through several periods of evolution, passing from periods of doubt to very fertile ones. AI has now reached maturity; it has not remained an isolated field of computer science, but has approached various fields such as statistics, data analysis, linguistics, cognitive psychology and databases. AI is focused on providing solutions to real-life problems and is now used routinely in medicine, economics, military applications and strategy games.

The Catalan Association for Artificial Intelligence (ACIA¹), with the aim of bringing together the researchers of the AI community, organizes an annual conference to promote synergies in the research community of its area of influence. The advances made by ACIA members and researchers from its area of influence have been gathered in this single volume as an update of previous volumes published in 2003, 2004 and 2005 (corresponding to numbers 100, 113 and 131 of the series “Frontiers in Artificial Intelligence and Applications”).

The book is organized according to the different sessions in which the papers were presented at the Ninth International Conference of the Catalan Association for Artificial Intelligence, held in Perpignan (France) on October 26–27th, 2006, namely: Machine Learning, Reasoning, Neural Networks, Computer Vision, Planning and Robotics, and Multiagent Systems. For the first time the conference was organized in “French Catalonia”, and we want to thank ACIA for the confidence it granted us.

Papers were selected after a double-blind review process in which distinguished AI researchers participated, and their quality was high on average. All the papers collected in this volume should be of interest to any computer scientist or engineer interested in AI.

We would like to express our sincere gratitude to all the authors and to the members of the scientific and organizing committees who have made this conference a success. We also send special thanks to the invited speakers for their effort in preparing the lectures.

Perpignan, October 2006
Monique Polit (University of Perpignan)
Joseph Aguilar-Martin (LAAS/CNRS, Toulouse)
Beatriz López (University of Girona)
Joaquim Meléndez (University of Girona)
¹ ACIA, the Catalan Association for Artificial Intelligence, is a member of the European Coordinating Committee for Artificial Intelligence (ECCAI). http://www.acia.org.
Conference Organization
CCIA 2006 was organized by: the University of Perpignan (LP2A), the LAAS/CNRS of Toulouse (DISCO group), the University of Girona and the Associació Catalana d’Intelligència Artificial.
General Chairs Monique Polit, LP2A, University of Perpignan Joseph Aguilar-Martin, LAAS/CNRS, Toulouse Beatriz López, University of Girona Joaquim Meléndez, University of Girona
Scientific Committee Isabel Aguiló, UIB Josep Aguilar, LAAS-CNRS Cecilio Angulo, GREC-UPC Ester Bernardó, EALS-URL Vicent Botti, UPV Jaume Casasnovas, UIB Jesus Cerquides, UB M. Teresa Escrig, UJI Francesc Ferri, UV Rafael García, VICOROB-UdG Josep M. Garrell, EALS-URL Héctor Geffner, UPF Elisabet Golobardes, EALS-URL M. Angeles Lopez, UJI Ramon Lopez de Mantaras, IIIA/CSIC, UAB Beatriz López, ARLAB-UdG Maite López, UB Joan Martí, VICOROB-UdG Enric Martí, CVC-UAB Joaquím Melendez, eXiT-UdG
Organizing Committee Sauveur Bénet, LP2A, University of Perpignan Eduard Diez Lledo, LAAS/CNRS, Toulouse Tatiana Kempowsky, LAAS/CNRS, Toulouse
Margaret Miró, UIB Antonio Moreno, URV Eva Onaindia, UPV Miquel Angel Piera, UAB Filiberto Pla, UJI Enric Plaza, IIIA-CSIC Monique Polit, U. Perpinyan Josep Puyol-Gruart, IIIA-CSIC Petia Radeva, CVC-UAB Ignasi Roda, LEQUIA-UdG Josep Lluís de la Rosa, ARLAB-UdG Xari Rovira, ESADE-URL Mónica Sànchez, (Grec UPC) Ricardo Toledo, CVC-UAB Miguel Toro, U. Sevilla Vicenc Torra, IIIA-CSIC Louise Través, LAAS/CNRS Magda Valls, UdL Llorenç Valverde, UIB Jordi Vitrià, CVC-UAB
Stéphane Grieu, LP2A, University of Perpignan Claudia Isaza, LAAS/CNRS, Toulouse Thierry Talbert, LP2A, University of Perpignan Adama Traoré, LP2A, University of Perpignan
Web Managers Stéphane Grieu, Thierry Talbert
Sponsoring Institutions
Contents

Preface
  Monique Polit, Joseph Aguilar-Martin, Beatriz López and Joaquim Meléndez  v
Conference Organization  vi

Invited Talks
Application of Expert Systems in Medicine
  Francklin Rivas Echeverría and Carlos Rivas Echeverría  3
From Artificial Intelligence to Natural Stupidity (and Back) in Only Fifty Years
  Ton Sales  5

1. Machine Learning
Real-Time Object Detection Using an Evolutionary Boosting Strategy
  Xavier Baro and Jordi Vitria  9
Support Vector Machines for Color Adjustment in Automotive Basecoat
  Francisco Ruiz, Cecilio Angulo and Núria Agell  19
Optimal Extension of Error Correcting Output Codes
  Sergio Escalera, Oriol Pujol and Petia Radeva  28
A Comparative Analysis of Different Classes-Interpretation Support Techniques
  Karina Gibert, Alejandra Perez-Bonilla and Gustavo Rodriguez-Silva  37
Learning from Cooperation Using Justifications
  Eloi Puertas and Eva Armengol  47
A Method to Classify Data by Fuzzy Rule Extraction from Imbalanced Datasets
  Vicenç Soler, Jesus Cerquides, Josep Sabria, Jordi Roig and Marta Prim  55
Qualitative Induction Trees Applied to the Study of the Financial Rating
  Llorenç Roselló, Núria Agell, Mónica Sánchez and Francesc Prats  63
Tactical Modularity for Evolutionary Animats
  Ricardo A. Téllez and Cecilio Angulo  71
An Optimization Method for the Data Space Partition Obtained by Classification Techniques for the Monitoring of Dynamic Processes
  C. Isaza, J. Aguilar-Martin, M.V. Le Lann, J. Aguilar and A. Rios-Bolivar  80

2. Reasoning
A General Approach for Qualitative Reasoning Models Based on Intervals
  Ester Martínez and M. Teresa Escrig  91
Coarse Qualitative Model of 3-D Orientation
  Julio Pacheco and Mª Teresa Escrig  103
Fuzzified Strategic Maps
  Ronald Uriel Ruiz Ordóñez, Josep Lluis de la Rosa i Esteva and Javier Guzmán Obando  114
A Qualitative Representation Model About Trajectories in 2-D
  J.V. Álvarez-Bravo, J.C. Peris-Broch, M.T. Escrig-Monferrer, J.J. Álvarez-Sánchez and F.J. González-Cabrera  124
Proposition of Non-Probabilistic Entropy as Reliability Index for Decision Making
  Eduard Diez-Lledo and Joseph Aguilar-Martin  137

3. Neural Networks
Kohonen Self-Organizing Maps and Mass Balance Method for the Supervision of a Lowland River Area
  Frédérik Thiery, Esther Llorens, Stéphane Grieu and Monique Polit  147

4. Computer Vision
Two-Step Tracking by Parts Using Multiple Kernels
  Brais Martínez, Luis Ferraz and Xavier Binefa  157
n-Dimensional Distribution Reduction Preserving Its Structure
  Eduard Vazquez, Francesc Tous, Ramon Baldrich and Maria Vanrell  167

5. Planning and Robotics
Seat Allocation for Massive Events Based on Region Growing Techniques
  Víctor Muñoz, Miquel Montaner and Josep Lluís de la Rosa  179
Solving the Response Time Variability Problem by Means of Metaheuristics
  Alberto García, Rafael Pastor and Albert Corominas  187
Planning Under Temporal Uncertainty in Durative Actions
  J. Antonio Alvarez, Laura Sebastia and Eva Onaindia  195
Building a Local Hybrid Map from Sensor Data Fusion
  Zoe Falomir, Juan Carlos Peris and M. Teresa Escrig  203
Map Building Including Qualitative Reasoning for Aibo Robots
  David A. Graullera, Salvador Moreno and M. Teresa Escrig  211
Cognitive Vision Based on Qualitative Matching of Visual Textures and Envision Predictions for Aibo Robots
  David A. Graullera, Salvador Moreno and M. Teresa Escrig  219
Assessing the Aggregation of Parameterized Imprecise Classification
  Isabela Drummond, Joaquim Meléndez and Sandra Sandri  227

6. Multiagent Systems
Dynamic Electronic Institutions for Humanitarian Aid Simulation
  Eduard Muntaner-Perich, Josep Lluis de la Rosa, Claudia Isabel Carrillo Flórez, Sonia Lizzeth Delfín Ávila and Araceli Moreno Ruiz  239
Extending the BDI Architecture with Commitments
  Dorian Gaertner, Pablo Noriega and Carles Sierra  247
Recommendations Using Information from Selected Sources with the ISIRES Methodology
  Silvana Aciar, Josefina López Herrera and Josep Lluis de la Rosa  258
Social Currencies and Knowledge Currencies
  Claudia Carrillo, Josep Lluis de la Rosa, Araceli Moreno, Eduard Muntaner, Sonia Delfin and Agustí Canals  266
Improving the Team-Work in Heterogeneous Multi-Agent Systems: Situation Matching Approach
  Salvador Ibarra, Christian Quintero, Didac Busquets, Josep Ramón, Josep Ll. de la Rosa and José A. Castán  275
WIKIFAQ: Obtaining Complete FAQs
  Araceli Moreno, Claudia Carrillo, Sonia Delfin, Eduard Muntaner and Josep Lluis de la Rosa  283
Designing a Multi-Agent System to Simulate Scenarios for Decision-Making in River Basin Systems
  Thania Rendón-Sallard, Miquel Sànchez-Marrè, Montserrat Aulinas and Joaquim Comas  291
Outline of Citation Auctions
  Josep Lluis de la Rosa i Esteva  299
Improving Privacy of Recommender Agents by Means of Full Dissociation
  Sonia Delfin, Claudia Carrillo, Eduard Muntaner, Araceli Moreno, Salvador Ibarra and Josep Lluis de la Rosa  308

Author Index  317
Invited Talks
Application of expert systems in medicine
Francklin Rivas Echeverría* and Carlos Rivas Echeverría**
Laboratorio de Sistemas Inteligentes, Universidad de Los Andes, Mérida, Venezuela
e-mail: * [email protected], ** [email protected]
Abstract
In this talk we will present some developments of expert systems for decision-making in diagnosis and treatment in medicine. These systems guide the user to collect the patient information easily, focusing on those information points that can lead to a possible diagnosis and to the appropriate treatment of the diseases. They guide the user during the physical examination of the patient, showing the definitions, images, sounds and/or videos of the signs associated with the disease, and verify that the doctor does not forget to examine any of the diagnostic criteria, even if it is the first time that he sees or learns about a given sign. Once the patient data are collected, the diagnosis is based on the stored medical knowledge. The data on symptoms or signs, special laboratory data, tests or radiological images are processed by the system using defined rules to obtain the possible diagnoses. Additional data, such as the presence or absence of certain signs and symptoms, help to reach a final diagnosis. The rules of these expert systems include the diagnostic criteria of worldwide associations, as well as algorithms designed by members of the Laboratory of Intelligent Systems of the University of The Andes. The qualities of this system are:
1. It can diagnose one or more diseases and suggest the appropriate therapy.
2. It can diagnose the absolute absence of any of these diseases.
3. It can find symptoms or signs due to an exogenous cause (a differential diagnosis).
4. It notifies the doctor that the patient does not fulfil the minimum criteria for some of the diseases and, in this case, suggests a new evaluation.
5. It suggests sending the patient to a specialist.
The reasoning for establishing diagnoses or diagnostic hypotheses is given, as well as the plans for further examinations and for patient treatment. The system also indicates when there are unexplained signs, symptoms or laboratory data. The systems include a set of questions individualized for each subject and the selection of the data to be acquired by answering those questions.
About the speaker
Francklin Rivas Echeverría is an Associate Professor at the Universidad de Los Andes (ULA) in Mérida, Venezuela. He is an Engineer in Systems and holds a Master of Science in Control Engineering and a PhD in Applied Sciences. He is currently Director of the Intelligent Systems Laboratory at ULA. He is co-editor of the book “Introduction to the Techniques of Intelligent Computing” and co-author of the book “Control of Non-Linear Systems”, published by Pearson Education. He is the author of more than 100 papers in journals and conference proceedings and has supervised more than 50 theses. He is a member of the scientific committees of various conferences and a reviewer for several journals and for national and international programs.
From Artificial Intelligence to Natural Stupidity (and Back) in only Fifty Years Ton Sales
About the speaker
Ton Sales was Professor of Logic and Artificial Intelligence at the School of Computer Science of the Technical University of Catalonia (UPC), Barcelona. He is now retired.
1. Machine Learning
Real-time object detection using an evolutionary boosting strategy
Xavier BARO a and Jordi VITRIA a,b
a Centre de Visió per Computador, b Dept. de Ciències de la Computació, UAB
Edifici O, Campus UAB, 08193 Bellaterra, Barcelona, Catalonia, Spain
{xbaro,jordi}@cvc.uab.es
Abstract. This paper presents a brief introduction to the currently most used object detection scheme. We highlight its two critical points in terms of training time and present a variant that solves one of them: we propose to replace the WeakLearner of the Adaboost algorithm by a genetic algorithm. In addition, this approach allows us to work with high-dimensional feature spaces which cannot be used in the traditional scheme. We also use dissociated dipoles, a generalized version of the Haar-like features used in the detection scheme. This type of feature is an example of a high-dimensional feature space, especially when extended to color spaces.
Keywords. Boosting, genetic algorithms, dissociated dipoles, object detection, high dimensional feature space
1. Introduction
Object recognition is one of the most important, yet least understood, aspects of visual perception. For many biological vision systems, the recognition and classification of objects is a spontaneous, natural activity. Young children can recognize a large variety of objects immediately and effortlessly. In contrast, the recognition of common objects is still way beyond the capabilities of artificial systems, or of any recognition model proposed so far [1].
Our work is focused on a special case of object recognition, the object detection problem. Object detection can be viewed as a specific kind of classification, where we only have two classes: the class object and the class no object. Given a set of images, the objective of object detection is to find regions in these images which contain instances of a certain kind of object. Nowadays, the most used and accepted scheme to perform object detection is the one proposed by Viola and Jones in [2]. They use a cascade of weak classifiers based on simple rectangular features, and their scheme allows object detection to be performed in real time. The main problem of this method is the training time, which increases exponentially with the number of features and samples, because in each iteration of Adaboost an exhaustive search over all the features must be performed in order to find the best one. Some strategies have been presented to speed up this method, affecting either the initial feature set or the search strategy. For instance, in [3] an initial feature selection is performed in order to reduce the initial feature set, and in [4] a heuristic search is performed instead of the exhaustive search over all features.
This paper is organized as follows: in Section 2 we present a brief explanation of the Viola and Jones object detector and point out its bottlenecks. In Section 3 we present the dissociated dipoles, a more general type of rectangular feature, and their extension to color. Finally, in Sections 4 and 5, we present the introduced modifications: an evolutionary strategy in the Adaboost method that not only speeds up the learning process but also allows the use of the Viola and Jones scheme with high-dimensional feature spaces, and a redefinition of the weak classifiers.
2. Object Detection
In this section we present a brief introduction to Viola's face detector [2], using the feature set extension of Lienhart and Maydt [5]. This method is the canonical example of rare object detection. The main points of their scheme are the choice of simple and fast calculable features and a cascaded detector, which allows real-time object detection. We briefly present the features and the final detector structure, and then concentrate our attention on the learning process.
2.1. Haar-like features
Figure 1. a) Extended Haar-like features. The sum of all the pixels under the black regions is subtracted from the sum of all the pixels under the white regions. b) Integral image. The value at the point (x, y) is the sum of all pixels in the gray region. c) Rotated integral image. The value at the point (x, y) is the sum of all pixels in the gray region.
Haar-like features are local differences between contiguous regions of the image (see Fig. 1). This kind of feature is inspired by the human visual system and is very robust to illumination changes and noise. In addition, they can be calculated very fast using the integral image. In [5], Lienhart and Maydt extended the initial set used by Viola and Jones by adding the rotated version of each feature. These new feature prototypes can also be calculated in a short time using the rotated integral image.
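As an illustration of why these features are fast to evaluate, the following minimal Python sketch (ours, not the authors' code) computes an integral image and evaluates a two-rectangle Haar-like feature with a handful of array accesses per rectangle.

```python
import numpy as np

def integral_image(img):
    """Padded cumulative sum: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left corner (x, y), width w, height h."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, x, y, w, h):
    """Example two-rectangle feature: left half minus right half of the region."""
    left = rect_sum(ii, x, y, w // 2, h)
    right = rect_sum(ii, x + w // 2, y, w // 2, h)
    return left - right

# usage: img = np.random.randint(0, 256, (24, 24)); ii = integral_image(img)
# f = haar_two_rect(ii, 0, 0, 12, 12)
```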
2.2. Weak classifier
A weak classifier h : R^n → {−1, 1} is composed of a Haar-like feature f, a threshold thr and a polarity value p. It is the lowest-level classifier in this detection scheme. Given an input image X, h(X) can be calculated as
h(X) = 1 if p · f(X) < p · thr, and h(X) = −1 otherwise,
where f(X) is the value of the feature f on the image X.
2.3. Cascade
The use of a cascade of classifiers is one of the keys that allows real-time detection. The main idea is to discard easy non-object windows in a short time and concentrate the effort on the hardest ones. Each stage of the cascade is a classifier (see Fig. 2), composed of a set of weak classifiers and their related threshold value.
Figure 2. Cascade of classifiers.
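A minimal Python sketch of cascade evaluation follows (ours): a window is rejected as soon as the weighted vote of some stage falls below that stage's threshold, which is what makes most non-object windows cheap to discard.

```python
def cascade_classify(stages, x):
    """stages: list of (weak_classifiers, stage_threshold) pairs, where each weak
    classifier is an (alpha, h) pair with h(x) in {-1, +1}. Returns True only if
    the window x passes every stage of the cascade."""
    for weak_classifiers, threshold in stages:
        score = sum(alpha * h(x) for alpha, h in weak_classifiers)
        if score < threshold:
            return False      # early rejection: most non-object windows stop here
    return True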
2.4. Learning process
Once the basic components have been introduced, we now analyze the learning algorithm, pointing out its critical points. Adaboost is an algorithm introduced by Freund and Schapire in [6] to build a strong classifier as a linear combination of weak classifiers. In Fig. 3 the Adaboost algorithm is presented. Step 3a of the algorithm is also called the WeakLearner, because it is where the parameters of the weak classifier are learned.
Figure 3. Adaboost algorithm.
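For reference, a minimal Python sketch of the discrete Adaboost loop follows (ours, not taken from the paper); it assumes weak classifiers with outputs in {−1, 1} and a generic weak_learner routine. The weak_learner call corresponds to step 3a.

```python
import numpy as np

def adaboost(X, y, weak_learner, T):
    """Discrete Adaboost: y in {-1, +1}; weak_learner(X, y, w) returns a weak
    classifier h with h.predict(X) in {-1, +1} and its weighted error."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # uniform initial weight distribution
    ensemble = []
    for t in range(T):
        h, err = weak_learner(X, y, w)   # step 3a: train a weak classifier
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)
        pred = h.predict(X)
        w *= np.exp(-alpha * y * pred)   # increase weights of misclassified samples
        w /= w.sum()
        ensemble.append((alpha, h))
    return ensemble

def strong_classify(ensemble, X):
    """Sign of the weighted vote of the weak classifiers."""
    score = sum(alpha * h.predict(X) for alpha, h in ensemble)
    return np.sign(score)
```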
This WeakLearner step is the bottleneck of the learning process and is where we introduce our modifications. The WeakLearner determines the best combination of feature, threshold and polarity value in order to minimize the weighted classification error. Basically, the steps followed to determine these parameters are (see the sketch below):
1. For each feature f in the feature set:
   (a) Calculate the value of f over each sample.
   (b) Sort all the different values of f and store them into V.
   (c) Calculate the error using the values in V as the threshold. Two errors are calculated, one for each polarity value.
   (d) Select the threshold and polarity value that minimize the error.
2. Select the feature, threshold and polarity values that minimize the error.
After analyzing these steps, one can see that step 1 must be repeated as many times as there are features in the feature set. Therefore, in large feature spaces this process takes a lot of time. In order to reduce the number of iterations, we use a genetic algorithm, which generates in each generation a small feature set, reducing the number of iterations of the WeakLearner. Inside the loop, for each feature we have to use a sorting algorithm and recalculate the error over each possible threshold and polarity value. The time spent in this case is proportional both to the number of samples and to the number of features. To reduce the time inside the loop, in Section 5 we propose an approximation by normal distributions instead of the use of threshold and polarity values.
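The exhaustive WeakLearner outlined in the numbered steps above can be sketched in Python as follows (ours, not the authors' code), for a matrix of precomputed feature values; the per-feature sort-and-scan is what makes training expensive when the feature set is large.

```python
import numpy as np

def exhaustive_weak_learner(F, y, w):
    """F[i, j]: value of feature j on sample i; y in {-1, +1}; w: sample weights.
    Returns (feature index, threshold, polarity, weighted error) minimizing the error."""
    best = (None, None, None, np.inf)
    total_pos = np.sum(w * (y == +1))
    for j in range(F.shape[1]):                       # step 1: loop over every feature
        order = np.argsort(F[:, j])                   # step 1b: sort feature values
        f_s, y_s, w_s = F[order, j], y[order], w[order]
        cum_pos = np.cumsum(w_s * (y_s == +1))        # positive weight at or below each value
        cum_neg = np.cumsum(w_s * (y_s == -1))        # negative weight at or below each value
        err_p = cum_neg + (total_pos - cum_pos)       # polarity +1: predict +1 below threshold
        err_n = w.sum() - err_p                       # polarity -1: the opposite rule
        for errs, p in ((err_p, +1), (err_n, -1)):    # steps 1c, 1d: best threshold per polarity
            k = int(np.argmin(errs))
            if errs[k] < best[3]:
                best = (j, f_s[k], p, errs[k])        # step 2: keep the overall minimum
    return best
```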
3. Dissociated dipoles
A fundamental question in visual neuroscience is how to represent image structure. The most commonly used representation schemes rely on spatially localized differential operators, approximated as Gabor filters with a set of excitatory and inhibitory lobes, which compare adjacent regions of an image [7]. As we explained in Section 2, Viola and Jones use Haar-like features, which are an example of this type of local filter. In [7], Balas presents a more general type of operator, the dissociated dipoles or sticks (see Fig. 4), performing psychophysical experiments and implementing a system for content-based image retrieval in a diverse set of domains, including outdoor scenes, faces and letters. Balas’ work demonstrates the effectiveness of dissociated dipoles for representing the image structure.
Figure 4. Dissociated dipoles. The mean value of the black region is subtracted from the mean value of the white region.
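Building on the integral-image sketch above, the following Python fragment (ours; the rectangle parameterization mirrors the description in Section 4.1) evaluates a dissociated dipole as the difference between the mean values of two arbitrary, possibly non-adjacent rectangles, optionally on a chosen color channel.

```python
def rect_mean(ii, x, y, w, h):
    """Mean pixel value of a rectangle, from an integral image ii (see sketch above)."""
    s = ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
    return s / float(w * h)

def dipole_value(iis, exc, inh):
    """iis: one integral image per color channel; exc/inh: (x, y, w, h, channel)
    rectangles for the excitatory and inhibitory poles of a dissociated dipole."""
    xe, ye, we, he, ce = exc
    xi, yi, wi, hi, ci = inh
    return rect_mean(iis[ce], xe, ye, we, he) - rect_mean(iis[ci], xi, yi, wi, hi)

# usage (24x24 RGB window, assumed):
# iis = [integral_image(img[:, :, c]) for c in range(3)]
# v = dipole_value(iis, exc=(2, 3, 6, 4, 0), inh=(15, 10, 5, 5, 2))
```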
3.1. Extension to color space
Most images acquired nowadays contain color information, which for most kinds of objects is a relevant characteristic. In order to exploit the color information, we propose a simple variation of the dissociated dipoles, considering that each dipole can be evaluated on one of the image channels. This small change results in an exponential growth in the number of features.
4. Genetic WeakLearner
Since Adaboost does not require the selection of the best weak classifier in each iteration, but only a weak classifier with an error lower than 0.5, an evolutionary approach seems the most logical way to reduce the time complexity. As a first approach we propose the use of a classic genetic algorithm, which will not necessarily select the best feature, but a reasonably good one. A Gaussian mutation function and one-point crossover are used. In order to speed up the convergence of the genetic algorithm, elitism is also used. To guarantee diversity, we evolve 4 independent populations and, every certain number of generations, the best individuals of each population are added to the other populations. Finally, it must be taken into account that the convergence of genetic algorithms is not guaranteed for every initialization, and the algorithm can converge to a solution with an error higher than 0.5. When this occurs, we reinitialize the genetic algorithm with a new random population.
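The following Python sketch is our own illustration of this scheme; the population size, migration interval and selection rule are assumptions (the paper does not specify them), and fitness(c) stands for the weighted classification error of Section 4.2.

```python
import random

def genetic_weak_learner(fitness, random_chromosome, mutate, crossover,
                         n_pops=4, pop_size=50, generations=40, migrate_every=10):
    """fitness(c) returns the weighted error of chromosome c (lower is better)."""
    pops = [[random_chromosome() for _ in range(pop_size)] for _ in range(n_pops)]
    for g in range(generations):
        for p in range(n_pops):
            pops[p].sort(key=fitness)
            elite = pops[p][:2]                                   # elitism
            children = []
            while len(children) < pop_size - len(elite):
                a, b = random.sample(pops[p][:pop_size // 2], 2)  # pick parents from the better half
                children.append(mutate(crossover(a, b)))          # one-point crossover + mutation
            pops[p] = elite + children
        if (g + 1) % migrate_every == 0:                          # migration between populations
            bests = [min(pop, key=fitness) for pop in pops]
            for p in range(n_pops):
                pops[p][-1] = bests[(p + 1) % n_pops]             # inject the best of another population
    best = min((c for pop in pops for c in pop), key=fitness)
    if fitness(best) >= 0.5:                                      # restart with a new random population
        return genetic_weak_learner(fitness, random_chromosome, mutate, crossover,
                                    n_pops, pop_size, generations, migrate_every)
    return best
```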
4.1. Features codification
Since our features are dissociated dipoles, we need to codify two rectangles. Each of the rectangles is parameterized by a position (X, Y) in the training window, a width value W and a height value H. If we consider the color extension, each of the rectangles must also codify C, the channel over which it is calculated. Therefore, the parameters codified by a chromosome are {Xe, Ye, We, He, Ce} for the
excitatory dipole and {Xi,Yi,Wi,Hi,Ci} for the inhibitory dipole. Using a bit string representation of the chromosome, we can calculate the length L of the chromosome as:
where W and H are the width and height of the training window. To reduce the generation of non-valid features, we restrict the maximum size of each dipole to half of the training window.
Figure 5. Codification of a dissociated dipole using a bit string.
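The following Python sketch uses a fixed, hypothetical number of bits per field (an assumption, not the paper's exact layout) only to illustrate how a bit string maps to the ten dipole parameters of Figure 5.

```python
def decode_chromosome(bits, field_bits=5):
    """Decode a bit string into {Xe, Ye, We, He, Ce, Xi, Yi, Wi, Hi, Ci}.
    field_bits is an assumed, fixed width per field (hypothetical layout)."""
    fields = ["Xe", "Ye", "We", "He", "Ce", "Xi", "Yi", "Wi", "Hi", "Ci"]
    assert len(bits) == field_bits * len(fields)
    values = {}
    for k, name in enumerate(fields):
        chunk = bits[k * field_bits:(k + 1) * field_bits]
        values[name] = int("".join(str(b) for b in chunk), 2)
    return values

# usage: a random chromosome of 50 bits
# import random; bits = [random.randint(0, 1) for _ in range(50)]
# params = decode_chromosome(bits)
```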
4.2. Evaluation function
This function provides a measure of performance with respect to a particular chromosome, and it is the function that the genetic algorithm tries to minimize. As we use the genetic algorithm as a WeakLearner, the function to minimize is the weighted classification error. Therefore, the evaluation function applies the feature represented by a certain chromosome to all the samples and calculates the weighted classification error.
Given a sample set Ω = {(xi, yi) | i = 1, …, L}, where yi ∈ {−1, 1} is the label of xi, a chromosome C (a bit string as defined above) and the weight distribution W over the samples, the evaluation function ξ : C × Ω × W → ℝ can be expressed as:
where C(xi) is the value obtained when the feature represented by the chromosome C is applied to the sample xi.
5. Normal Approximation
In order to speed up the WeakLearner algorithm, we propose to approximate the values of each feature by two normal distributions, one for the positive samples and the other for the negative samples (see Fig. 6). This estimation can be done in polynomial time and avoids the calculation of the threshold and polarity values: once the two normals are estimated, a sample is classified by evaluating each distribution at the feature value of the sample and picking the class with the higher value.
This approach not only speeds up the learning process, but also allows the use of online methods such as those presented by Oza in [8], in the sense that these methods require incremental weak classifiers, and now we have them.
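A minimal Python sketch of this normal-approximation weak classifier follows (ours); it assumes weighted estimates of the two Gaussians and a small variance floor to avoid degenerate distributions.

```python
import numpy as np

def fit_normal_classifier(values, y, w, eps=1e-6):
    """Fit one weighted Gaussian per class to the feature values of the samples."""
    params = {}
    for cls in (+1, -1):
        m = (y == cls)
        wm = w[m] / w[m].sum()
        mu = np.sum(wm * values[m])
        var = np.sum(wm * (values[m] - mu) ** 2) + eps   # variance floor
        params[cls] = (mu, var)
    return params

def classify(params, v):
    """Pick the class whose Gaussian density is higher at the feature value v."""
    def density(mu, var):
        return np.exp(-(v - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return +1 if density(*params[+1]) >= density(*params[-1]) else -1
```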
Figure 6. Approximation of the values of three features by pairs of normal distributions.
6. Results
In order to demonstrate the discriminative power of the dissociated dipoles and the usefulness of our approach, we train cascades using these features and Adaboost with the genetic WeakLearner. Using the face images from the Caltech database as positive samples and patches from the Corel database as negative samples, we build two sample sets, each one with 214 face images and 642 no-face images. The first one is used as a training set, and new no-face images are added at each step of the cascade in order to replace the negative samples discarded by the previous stages. The second set is used as a test set and remains invariant during the whole process. The parameters of the training algorithm are set to 0.995 for the hit ratio and 0.5 for the false alarm ratio. The first experiment uses only dissociated dipoles on the gray-scale images, while the second experiment uses the color version of these features. Finally, we repeat the same experiment using color and gray-scale features together. The results are shown in Tables 1, 2 and 3, respectively.
Gray-scale features
Stage | Cascade num. features | Cascade HR | Cascade FA | Train num. samples | Test HR | Test FA
0 | 0 | 100.0% | 100.0% | 642 | 100.0% | 100.0%
1 | 4 | 99.5% | 35.4% | 1,815 | 98.6% | 32.9%
2 | 5 | 99.1% | 16.8% | 3,822 | 98.6% | 13.6%
3 | 6 | 99.1% | 8.7% | 7,418 | 98.1% | 7.2%
4 | 5 | 98.6% | 3.7% | 17,457 | 96.3% | 3.4%
5 | 5 | 98.6% | 1.7% | 38,246 | 94.4% | 1.6%
6 | 6 | 98.6% | 0.7% | 86,453 | 93.5% | 1.1%
7 | 10 | 98.6% | 0.4% | 179,013 | 92.5% | 0.6%
Table 1. Hit ratio (HR) and false alarm (FA) evolution during the training process using gray-scale features. The number of patches needed to complete the negative sample set is shown in the Train column. This table also shows the number of features used in each step.
From the results obtained in our experiment, we see that the resulting method allows us to learn a cascade of detectors with a high discriminative power. Moreover, if we look
at the generated features, for instance at the features selected in the first two stages of the experiment of Table 1, shown in Fig. 8, most of the selected features are not contiguous regions; therefore, the dissociated dipoles are a more discriminant representation of the image. Our results also show that the color information helps to reduce false positives, while the gray-scale information seems to help maintain a good detection rate. We suppose that this is owing to the constancy of the values: when we use color information, the intra-object variation is higher than the gray-scale variation. In addition, the use of color reduces the number of necessary features in each level of the cascade. The reduced number of features in the final detector guarantees the real-time capability of the detection process, which can be done with only 8 accesses to the integral image per feature.
Color features
Stage | Cascade num. features | Cascade HR | Cascade FA | Train num. samples | Test HR | Test FA
0 | 0 | 100.0% | 100.0% | 642 | 100.0% | 100.0%
1 | 4 | 99.5% | 28.2% | 2,279 | 97.7% | 30.2%
2 | 3 | 99.1% | 6.8% | 9,421 | 93.9% | 7.2%
3 | 3 | 99.1% | 3.0% | 21,355 | 93.9% | 2.0%
4 | 4 | 98.6% | 1.3% | 48,061 | 93.5% | 0.3%
5 | 6 | 98.1% | 0.6% | 110,971 | 93.0% | 0.3%
6 | 4 | 98.1% | 0.3% | 221,063 | 93.0% | 0.2%
7 | 3 | 97.7% | 0.1% | 437,431 | 90.7% | 0.0%
Table 2. Hit ratio (HR) and false alarm (FA) evolution during the training process using color features. The number of patches needed to complete the negative sample set is shown in the Train column. This table also shows the number of features used in each step.

Gray-scale and color features
Stage | Cascade num. features | Cascade HR | Cascade FA | Train num. samples | Test HR | Test FA
0 | 0 | 100.0% | 100.0% | 642 | 100.0% | 100.0%
1 | 3 | 99.5% | 39.8% | 1,613 | 99.5% | 41.7%
2 | 3 | 99.5% | 20.5% | 3,133 | 99.5% | 21.3%
3 | 4 | 99.1% | 6.7% | 9,575 | 97.2% | 6.1%
4 | 5 | 98.6% | 2.6% | 24,237 | 97.2% | 3.1%
5 | 5 | 98.6% | 0.5% | 128,686 | 94.9% | 0.6%
6 | 6 | 98.1% | 0.2% | 374,828 | 93.9% | 0.2%
7 | 10 | 98.1% | 0.1% | 747,670 | 93.0% | 0.0%
Table 3. Hit ratio (HR) and false alarm (FA) evolution during the training process using gray-scale and color features. The number of patches needed to complete the negative sample set is shown in the Train column. This table also shows the number of features used in each step.
Figure 7. Dissociated dipoles of the first stage of the color and gray-scale cascade. Under each picture, the channels of the excitatory (+) and inhibitory (−) dipoles are indicated.
Figure 8. Dissociated dipoles of the first two stages of the gray-scale cascade.
Figure 9. Some of the faces missed by the gray-scale and color features cascade.
7. Conclusions and future work
We have presented a variation of the Adaboost algorithm using an evolutionary WeakLearner, which removes the computational bottleneck of the currently most used object detection scheme. Our proposal also allows the use of high-dimensional feature spaces, such as dissociated dipoles. In this paper we have presented an extension of the dissociated dipoles to the color space and used them to verify the usefulness of our method to train a detection cascade. As future work, we want to replace the genetic algorithm by other evolutionary strategies which allow a more intelligent evolution, and to add extra problem-dependent knowledge.
Acknowledgements
This work has been partially supported by MCYT grant TIC2003-00654, Spain. It has been developed within a project in collaboration with the "Institut Cartogràfic de Catalunya" under the supervision of Maria Pla.
References
[1] S. Ullman, High-level Vision: Object Recognition and Visual Cognition, ser. A Bradford Book. New York: The MIT Press, 1996.
[2] P. Viola and M. Jones, “Robust real-time object detection,” International Journal of Computer Vision, to appear, 2002.
[3] X. Baró and J. Vitrià, “Feature selection with non-parametric mutual information for adaboost learning,” Frontiers in Artificial Intelligence and Applications / Artificial Intelligence Research and Development, IOS Press, Amsterdam, October 2005.
[4] B. McCane and K. Novins, “On training cascade face detectors,” Image and Vision Computing, pp. 239–244, 2003.
[5] R. Lienhart and J. Maydt, “An extended set of Haar-like features for rapid object detection,” Proc. of the IEEE Conf. on Image Processing, pp. 155–162, 2002.
[6] Y. Freund and R.E. Schapire, “Experiments with a new boosting algorithm,” in International Conference on Machine Learning, 1996, pp. 148–156.
[7] Balas and Sinha, “STICKS: Image-representation via non-local comparisons,” Journal of Vision, vol. 3, no. 9, 2003.
[8] N. Oza and S. Russell, “Online bagging and boosting,” in Artificial Intelligence and Statistics 2001. Morgan Kaufmann, 2001, pp. 105–112.
Support Vector Machines for color adjustment in automotive basecoat
Francisco RUIZ*, Cecilio ANGULO** and Núria AGELL*
* Fundació ESADE, Avda. Pedralbes, 60-62, Barcelona, Spain
** Universitat Politecnica de Catalunya, Avda. V. Balaguer S/N, 08800 Vilanova i la Geltru, Spain
Abstract. Traditionally, Computer Colorant Formulation has been implemented using a theory of radiation transfer known as the Kubelka-Munk (K-M) theory. In recent studies, Artificial Neural Networks (ANNs) have been put forward for dealing with color formulation problems. This paper investigates the ability of Support Vector Machines (SVMs), a particular machine learning technique, to help the color adjustment process in the automotive industry. Imitating ‘color matcher’ employees, SVMs based on a standard Gaussian kernel are used in an iterative color matching procedure. Two experiments were carried out to validate our proposal, the first considering objective color measurements as output in the training set, and a second where expert criterion was used to assign the output. The comparison of the two experiments reveals some insights about the complexity of the color adjustment analysis and suggests the viability of the method presented.
Keywords. Artificial Intelligence, Decision Support Systems, Support Vector Machine, Color adjustment, Color formulation
1. Introduction
Automated software tools used for color matching in the color industry are based on the theory developed by Kubelka and Munk [1] in the 1930s. Their approach considers pigment particle scattering and absorption interaction at a global level instead of at a particle level. Using this approach, Kubelka and Munk proposed the equations that are still used in the pigment industry. According to their theory, each colorant contributes to the absorption and scattering of the material, and its contribution is proportional to its amount in the system multiplied by an absorption and a scattering coefficient. These equations are based on non-exact assumptions, hence the results obtained do not completely agree with the real results at the precision level that is nowadays required by the automotive basecoat industry. In addition, this theory uses pigment absorption and scattering coefficients that are difficult to obtain and may vary significantly from one pigment batch to another.
In the automotive industry two differentiated paint manufacturing tasks can be distinguished: the formulation task and the adjustment task. Formulation is the process of finding an appropriate set of pigments and their proportions in order to produce a targeted color. The set and the proportions are normally obtained from a near, but different, previously created color. Adjustment is the process of tuning a certain color, previously obtained from a batch, to achieve the desired precision with a new batch of pigments. Formulation is implemented once in a ‘color life’, whereas adjustment must be done whenever a new batch is used. The final adjustment task is at least as expensive as color formulation, requiring a lot of human and time resources. Although many efforts have been made to automate the formulation process, the adjustment process has always been considered as inevitably manual.
Decision Support Systems (DSS) based on artificial intelligence techniques have in recent years demonstrated their efficiency in situations in which there is no
exhaustive information on the process [2]. Such systems start off from an empirical and partial knowledge of this process, and allow the extraction of regularities and predictions in unknown situations. In color matching, commercial soft-computing DSS tools have been developed to help in the task of color formulation. For instance, FormTools [3,4] and ColorXpress Select [5], based on the Case-Based Reasoning (CBR) methodology, have been developed at General Electric Plastics. Both tools provide effective management of color formulation and adjustment databases, as well as easy integration of these databases with measurement instruments, usually spectrophotometers, through a two-phase procedure. Firstly, a personal computer containing both the case base and the CBR software searches previous matches for the best match by using nearest neighbor retrieval. Next, previous matches are iteratively adjusted by software to produce matches for the new standard. This software is based on the equations of Kubelka-Munk, among whose suppositions is that the absorption and scattering proportions of a pigment mixture are a linear combination of the proportions of each pigment separately. As well as the difficulty in measuring the parameters involved, these assumptions make the use of this tool difficult for final color adjustment. The precision level improves if the database is large. For example, General Electric Plastics currently has more than 50,000 previously matched colors on file [4] for color matching in plastics. However, in other areas such as car painting in the automotive industry, no such large databases exist, and it would take a long time to build them due to cost and time considerations. Therefore the precision nowadays required by the automotive color industry cannot be achieved with existing tools.
This work presents an innovative procedure using Support Vector Machines as a decision-making tool in the task of color adjustment in the automotive industry. This kind of soft computing technique has been successfully used for complex problems similar to the one described, allowing the extraction of knowledge from databases and the prediction of results in new situations. Two experiments, specifically designed for this project and based on the iterative procedure used by color matchers in color adjustment, were developed to validate our proposal: the first considering objective CIELab color measurements as output in the training set, and the second using expert criterion to assign the output to the training patterns. The study was conducted over a period of six months in 2005 at the PPG Ibérica S.A. installations, thanks to the collaboration between personnel of the company and researchers of [temporarily deleted in blind version]. Results from the two experiments reveal advantages in using the expert criterion over the criterion based on the objective measure. The study carried out suggests the viability of both criteria in the construction of an automatic tool for the color adjustment task.
In the next section, the color adjustment problem is presented and the basics of colorimetric measurement are introduced: the so-called pigmentary and colorimetric spaces. Section 3 gives a brief introduction to SVMs. Sections 4 and 5 are devoted to the description of the experiment performed, an analysis of the results obtained and the discussion. Finally, Section 6 presents final conclusions and future tasks related to the research line followed in this work.
2. Presentation of the problem
Nowadays an acceptable quality level has been reached in the mechanical properties of automotive paints. For this reason, decorative properties have become top priority. Among these, color stands out above the others because it involves different
parts of the automobile, coming from different suppliers. Automobile manufacturers handle a range of colors for their products that is periodically modified due to the incorporation of new components and to follow new fashion trends. Paints are ordered from specialist companies, which have to find a recipe for a color from a simple sample chip. These companies have a certain number of recipes in their databases for the colors they normally provide.
The task of color matchers is semi-automated in a two-step procedure. In the first step, they use a spectrophotometer to read the color sample submitted by the customer; next, a software tool based on the nearest neighbor technique selects similar already created colors from the stored database. The second step is performed manually and iteratively by the employee: first he or she selects one of the formulae offered by the software and adapts the percentage of one or more pigments. Next, a small piece is painted with this color and dried. Finally, matching with the customer's color standard is assessed. If it is not achieved, the manual second step is iterated; once the color match is made, the data are saved.
Even for previously manufactured color formulae, differences between pigments acquired in different batches produce inevitable irregularities in the paint production process. It is therefore normally not possible to directly obtain the targeted color from an old checked formula, so a manual color adjustment must be implemented with no automated help. The procedure starts by producing an initial color, both for colors that are already known (adjustment) and for those never seen before (matching), with pigment percentages lower than those in the recipe, i.e. with the paint vehicle (binder and solvent) in a higher proportion. From this initial color the fine adjustment is performed under the supervision of an expert. He or she is able to decide from experience and intuition which pigment must be added, and in what proportion, in order to adjust the initial color to the target. This process takes place in an iterative experimentation that unfortunately sometimes ends with a non-recoverable color. In most cases, final proportions are appreciably different from the initial ones.
The color adjustment process represents a significant cost for these companies, as much in human time and resources as in customer satisfaction. In addition, the training of color matchers is complicated due to the difficulty of transmitting intuitive knowledge.
In the color adjustment context, two sets of variables must be considered. Input variables are those involved in the recipe, that is, pigments and their use percentages (pigmentary space); output variables are those involved in the colorimetric coordinates of the color obtained (colorimetric space). Knowledge of the mapping from colorimetric to pigmentary space, allowing the matching of the colorimetric coordinates of a color to its recipe, would be sufficient to solve the problem of color adjustment, with differences between batch pigments being dealt with as noise. Since this is too complex to obtain, a less exhaustive knowledge solution based on SVMs is presented here, designed following the color matcher procedure.
Figure 1. CIELab and CIELCh coordinates.
2.1. Paints and pigments. The pigmentary space
Paints are natural or artificial substances, generally organic, that form a continuous and adherent film on the surface of an object. In the automotive industry, paints and coatings are currently applied to protect vehicles from environmental corrosion and to improve their consumer appeal. Paint is made up fundamentally of a binder, pigments and solvents. The non-volatile vehicle, often called binder, is a resin or polymer that actually forms the film of the finished paint product. Pigment particles are insoluble and only form a suspension in the binder. Pigments in paint serve several purposes: to hide the surface on which they are applied, to provide a decorative effect through the particular color of the paint film, and to provide durability of the surface. Solvents are the part of the paint or coating product that evaporates. Their role is to keep the paint in liquid form for ease of application. Once applied to the surface they evaporate, leaving a uniform film which then dries to form a protective coating. Only the pigments and the binder are permanent components of paint, and only pigments significantly influence the definitive color of the surface. The exact proportions of pigments in a paint constitute the color recipe. If k pigments take part in a specific color, a (k−1)-dimensional space will be considered. This is known as the pigmentary space. Each element or point of this space represents a different color recipe.
2.2. Colorimetric space and color measure
Several numeric specifications for color can be found in the literature. The most classic and internationally accepted of these is that based on tristimulus values, first RGB and later XYZ, proposed by the Commission Internationale de l'Eclairage (CIE) in 1931. In 1976, the CIE proposed the CIE L*a*b* color scale, based directly on the CIE 1931 XYZ color space, as an attempt to linearize the perceptibility of color differences [6]. CIE L*a*b* (CIELab) is the most complete color model used conventionally to describe colors visible to the human eye. Its three parameters represent the luminance of the color (L), its position between red and green (a) and its position between yellow and blue (b). Some years later, the CIE adopted revisions to the L*a*b* calculations which led to the L*C*h* color tolerance model (also known as CIELCh). This uses the basic CIELab information, but presents the graphical information with a focus on chroma (C) and hue (h), which may be visually easier to understand than typical CIELab graphical presentations. CIELCh converts the CIELab linear coordinates into (C, h) polar coordinates, L remaining as the lightness/darkness coordinate. Figure 1 represents geometrically the two color representation systems. Once the L*a*b* or L*C*h*
position of a standard color has been determined, a tolerance ellipse can be drawn around this point for visual acceptability, indicating colors that cannot be distinguished from the standard color by the human eye. The spectrophotometer is the most accurate type of instrument used for color measurement. It measures individual wavelengths and then calculates L*a*b* or L*C*h* values from this information. This instrument offers robust measures for all angles and standard lighting conditions. Spectrophotometers are widely used in industrial color applications as objective color-measuring instruments, providing a way to consistently quantify color differences and thus facilitate the management of product color tolerance. The colorimetric space allows the characterization of a color through a set of numbers, such as Lab or LCh, that are denominated colorimetric coordinates. No colorimetric space is yet accepted by color matchers as a unique objective color measure, therefore visual evaluation is still necessary to establish appreciable differences between colors. New color spaces are being defined in the literature [7] attempting to capture human vision features.
3. Support Vector Machines
Support Vector Machines [8] are a new class of learning algorithms that combine a strong theoretical motivation in statistical learning theory, optimization techniques, and the kernel mapping idea. SVMs have been used to automatically categorize web pages, recognize handwritten digits and faces, identify speakers, interpret images, predict time series, estimate prices and more. Their accuracy is excellent, and in many cases they outperform competing machine learning methods such as neural networks and radial basis function networks due to an excellent generalization performance in practice [9]. In classification tasks, SVMs search for an optimal hyperplane separating the patterns, represented by their features, maximizing the margin between the hyperplane and the nearest patterns. When patterns are not linearly separable, kernel functions are used to deal with an implicit feature space where linear classification is possible, returning the inner product between the mapped data points in a higher-dimensional space. The hyperplane in the feature space corresponds to a nonlinear decision boundary in the input space. The most used kernel is the Gaussian kernel,
K(x, x') = exp(−‖x − x'‖² / (2σ²)),   (1)
where the parameter σ is strictly positive. The corresponding feature space for this kernel is an infinite-dimensional Hilbert space; however, maximum margin classifiers are well regularized, so the infinite dimension does not spoil the results. This kind of machine learning technique is particularly efficient, and more competitive than other methods, when only a small number of sample patterns is available. This is the case in this work, where obtaining sample patterns is expensive due to the cost of the process as a whole: weighing pigments, applying and drying paints, and measuring color with the appropriate device. In this work, a bi-class SVM formulation defined by a Gaussian kernel has been considered.
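The authors performed their computations in R with the e1071 package (Section 4); purely as an equivalent illustration, the following Python sketch (scikit-learn, not the authors' code) trains a bi-class SVM with a Gaussian kernel and estimates its accuracy by leave-one-out cross-validation, the procedure described in Section 5. Here gamma plays the role of 1/(2σ²).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def loo_accuracy(X, y, sigma, C):
    """Leave-one-out accuracy of a Gaussian-kernel SVM."""
    clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2), C=C)
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
    return scores.mean()

# usage with hypothetical data: X has 7 columns (L*, a*, b* of the initial color
# and the four pigment increments), y is the good/bad label
# X = np.random.rand(188, 7); y = np.random.choice([0, 1], size=188)
# for C in (1, 100, 10000):
#     print(C, loo_accuracy(X, y, sigma=1.0, C=C))
```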
4. Experiment description
In the process of color adjustment, many factors can produce colorimetric changes: the form of application, the thickness of the painting layer, ambient humidity, temperature, drying time, and so on. The more complex the pigmentary formula, the more easily these factors will influence the color observed. In order to diminish this effect, a color without sophisticated visual effects (aluminum and micas) was selected. The experiment was designed on a particular zone of the color spectrum inhabited by red colors. According to expert colorists, color adjustment in this zone is suitable for valid experiment analysis and discussion. The target color is called Coral Red, and is composed of four different pigments in its initial formula.
Experimentation was performed under real industrial conditions for color adjustment at the Research and Development Department of PPG Ibérica in Valladolid (Spain). Experimental ‘applying’ conditions were fixed by defining the kind of substrate, the percentage of uncolored components, the thickness of the painting layer, the form of application, the drying time and the temperature. Color measurement conditions were also defined: measurement device, illuminating device and measurement angle. Tests were carried out to cover the pigmentary space for training a binary SVM.
For industrial purposes, the color adjustment process is always developed from a so-called initial color formula corresponding to a color similar to the target, but not the exact stored formula. This initial formula is built from all the necessary pigments with percentages lower than 100% of the complete formula, i.e. uncolored components like solvent and binder are present in a higher proportion than desirable. The initial formula produces an initial color similar to, but usually different from, the target color. Next, the initial color is adjusted by the expert supervisor to achieve a color closer to the target. The adjustment procedure takes place in several steps, with the expert adding one or more pigments to the earlier formula at each step. The colorimetric coordinates of the original and final colors are measured by a spectrophotometer.
Following this protocol and beginning with several initial colors, a set of training patterns is obtained in which only one of the four composing pigments is added in a fixed amount, i.e. from each initial color, four separate adjustments are made. The nearest of these four new colors to the target Coral Red color, according to the CIELab Euclidean distance, is used to obtain four more patterns, and so on. When none of the new colors obtains a lower distance from the target than the earlier color, the fixed amount of pigment is reduced. Each series of four tests is accompanied by a confirmation test of the best result of the previous series, therefore ensuring equality of conditions of temperature and humidity. Thus, a set of training patterns is completed.
Each of the training patterns is composed of seven features, the colorimetric coordinates of the initial color (L*i, a*i, b*i) and a vector in the pigmentary space that represents the adjustment made on a certain pigment (Δp1, Δp2, Δp3, Δp4). A Boolean class {good, bad} is associated with each pattern according to whether the color obtained after adjustment is more similar to the target color than the initial color before adjustment (see the sketch below).
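As a concrete illustration of the pattern construction under the objective CIELab criterion, a small Python sketch follows (ours; the plain Euclidean distance is used as the similarity measure, and all numeric values in the usage comment are hypothetical).

```python
import numpy as np

def delta_e(lab1, lab2):
    """Plain Euclidean distance between two CIELab points."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

def make_pattern(lab_initial, dp, lab_adjusted, lab_target):
    """Build the 7-feature pattern (L*, a*, b* of the initial color plus the four
    pigment increments) and its good/bad label under the CIELab criterion."""
    features = list(lab_initial) + list(dp)          # (L*, a*, b*, dp1, dp2, dp3, dp4)
    closer = delta_e(lab_adjusted, lab_target) < delta_e(lab_initial, lab_target)
    return features, ("good" if closer else "bad")

# usage with hypothetical measurements:
# make_pattern((41.2, 52.3, 28.9), (0.0, 0.5, 0.0, 0.0), (41.0, 53.1, 29.4), (40.8, 54.0, 29.0))
```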
Table 1. Distribution of patterns in classes according to both labelings, expert-based and CIELab-based.
In order to decide whether a certain pattern belongs to the good or bad class, two different similarity criteria were used: a subjective criterion based on expert opinion, and a second criterion based on the CIELab measurements, without the intervention of the color matcher. Under this second criterion, a pattern is considered good if the Euclidean distance measured in the colorimetric space after adjustment is lower than the distance before adjustment. For experimentation purposes, a set of 188 patterns was drawn up and assigned to one of two objective classes (closer to or further away from the target in colorimetric space) in a first experiment, and, for a second experiment based on expert advice, to one of three subjective classes (suitable for reaching the target, not suitable, and indifferent). The third class was defined because for some patterns the experts cannot tell whether or not they are suitable for reaching the target. All the computations were performed using the R computing language [10] and the e1071 package available from the CRAN repository (cran.r-project.org).

5. Analysis and discussion of the results

Table 1 shows the distribution of patterns in the objective and subjective classes. For most, but not all, of the patterns the two criteria agree. Disagreements are due to the different notion of distance in colorimetric space taken into consideration by the experts. The SVM was trained with a standard Gaussian kernel, considering several values for the width σ. It was observed during the experimentation that small variations in this parameter did not have a significant influence on the results. Several values of the regularization parameter C were also tested. In order to evaluate the accuracy of the classification obtained by the algorithm, the leave-one-out (loo) cross-validation technique was used. Tables 2 and 3 show the percentage of success for varying values of the parameters σ and C under the two criteria, subjective and objective. With the expert criterion about 90% success was reached, whereas with the objective criterion based on CIELab measurements, success was rather more than 80%. The higher percentage of success using the subjective criterion with such a small training set suggests that the criterion used by the color matcher is easier to learn, and more acceptable, than the Euclidean distance in the CIELab space. After discussions with color matchers, it can be concluded that expert opinion is preferable as output because it captures weighting relations between the L, a, b measurements in the colorimetric space that the Euclidean measurement cannot. In this sense, it is well known that the zone of colors that is not distinguishable from a target color is not a sphere but an ellipsoid, so a weighted Euclidean distance would be more suitable for comparisons in colorimetric space than the standard distance. Experts are capable of implicitly inferring these weights, simplifying the search space, whereas the learning machine cannot do so from this database.
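A minimal R sketch of the evaluation protocol described above, using the e1071 package named in the text. The data-frame name `patterns` and the grid values for σ and C are assumptions for illustration; only the RBF-kernel SVM and the leave-one-out scheme come from the paper.

```r
# Sketch, assuming a data frame 'patterns' with the seven features and a
# factor column 'class' holding either the expert or the CIELab labelling.
library(e1071)

loo_accuracy <- function(data, sigma, C) {
  hits <- 0
  for (i in seq_len(nrow(data))) {
    m <- svm(class ~ ., data = data[-i, ], type = "C-classification",
             kernel = "radial", gamma = 1 / (2 * sigma^2), cost = C)
    hits <- hits + (predict(m, data[i, , drop = FALSE]) == data$class[i])
  }
  100 * hits / nrow(data)
}

# Grid over the kernel width sigma and the regularisation parameter C
# (these grid values are illustrative, not those of the paper).
for (sigma in c(0.5, 1, 2))
  for (C in c(1, 100, 10000))
    cat(sprintf("sigma = %4.1f  C = %7g  success = %.1f%%\n",
                sigma, C, loo_accuracy(patterns, sigma, C)))
```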
Table 2. Percentages of success using objective criterion.
Table 3. Percentages of success using expert criterion.
Another analysis supporting the above conclusion comes from the geometrical interpretation of the regularization parameter C. When the objective criterion was used as output, the best accuracy was obtained for C = 1, indicating poor generalization and hence overfitting, whereas when using the subjective criterion, the best results were obtained for C > 10000. Finally, it is important to realize that these percentages of success correspond to a single step of the iterative color adjustment process. A success rate of 90% in one step only produces a slight increase in the number of steps necessary to obtain the final color; since the process is made up of several steps, the percentage of success for the whole process is much higher.

6. Conclusions and further work

Adjusting or matching colors involves an expensive experimentation procedure for the paint production industry. Companies require continuous subjective advice from expert color matchers, whose training presents serious difficulties. The work described in this paper represents an innovative approach to the resolution of the color adjustment problem using SVM, with two different labeling approaches. The results obtained encourage the design of a more complex decision-support application based on SVM, with experts carrying out a continuous training process and the color adjustment task proposed for the learning machine. In the near future, this application will automatically propose vectors in the pigmentary space that adjust the initial color to the target color. In addition, multiclassification SVM and Qualitative Reasoning techniques such as Order of Magnitude
tools will be used to improve the resolution of the problem. The present tool could therefore have a practical application at the production level, given the number of colors that are handled. In the meantime, an extensive experimentation program will be maintained to increase the number of patterns for learning. The results obtained with the chosen color encourage work with more complex colors with visual effects, and therefore with a higher number of parameters to measure.

Acknowledgments

The authors would like to thank the Research and Development Department of PPG Ibérica in Valladolid (Spain) for providing the research facilities used in this study. The authors also wish to gratefully acknowledge the encouragement given by Mr. Luis Ibáñez, PPG Ibérica's General Director.

References
[1] P. Kubelka, F. Munk, Ein Beitrag zur Optik der Farbanstriche, Zeitschrift für technische Physik 12 (1931) 593–601.
[2] J. Bishop, M. Bushnell, S. Westland, Computer recipe prediction using neural networks, in: T. Addis, R. Muir (Eds.), Research and Development in Expert Systems VII. Proceedings of the 10th Annual Technical Conference of the BCS Specialist Group, British Computer Society Conference Series, Cambridge University Press, 1990, pp. 70–78.
[3] W. Cheetham, J. Graf, Case-based reasoning in color matching, in: D.B. Leake, E. Plaza (Eds.), Case-Based Reasoning Research and Development, Second International Conference, ICCBR-97, Providence, Rhode Island, USA, Vol. 1266 of Lecture Notes in Computer Science, Springer, 1997, pp. 1–12.
[4] W. Cheetham, Tenth anniversary of the plastics color formulation tool, AI Magazine 26 (3) (Fall 2005) 51–61.
[5] W. Cheetham, Benefits of case-based reasoning in color matching, in: D.W. Aha, I. Watson (Eds.), Case-Based Reasoning Research and Development, 4th International Conference on Case-Based Reasoning, ICCBR 2001, Vancouver, BC, Canada, Vol. 2080 of Lecture Notes in Computer Science, Springer, 2001, pp. 589–596.
[6] F. Billmeyer, Principles of Color Technology, Second Edition, John Wiley & Sons, New York, 1981.
[7] M. Seaborn, L. Hepplewhite, J. Stonham, Fuzzy colour category map for the measurement of colour similarity and dissimilarity, Pattern Recognition 38 (2) (2005) 165–177.
[8] B.E. Boser, I. Guyon, V. Vapnik, A training algorithm for optimal margin classifiers, in: Computational Learning Theory, 1992, pp. 144–152.
[9] N. Cristianini, J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-based Learning Methods, Cambridge University Press, 2000.
[10] R Development Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria, ISBN 3-900051-07-0 (2005). URL http://www.R-project.org
Optimal Extension of Error Correcting Output Codes

Sergio ESCALERA a, Oriol PUJOL b and Petia RADEVA a
a Centre de Visió per Computador, Campus UAB, 08193 Bellaterra (Barcelona), Spain
b Dept. Matemàtica Aplicada i Anàlisi, UB, Gran Via 585, 08007 Barcelona, Spain
Abstract. Error correcting output codes (ECOC) represent a successful extension of binary classifiers to address the multiclass problem. In this paper, we propose a novel technique called ECOC-ONE (Optimal Node Embedding) to improve an initial ECOC configuration by defining a strategy to create new dichotomies and optimally improve performance. The process of searching for new dichotomies is guided by the confusion matrices over two exclusive training subsets. A weighted methodology is proposed to take into account the different relevance of the dichotomies. We validate our extension technique on well-known UCI databases. The results show significant improvement over the traditional coding techniques with very little extra cost.

Keywords. Error Correcting Output Codes, Multiclass classification
1. Introduction

Machine learning studies automatic techniques to make accurate predictions based on past observations. There are several multiclass classification techniques: Support Vector Machines [1], multiclass Adaboost [2], decision trees [3], etc. Nevertheless, building a highly accurate multiclass prediction rule is certainly a difficult task. An alternative approach is to use a set of relatively simple sub-optimal classifiers and to determine a combination strategy that pools together their results. Various systems of multiple classifiers have been proposed in the literature; most of them use similar constituent classifiers, which are often called base classifiers (dichotomies from now on). The usual way to proceed is to reduce the complexity of the problem by dividing it into a set of multiple simpler binary classification subproblems. One-versus-one (pairwise) [4] or one-versus-all grouping voting techniques, or trees of nested dichotomies [5], are some of the most frequently used schemes. Error Correcting Output Codes [6] were born along the lines of the aforementioned techniques. ECOC represents a general framework based on a coding and decoding (ensemble strategy) technique to handle multiclass problems. One of the best-known properties of ECOC is that it improves the generalization performance of the base classifiers [7][4]. Moreover, the ECOC technique has been shown to decrease the error caused by the bias and the variance of the base learning algorithm [8]. In the ECOC technique, the multiclass-to-binary division is handled by a coding matrix. Each row of the coding matrix represents a codeword assigned to one class. On the other hand, each column of the matrix (each bit of the codewords) defines a partition of the classes into two sets. The ECOC strategy is divided in two parts: the coding part, where
the binary problems to be solved have to be designed, and the decoding technique, which, given a test sample, looks for the most similar codeword. Very little attention has been paid in the literature to the coding part of ECOC. The best-known coding strategies are one-versus-all, all-pairs (one-versus-one) and random coding. Crammer et al. [9] were the first authors to report an improvement in the design of ECOC codes. However, the results were rather pessimistic, since they proved that the problem of finding the optimal discrete codes is computationally unfeasible because it is NP-complete. Specifically, they proposed a method to heuristically find the optimal coding matrix by changing its representation from discrete to continuous values. Recently, new improvements in problem-dependent coding techniques have been presented by Pujol et al. [10]. They propose the embedding of discriminant tree structures in the ECOC framework, showing high accuracy with a very small number of binary classifiers; still, the maximal number of dichotomies is bounded by the number of classes to be analyzed.

In this article, we introduce ECOC Optimal Nodes Embedding (ECOC-ONE), which can be considered as a general methodology for increasing the performance of any given ECOC coding matrix. ECOC-ONE is based on a selective greedy optimization over the confusion matrices of two exclusive training data sets. The first set is used for standard training purposes and the second one for guiding and validation, avoiding classification overfitting. As a result, wrongly classified classes are given priority and are used as candidate dichotomies to be included in the matrix in order to help the ECOC convergence. Our procedure creates an ECOC code that correctly splits the classes while keeping a reduced number of classifiers. Besides, we compare our optimal extension with other standard state-of-the-art coding strategies applied as coding extensions.

2. Error Correcting Output Codes

The basis of the ECOC framework is to create a codeword for each of the Nc classes. Arranging the codewords as rows of a matrix, we define the coding matrix M, where M ∈ {−1, 1}^(Nc×n), n being the code length. From the point of view of learning, the matrix M represents n binary learning problems (dichotomies), each corresponding to a matrix column. Joining classes in sets, each dichotomy defines a partition of the classes (coded by +1 or −1 according to their class membership). Applying the n trained binary classifiers, a code is obtained for each data point in the test set. This code is compared to the base codewords of each class defined in the matrix M, and the data point is assigned to the class with the "closest" codeword. The matrix values can be extended to the trinary case, M ∈ {−1, 0, 1}^(Nc×n), where a zero indicates that a particular class is not considered by a given dichotomy.

To design an ECOC system, we need a coding and a decoding strategy. When the ECOC technique was first developed, it was believed that the ECOC code matrices should be designed to have certain properties in order to generalize well. A good error-correcting output code for a k-class problem should satisfy that rows, columns (and their complementaries) are well separated from the rest in terms of Hamming distance. Most of the discrete coding strategies up to now are based on pre-designed, problem-independent codeword constructions satisfying the requirement of high separability between rows and columns. These strategies include one-versus-all, which uses Nc dichotomies; random techniques, with an estimated length of 10 log2(Nc) bits per code for Dense random and 15 log2(Nc) for Sparse random [4]; and one-versus-one, with Nc(Nc−1)/2 dichotomies [11].
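A brief R sketch of how these standard codings can be built; the helper names are ours, and the output of the first call mirrors the four-class toy example discussed below.

```r
# Sketch: building the standard discrete coding matrices for Nc classes
# (rows = class codewords, columns = dichotomies).
one_vs_all <- function(Nc) {
  M <- matrix(-1, Nc, Nc, dimnames = list(paste0("c", 1:Nc), NULL))
  diag(M) <- 1
  M
}

one_vs_one <- function(Nc) {
  pairs <- combn(Nc, 2)                      # Nc(Nc-1)/2 dichotomies
  M <- matrix(0, Nc, ncol(pairs), dimnames = list(paste0("c", 1:Nc), NULL))
  for (k in seq_len(ncol(pairs))) {          # trinary code: 0 = class not used
    M[pairs[1, k], k] <-  1
    M[pairs[2, k], k] <- -1
  }
  M
}

one_vs_all(4)   # the 4 x 4 coding matrix of the toy problem in Fig. 1
one_vs_one(4)   # a 4 x 6 trinary coding matrix
```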
Figure 1. Coding matrix M for a four-class one-versus-all toy problem. A new test sample with codeword X is classified to the class c4 of minimal distance using the Hamming distance.

The last-mentioned strategy has become very popular, showing better accuracy than the other strategies discussed. These traditional coding strategies are based on a prior division into subsets of classes, independent of the problem to be solved. The decoding step was originally based on error-correcting principles, under the assumption that the learning task can be modeled as a communication problem in which class information is transmitted over a channel [12]. The decoding strategy corresponds to the problem of estimating the distance between the codeword of the new example and the codewords of the trained classes. Concerning the decoding strategies, two of the most standard techniques are the Euclidean distance, dj = √(Σ_{i=1..n} (xi − yi^j)²), and the Hamming decoding distance, dj = Σ_{i=1..n} |xi − yi^j| / 2, where dj is the distance to the row of class j, n is the number of dichotomies (and thus of components of the codeword), and x and y are the values of the input codeword and the base class codeword, respectively. If the minimum Hamming distance between any pair of class codewords is d, then up to [(d−1)/2] errors in the individual dichotomy results can be corrected, since the nearest codeword will still be the correct one.

In fig. 1 an example of a coding matrix M for a one-versus-all toy problem is shown. The problem has four classes, and each column represents its associated dichotomy. The dark and white regions are coded by −1 and 1, respectively. The first column h1 represents the training of {c1} vs {c2,c3,c4}, and so on. A new test input is evaluated using the dichotomies h1,…,h4 and its codeword X is decoded using the Hamming distance (HD) between each row of M and X. Finally, the new test input is classified into the class of minimum distance (c4, in this case).
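The toy decoding can be reproduced with a short R sketch; the matrix below is the one described for Fig. 1 and the function names are ours.

```r
# Decoding sketch for the toy problem of Fig. 1: each row of M is a class
# codeword and the test codeword X is assigned to the closest row.
hamming_dist <- function(x, y) sum(abs(x - y) / 2)   # disagreeing bits for {-1,+1} codes

decode <- function(M, x, d_fun = hamming_dist)
  rownames(M)[which.min(apply(M, 1, d_fun, y = x))]

M <- rbind(c1 = c( 1, -1, -1, -1),    # one-versus-all coding matrix of Fig. 1
           c2 = c(-1,  1, -1, -1),
           c3 = c(-1, -1,  1, -1),
           c4 = c(-1, -1, -1,  1))
X <- c(-1, -1, -1, 1)                 # codeword predicted by the dichotomies h1..h4
decode(M, X)                          # -> "c4", the class at minimal Hamming distance
```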
3. ECOC-ONE

ECOC-Optimal Node Embedding defines a general procedure capable of extending any coding matrix by adding dichotomies based on a discriminability criterion. Given a multiclass recognition problem, our procedure starts with a given ECOC coding matrix. The initial coding matrix can be one of those previously discussed, or one generated by the user. We increase this ECOC matrix in an iterative way, adding dichotomies that correspond to different spatial partitions of subsets of classes p. These partitions are found using a greedy optimization based on the confusion matrices, so that the ECOC accuracy improves on both exclusive training subsets. Our training set is partitioned into two subsets: a training subset of examples, which guides the convergence process, and a validation subset, which leads the optimization process in order to avoid classification overfitting. Since not all problems require the same dichotomies, our optimal node embedding approach generates new dichotomies only for classes that are not yet well separable. Thus, we construct an optimal ECOC-ONE matrix dependent on the concrete domain. To explain our procedure, we divide the ECOC-ONE algorithm into three steps: optimal node estimation, weights estimation, and ECOC-ONE matrix construction. The training process, guided by the training and validation subsets, ignores a significant amount of data from the training set, which can be redundant or harmful to the learning process, and avoids overfitting [13]. Let us define the notation used in the following paragraphs: given a data pair (x,l), where x is a multidimensional data point and l is the label associated to that sample, we define {x,l} = {xt,lt} ∪ {xv,lv}, where {xt,lt} and {xv,lv} are the sets of data pairs associated to the training and validation sets, respectively. In the same way, e(h(x),l) represents the empirical error over the data set x given a hypothesis h(.).

3.1. Optimal node estimation

Test accuracy on the training subsets: To introduce each new node, we first test the accuracy of the current matrix M on the training subsets. For this step, we find the resulting codeword x ∈ {−1, 1}^n for each class sample of these subsets, and we label it as follows:
where d(.) is a distance value between H(M,h,x) and the codeword yj. H(M,h,x) is the strong hypothesis resulting from applying the set of learning algorithms h(.), parameterized with Θ, on the problems defined by each column of the ECOC matrix M for a data point x. The result of H(M,h,x) is an estimated codeword. We propose the use of a weighted Euclidean distance in the following way:
where the weight wi introduces the relevance of each dichotomy in the learning ensemble technique. The training and validation confusion matrices: Once we test the accuracy of the strong hypothesis H on the training and validation subsets, we estimate their respective confusion matrices vt and vv. Both confusion matrices are of size Nc x Nc, and have at position (i, j) the number of instances of class ci classified as class cj.
where h(x) is the label estimation obtained using equation (2) and l is the true label of example x. Once the matrices have been obtained, we select the pair {ci, cj} with maximum value according to the following expression:
for all (i, j) ∈ [1,...,Nc], where v^T is the transposed matrix. The resulting pair is the set of classes that are most easily confounded and therefore have the maximum partial empirical error.

Find the new dichotomy: Once the set of classes with maximal error, {ci,cj}, has been obtained, we create a new column of the ECOC matrix as follows: each candidate column considers a possible pair of subsets of classes p = {{ci ∪ C1},{cj ∪ C2}} ⊆ C such that C1 ∩ C2 ∩ ci ∩ cj = Ø and Ci ⊆ C. In particular, we look for the subset division of classes p such that the dichotomy ht associated to that division minimizes the empirical error defined by e({x,l}),

where mi(p) follows the rule in equation (7). The column components associated to the classes in the set {ci, C1} are set to +1, the components of the set {cj, C2} are set to −1, and the positions of the remaining classes are set to zero. In the case that multiple candidates obtain the same performance, the one involving more classes is preferred. Firstly, it reduces the amount of uncertainty in the ECOC matrix by reducing the number of zeros in the dichotomy. Secondly, one can see that when more classes are involved the generalization is greater: each dichotomy learns a more complex rule over a greater number of classes. This fact has also been observed in the work of Torralba et al. [14]. In their work a multi-task scheme is presented that yields an improved generalization classifier with the aid of a class-grouping algorithm. Their work shows that this kind of learner can increase generalization performance.
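The pair-selection step described at the beginning of this subsection can be sketched in R. The exact expression is not reproduced in the text above, so the symmetrized sum of the training and validation confusion matrices used below is our assumption, inferred only from the mention of the transposed matrix; the toy matrices and the function name are invented.

```r
# Sketch of the pair-selection step: accumulate (i,j) and (j,i) errors of both
# confusion matrices, zero the diagonal, and take the pair with largest error.
most_confused_pair <- function(vt, vv) {
  V <- vt + t(vt) + vv + t(vv)
  diag(V) <- 0
  which(V == max(V), arr.ind = TRUE)[1, ]   # row/column indices of {ci, cj}
}

# Toy 4-class confusion matrices (rows = true class, columns = predicted class).
vt <- matrix(c(10, 0, 1, 0,
                1, 9, 0, 0,
                3, 0, 8, 0,
                0, 0, 0, 12), 4, 4, byrow = TRUE)
vv <- matrix(c( 5, 0, 2, 0,
                0, 6, 0, 0,
                2, 0, 4, 0,
                0, 1, 0, 5), 4, 4, byrow = TRUE)
most_confused_pair(vt, vv)   # -> the pair {c1, c3}, the most easily confounded classes
```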
where mi(p) follows the rule in equation (7). The column components associated to the classes in the set {ci, C1} are set to +1, the components of the set {cj, C2} are set to -1 and the positions of the rest of classes are set to zero. In the case that multiple candidates obtain the same performance the one involving more classes is preferred. Firstly, it reduces the number of uncertainty in the ECOC matrix by reducing the number of zeros in the dichotomy. Secondly, one can see that when more classes are involved the generalization is greater. Each dichotomy finds a more complex rule on a greater number of classes. This fact has also been observed in the work of Torralba et al. [14]. In their work a multi-task scheme is presented that yields to an improved generalization classifier by aids of class grouping algorithm. This work shows that this kind of learners can increase generalization performance. 3.2. Weights estimates It is known that when a multiclass problem is decomposed in binary problems, not all of these base classifiers have the same importance and generate the same decision boundaries. Our approach uses a weight to adjust the importance of each dichotomy in the ensemble ECOC matrix. In particular, the weight associated to each column depends on the error obtained when applying the ECOC to the training and validation subsets in the following way,
where wi is the weight for the ith dichotomy, and ei is the error produced by this dichotomy at the affected classes of the two training subsets of classes. This equation is based on the weighed scheme of the additive logistic regression [2]. Update the matrix: The column mi is added to the matrix M and the weight wi is calculated using equation (6).
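A small R sketch of how such weights could feed the weighted Euclidean decoding used by ECOC-ONE. The weight formula itself (equation (6)) is not shown above; the additive-logistic-regression-style weight wi = ½ ln((1 − ei)/ei) used below is our assumption, motivated by the scheme the text cites, and the toy matrix and error values are invented.

```r
# Assumed weighting scheme (not the paper's exact equation (6)).
dichotomy_weight <- function(e) 0.5 * log((1 - e) / e)

weighted_euclidean_decode <- function(M, w, x) {
  d <- apply(M, 1, function(y) sqrt(sum(w * (x - y)^2)))
  rownames(M)[which.min(d)]
}

M <- rbind(c1 = c( 1, -1, -1,  0),     # small trinary ECOC-ONE-style matrix (toy)
           c2 = c(-1,  1, -1,  1),
           c3 = c(-1, -1,  1, -1))
e <- c(0.10, 0.25, 0.05, 0.40)          # per-dichotomy errors on the two subsets (toy)
w <- dichotomy_weight(e)                # reliable dichotomies receive larger weights
weighted_euclidean_decode(M, w, c(-1, 1, -1, 1))   # -> "c2"
```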
3.3. ECOC-ONE matrix construction

Once we have generated the optimal nodes, we embed each one in the following way: considering the set of classes associated to a node, Ci = {Ci1 ∪ Ci2 | Ci1 ∩ Ci2 = Ø}, the element (i, r) of the ECOC-ONE matrix corresponding to class i and dichotomy r is filled as in (7). The summarized ECOC-ONE algorithm is shown in Table 1.
Table 1. ECOC-ONE extension algorithm
As mentioned before, one of the desirable properties of the ECOC matrix is to have maximal distance between rows. In this sense, our procedure focuses on the relevant difficult partitions, increasing the distance between the classes. This improves the robustness of the method, since difficult classes are likely to have a greater number of dichotomies focused on them. It also creates different geometrical arrangements of decision boundaries and leads the dichotomies to make different bias errors.

4. Results

To test our proposed extension method, we extend the most well-known strategies used for ECOC coding: one-versus-all ECOC (one-vs-all), one-versus-one ECOC (one-vs-one), and Dense random ECOC. We have chosen dense random coding because it is more robust than the sparse technique for the same number of columns [4]. The decoding strategy for all the mentioned techniques is the standard Euclidean distance, because it shows the same behavior as Hamming decoding while reducing the confusion due to the use of zero values [10]. The number of dichotomies considered for Dense random is 10 × log2(n), where n is the number of classes of a given database. The decoding strategy for our ECOC-ONE extension is the weighted Euclidean distance. The weak classifier used for all the experiments is Gentle Adaboost. Nevertheless, note that our technique is generic in the sense that it only uses the classification score; in this sense it is
independent of the particular base classifier. All tests are evaluated using ten-fold cross-validation and a two-tailed t-test with a 95% confidence interval. In order to test the ECOC-ONE coding extension, we have used a set of very well-known databases from the UCI repository. The description of each database is shown in table 2. To test our extension technique, we have extended the three coding strategies mentioned, embedding 3 new dichotomies in all cases. The 3 new dichotomies embedded by dense random maximize the Hamming distance between matrix rows. The results of extending the one-versus-all, Dense random, and one-versus-one matrices on 5 UCI databases are shown in tables 3, 4 and 5, respectively. For each case we show the hit rate obtained and the number of dichotomies used in that experiment (#D). One can observe that by adding just 3 extra dichotomies the accuracy increases considerably in comparison with the initial coding length. Besides, our problem-dependent ECOC-ONE coding extension outperforms the Dense extension strategy in all cases, due to the problem-dependent optimal selection of the extra dichotomies. One can also observe that the confidence rates for our proposed technique are comparable with, and in most cases smaller than, those obtained by the dense extension strategy.
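For reference, a sketch of the baseline dense-random extension used in these comparisons: three random {−1,+1} columns are appended, each chosen from a candidate pool so that the minimum Hamming distance between rows stays as large as possible. The pool size and the greedy column-by-column scheme are our assumptions; the paper only states that the added dense columns maximize the Hamming distance between matrix rows.

```r
# Sketch of a dense-random extension that tries to keep rows well separated.
min_row_hamming <- function(M) {
  d <- as.matrix(dist(M, method = "manhattan")) / 2   # Hamming distance for {-1,+1} codes
  min(d[upper.tri(d)])
}

extend_dense_random <- function(M, n_new = 3, n_candidates = 500) {
  for (k in seq_len(n_new)) {
    cand  <- replicate(n_candidates, sample(c(-1, 1), nrow(M), replace = TRUE))
    score <- apply(cand, 2, function(col) min_row_hamming(cbind(M, col)))
    M <- cbind(M, cand[, which.max(score)])            # keep the best candidate column
  }
  M
}

M <- rbind(c1 = c( 1, -1, -1, -1), c2 = c(-1,  1, -1, -1),
           c3 = c(-1, -1,  1, -1), c4 = c(-1, -1, -1,  1))
extend_dense_random(M)    # one-versus-all code extended with 3 dense random columns
```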
Table 2. UCI repository databases characteristics.
Table 3. Results of coding extensions of one-versus-all for UCI repository databases.

If we compare the initial one-versus-all and one-versus-one codes, their results are considerably different. When the one-versus-all initial code is extended with 3 extra ECOC-ONE dichotomies, the results become comparable with those obtained using one-versus-one, at far less cost.
Table 4. Results of coding extensions of Dense random for UCI repository databases.

5. Conclusions

In most ECOC coding strategies, the ECOC matrix is pre-designed, using the same dichotomies for any type of problem. We introduced a new coding and decoding strategy called ECOC-ONE. The ECOC-ONE strategy can be seen as a general extension for any initial coding matrix. The procedure shares classifiers among classes in the ECOC-ONE matrix and selects the best partitions, weighted by their relevance. In this way, it reduces the overall error for a given problem. Moreover, by using the validation subset, generalization performance is increased and overfitting is avoided. We show that this technique improves in most cases the performance of any initial code at little extra cost, outperforming other distance-maximization extensions. Besides, ECOC-ONE can generate an initial small code by itself. As a result, a compact (small number of classifiers) multiclass recognition technique with improved accuracy and very promising results is presented.
Table 5. Results of coding extensions of one-versus-one for UCI repository databases.

6. Acknowledgements

This work was supported by projects TIC2003-00654, FIS-G03/1085, FIS-PI031488, and MI-1509/2005.
References
[1] V. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, 1995.
[2] J. Friedman, T. Hastie, R. Tibshirani, Additive logistic regression: a statistical view of boosting, The Annals of Statistics vol. 38 (2) (1998) 337–374.
[3] L. Breiman, J. Friedman, R. Olshen, C. Stone, Classification and Regression Trees, Wadsworth, 1984.
[4] E. Allwein, R. Schapire, Y. Singer, Reducing multiclass to binary: A unifying approach for margin classifiers, Journal of Machine Learning Research vol. 1 (2002) 113–141.
[5] E. Frank, S. Kramer, Ensembles of nested dichotomies for multiclass problems, in: Proceedings of the 21st International Conference on Machine Learning, 2004, pp. 305–312.
[6] T. Dietterich, G. Bakiri, Solving multiclass learning problems via error-correcting output codes, Journal of Artificial Intelligence Research vol. 2 (1995) 263–286.
[7] T. Windeatt, R. Ghaderi, Coding and decoding for multi-class learning problems, Information Fusion vol. 4 (1) (2003) 11–21.
[8] T. Dietterich, E. Kong, Error-correcting output codes corrects bias and variance, in: S. Prieditis, S. Russell (Eds.), Proceedings of the ICML, 1995, pp. 313–321.
[9] K. Crammer, Y. Singer, On the learnability and design of output codes for multiclass problems, Machine Learning 47 (2) (2002) 201–233.
[10] O. Pujol, P. Radeva, J. Vitrià, Discriminant ECOC: A heuristic method for application dependent design of error correcting output codes, Transactions on PAMI 28 (6) (2006) 1001–1007.
[11] T. Hastie, R. Tibshirani, Classification by pairwise coupling, The Annals of Statistics vol. 26 (5) (1998) 451–471.
[12] T. Dietterich, G. Bakiri, Error-correcting output codes: A general method for improving multiclass inductive learning programs, in: AAAI Press (Ed.), 9th National Conference on Artificial Intelligence, 1991, pp. 572–577.
[13] H. Madala, A. Ivakhnenko, Inductive Learning Algorithm for Complex Systems Modelling, CRC Press Inc, 1994.
[14] A. Torralba, K. Murphy, W. Freeman, Sharing visual features for multiclass and multiview object detection, MIT AIM.
A Comparative Analysis of Different Classes-Interpretation Support Techniques

Karina GIBERT, Alejandra PEREZ-BONILLA, Gustavo RODRIGUEZ-SILVA
Department of Statistics and Operations Research, Universitat Politècnica de Catalunya. Campus Nord, Edif. C5, C. Jordi Girona 1-3, 08034 Barcelona. e-mail: [email protected]

Abstract. In this work, the application of some traditional statistical and AI techniques (Logistic Regression, Decision Trees and Discriminant Analysis) used to assist the interpretation of a set of classes is presented, together with a new methodology, Conceptual characterization by embedded conditioning (CCEC) [7], based on a combination of statistics and knowledge induction. All of them are applied to a set of real data coming from a WasteWater Treatment Plant (WWTP), previously classified [8], to identify the characteristic situations that can be found in it.

Keywords. Machine Learning, Clusters, Interpretation, Validation, Characterizing variable, Knowledge Base, Rule induction, Artificial Learning, Reasoning models, Information systems, data mining
1. Introduction

As a result of the very large size of databases collected and stored electronically, the reduction, storage and retrieval of information is becoming a matter of great importance [12]. In this process, it is useful to divide a set of objects into a set P = {C1,...,Cξ} of disjoint classes. One of the most widely applied techniques for finding P is Cluster Analysis. The aim is to find distinguishable clusters of similar elements. Once they have been found using some clustering technique, the interpretation of the resulting clusters is a critical process for understanding their meaning and for seeing whether they correspond to reality or not, and whether they could be used for descriptive or predictive purposes or to support the later construction of intelligent decision support systems. Clustering makes sense when the real structure of the data is unknown; then, validation is difficult, and testing for significant differences between clusters only guarantees some structural properties, not the interpretability of the proposed clusters, which highly determines their usefulness [19]. Some proposals exist for structural validation, but little has been done in the description/interpretation field, and at present this is still left to the analyst, who does it in a personal way. According to [12], each cluster can be described by k (≥1) representative objects (gravity centers or medoids) and by a measure of its spread. Plots of the dissimilarity between each object and each cluster's representative, and other measures of strength, can assist in distinguishing between core and outlying members of a cluster. This kind of information helps the analyst to personally interpret the clusters. But as the number of classes and variables increases, which is very common in data mining applications, it becomes more and more difficult to interpret the classes, and more powerful tools are required to support this task. The literature offers different possibilities [12], among them Decision Trees [17] or
Figure 1. General structure of the wastewater treatment plant.
Table 1. Variables used in the clustering, grouped by measurement point: Input (E), After Settler (D), Biological Treatment (B), Output (S).
E: Q-E, FE-E, PH-E, SS-E, SSV-E, DQO-E, DBO-E
D: PH-D, SS-D, SSV-D, DQO-D, DBO-D
B: QB-B, V30-B, MLSS-B, MLVSS-B, MCRT-B
S: PH-S, SS-S, SSV-S, DQO-S, DBO-S
Variable codes: Inflow (Q), Iron Pre-treatment (FE), Hydrogen Potential (PH), Suspended Solids (SS), Volatile Suspended Solids (SSV), Chemical Organic Matter (DQO), Biodegradable Organic Matter (DBO), Index 30 at the Biological Reactor (V30), Mixed Liquor Suspended Solids (MLSS), Mixed Liquor Volatile Suspended Solids (MLVSS), Mean Cell Residence Time (MCRT).
Other variables: Recirculated Flow (QRG), Purged Flow (QPG) and Air Inflow (QAG).
some other works in conceptual clustering and KDD [4][13][3]. In this work, several of them, together with a new methodology called Conceptual characterization by embedded conditioning (CCEC) [7], are used on a set of real data coming from a WWTP (see §2), previously clustered to identify characteristic situations in the plant. They are briefly described in §3, and their results (§4) are compared (§5). For this particular application, this work shows, on the one hand, which of them are useful tools for the interpretation of the clusters and, on the other hand, the level of coincidence of the results across the different tools. Finally, in §6, the conclusions and future lines of research are presented.

2. The Target Domain

The main goal of a WWTP is to guarantee the outflow water quality (with respect to certain legal requirements), in order to restore the natural environmental balance disturbed by industrial waste, domestic wastewater, etc. When the plant is operating abnormally, which is extremely difficult to model with traditional mechanistic models [1], decisions have to be taken to re-establish normal operation as soon as possible. This process is very complex because of the intrinsic features of wastewater and the bad consequences of an incorrect management of the plant. Here, a sample of 396
observations from a Catalan WWTP is used. Each observation refers to a daily mean. The plant is described through 25 variables, considered by the experts as the most relevant. They can be grouped as shown in table 1. Some other variables are available, like ammonium concentration, but the experts recommended not including them for finding the clusters. These data were previously clustered using Clustering Based on Rules (ClBR) [5] in the KLASS software [8]. It is a hierarchical method that can take advantage of prior knowledge to bias class construction and improve interpretability. See [10] for a comparison with other clustering techniques. Because all variables are numeric,
Figure 2. Hierarchical tree.
the squared Euclidean distance and the Ward criterion were used, and the dendrogram of Figure 2 was produced. As usual in hierarchical clustering, the final partition is the horizontal cut of the tree that maximizes the ratio of heterogeneity between classes to homogeneity within classes, which guarantees the distinguishability of the classes. Upon that, a partition into four clusters, P4 = {C392, C389, C390, C383}, was obtained and validated by the experts. It was seen in [10] that the classes identified different typical situations in the plant.

3. Methods

This section introduces the methods used in this work for interpretation.

3.1. Logistic Regression (LR)

Logistic Regression (LR) belongs to the family of models in which the response variable is qualitative, for example, the occurrence or not of a certain event, or belonging or not to a certain category. In this work, Binary Logistic Regression is used to identify which variables better explain every class, by using as a binary response variable the dummy YC.
The explanatory variables can be quantitative or qualitative, provided they are treated correspondingly, and the result is a model for the probability of an object being in C or not [14].
The LR can identify the subset of variables Xk related to YC = 1, and the prototype profile of YC = 1 can be formulated. A qualitative reinterpretation of the coefficients βk is usually done, so that for the Xk included in the final model, positive coefficients βk indicate that high values of Xk increase the probability of being in C and low values decrease it, while for negative βk the opposite occurs. In this work, SPSS software has been used for estimating the logistic coefficients.

3.2. Decision Trees (DT)

Decision Trees (DT) are non-parametric techniques. The aim is to segment a set of data into homogeneous subsets according to a categorical response variable [16]. There are two types of Decision Trees: Binary Trees (made of binary splits) and N-ary Trees (which can give divisions into more than two parts) [15]. They can give easily interpretable classification rules, they can accept any kind of variable and, in addition, their graphical representation is an excellent way to visualize the classification process. R software has been used with the rpart library to produce a DT over the P4 partition. This library applies the RPART (Recursive PARTitioning) algorithm to construct a Binary Decision Tree [20]. In this work, the DT is completed with boxplots of the splitting variables. In these boxplots, the values that satisfy the condition established for the division of the node are emphasized with a line. The leaves of the tree are labelled with their most frequent cluster; pure leaves have boldface labels. These marks on the DT ease the induction of qualitative concepts referring to the different classes.

3.3. Discriminant Analysis (DA)

Discriminant Analysis (DA) is a statistical method for distinguishing previously found classes. For each class C ∈ P, a function dC(X1,...,XK) is defined in terms of the observed variables Xk. These functions are found in such a way that they provide high scores for objects i ∈ C and small scores for i ∉ C, allowing the distinction of different classes. They are called discriminant functions [13]. In the evaluation of an object i, all the functions dC are computed, and i will be assigned to the class C if dC(i) ≥ dC′(i) for every C′ ∈ P.
In other words, the classifier assigns an object to the class for which the value of the discriminant function is the biggest. In this work, SPSS statistical software is used to perform Fisher Discriminant Analysis; the functions obtained can predict the most probable class for an object. Then, R software was used to project the variables and the centroids of the clusters onto the discriminant planes to construct figure 4. This method is based on the hypothesis of equality of the variance-covariance matrices of the different classes, without conditions on the variables' distributions. The main idea is to search for discriminant axes onto which the observations can be projected while preserving maximum distance between classes and minimum internal dispersion. The solution is obtained through a Principal Component Analysis over the set of gravity centers [18]. The technique allows the joint graphical representation of variables and class representatives (usually gravity centers). From this, the variables with more influence on the formation of a certain class can be observed. SPSS also provides the contributions of each variable to the discriminant axes, an excellent complement to the projections onto the discriminant planes.

3.4. Conceptual characterization by embedded conditioning (CCEC)

CCEC [7], [6] is a methodology for generating an automatic conceptual interpretation of a partition
P ∈ τ, where τ is an indexed hierarchy of I, τ = {P1, P2, P3, P4, ..., Pn}
(usually represented as a hierarchical binary tree or dendrogram). The current version of CCEC takes advantage of the hierarchical structure to overcome some limitations observed in previous works [19], [22]. It uses the property of any binary hierarchical structure that the classes of Pξ+1 are the same as those of Pξ except one, which splits into two subclasses in Pξ+1. The dendrogram is used by CCEC to discover particularities of the final classes through a process that iteratively searches for the variables distinguishing a pair of classes, starting at the top of the hierarchical tree and going one step down at every iteration. The single pair of classes that split from the previous one is analyzed at every iteration, and the knowledge already obtained for the parent of this pair of classes is inherited. Boxplot-based Discretization (BbD) [8] assesses which variables separate a pair of classes. It discretizes a numerical variable Xk using the cut points that identify changes in the set of classes with non-null intersection on Xk. This allows conditional distributions to be analyzed automatically through multiple boxplots [21]. Variables with intervals covered by a single class are good separators. In §4, illustrations of this particular process are introduced.
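A compact R sketch of the BbD idea for a single variable; the toy data and the function name are ours, and the real method's bookkeeping of intervals and ties is more elaborate.

```r
# Boxplot-based Discretization sketch for one variable Xk: the per-class minima
# and maxima are used as cut points, and the resulting intervals are
# cross-tabulated against the partition.
bbd <- function(x, classes) {
  cuts <- sort(unique(c(tapply(x, classes, min),
                        tapply(x, classes, max))))
  intervals <- cut(x, breaks = unique(c(-Inf, cuts, Inf)))
  table(intervals, classes)   # intervals covered by a single class separate it
}

# Toy example: a variable that clearly separates class "C392" from the rest.
set.seed(1)
cl <- factor(rep(c("C392", "C389", "C390"), each = 20))
x  <- c(rnorm(20, 5, 0.5), rnorm(20, 20, 2), rnorm(20, 22, 2))
bbd(x, cl)
```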
Figure 3. Decision Tree with Boxplot in each node.
4. Application

In this section CCEC, LR, DT and DA are used to generate the domain decision model for P4. The interpretation provided by the experts in [8] was supported by the concepts induced by CCEC [7], which identifies variables that distinguish every cluster.

4.1. Logistic Regression (LR)

SPSS is used to apply a LR to the data, and the following equations were obtained for each cluster. On the right, between brackets, the level of correctness is given.¹
From these equations, the variable relationships determining P(C = 1) for each cluster can be identified: the first equation shows that high values of SS-E and DBO-E increase P(C383 = 1), as does a low value of DBO-S. However, according to the LR, other variables do not appear to be significant in C383. This suggests that in C383 water comes in very dirty while little organic matter exits.

¹ In cluster C392, 5 or 6 objects have missing values in 14 of the 25 variables. A LR under these conditions presents some theoretical problems, so the results for this cluster are doubtful. Moreover, the size of the cluster is so small that the estimators' variance grows too much and the significance of the coefficients is uncertain.
A similar process is followed with the other equations, and typical profiles for every class can be identified. Table 2 shows a synthesis of the concepts induced for every class and method. Variables are grouped according to the WWTP structure.

4.1.1. Decision Trees (DT)

The rpart library was applied and the tree of figure 3 was produced. It was enriched according to the description of §3.2 to ease understanding.
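A brief R sketch tying together the LR fit reported above and the rpart call of this subsection (the paper fits the LR with SPSS). The data-frame name `wwtp`, the dot-separated variable names (R identifiers cannot contain hyphens) and the particular predictors in the formula are illustrative assumptions.

```r
# Sketch, assuming a data frame 'wwtp' with the 25 variables of Table 1
# (hyphens replaced by dots, e.g. SS-E -> SS.E) and a factor 'cluster'
# holding the four classes of P4.
library(rpart)

# Binary LR for one cluster: dummy response equal to 1 for days in C383.
wwtp$Y.C383 <- as.integer(wwtp$cluster == "C383")
lr <- glm(Y.C383 ~ SS.E + DBO.E + DBO.S, data = wwtp, family = binomial)
summary(lr)   # positive coefficients: high values of the variable raise P(C383 = 1)

# Binary decision tree over the whole partition P4.
dt <- rpart(cluster ~ . - Y.C383, data = wwtp, method = "class")
print(dt)     # splitting variables and the majority class at each leaf
```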
Figure 4. Variables and Gravity Centers of Classes of the Discriminant Axes 1, 2 and 3.
4.1.2. Discriminant Analysis (DA)

Fig. 4 shows the joint projection of variables and class centroids on the first and second discriminant planes. The first axis separates C392 from the other clusters, with the MCRT-B variable making the major contribution to it. The second axis discriminates C383, with the major contribution of SS-E at high values and MLVSS-B and SS-S at low values. The third axis opposes C390 and C389: for C390 the variables MCRT-B, QR-G, DQO-D and DBO-S have high values, and for C389, low values. With this information, and considering the sign of the correlations between axes and variables, concepts can be induced for every class (Table 2).

4.1.3. Conceptual characterization by embedded conditioning (CCEC)

Since a partition of 4 classes is interpreted, 3 iterations are required. First, the variables separating C392 from C393 (see Fig. 2) are found. Second, those separating C391 and C389 (which split C393) are found. Finally, the variables separating C390 and C383 (which split C391) are found.
Figure 5. Multiple boxplot: Q-E vs P2 (left); DBO-D vs {P3 − C392} (right).
In the final interpretation, the knowledge obtained at every iteration is combined. In Fig. 5, the multiple boxplots of Q-E and DBO-D for the first iteration are shown. Briefly, BbD consists of getting the minimum (mC^k) and maximum (MC^k) of Xk inside each class and using them as cut points to discretize Xk (at the top of Fig. 5, the resulting discretizations I^(Q-E) and I^(DBO-D) are shown). Afterwards, cross tables of the discretization I^(Xk) vs P allow the identification of class particularities. The process continues separating C390 and C383 (see Fig. 2), which split C391. Repeating this process down to the fourth level of the hierarchical tree and making similar analyses for all the variables, the following interpretation of P4 [8] is obtained:
- Class C392: "Low values for inflow wastewater"
- Class C389: "High values for inflow and little biodegradable organic matter at the settler"
- Class C390: "High values of inflow wastewater, high biodegradable organic matter at the settler and little biodegradable organic matter at the input"
- Class C383: "Medium-high values of inflow wastewater, high biodegradable organic matter at the settler and high biodegradable organic matter at the input"

This can be considered as a domain model which can support later decisions on the treatment for a new day, provided that a standard treatment is previously associated by the experts to every class. In this association, providing the experts with means for easily understanding the meaning of the classes is critical. CCEC provides simple and short concepts which tend to be easier to handle than those provided by other rule-induction algorithms. This final interpretation is consistent with the one provided by the experts in [8].
5. Comparison

Table 2 shows the comparison of the concepts obtained with each technique for each cluster (boldfaced variables appear as relevant to the class at least twice). Only the LR and the DA coincide completely in cluster C392; for the others, the results depend on the applied method. Nevertheless, all clusters have some variables (1 or 2) that appear as discriminant with at least 3 methods: SS-E, characteristically high in C383, or DQO-D, high in C390. CCEC is the method providing the richest interpretations. It does not behave like most rule-induction methods, which provide a minimum set of predictors, but identifies the particularities of a class which define its special behavior, trying to mimic the process followed by experts when manually analyzing clustering results for understanding.
Table 2. Comparative of Concepts by cluster for each tool.
In general, the results of each method for the different classes show coincidences with the experts' interpretation, for example: SS-E at a high level in class C383 is a good indicator that the water inflow is dirty, and DBO-S at a low level indicates that the water outflow is clean (as does SS-S at a low level). Anyway, unless a class is very homogeneous, analyzing the classes with different methods is recommended in order to obtain a clear perspective of the classes. The right column of Table 2 contains the experts' interpretation given in [8].
6. Conclusions and future work

In this work, four tools for the interpretation of Cluster Analysis have been explored. Differences between them [9] were found, but the interpretation of the results is consistent and coincides with the conclusions previously announced by the experts. In any case, to get a better description/interpretation of the clusters, the information provided by LR, DT and DA should be combined and analyzed globally.
CCEC [6] is a new proposal intended to provide richer descriptions. The benefits of this proposal are especially interesting in the interpretation of partitions with a large number of classes, where manual expert analysis becomes almost impossible. By associating an appropriate standard treatment to every class, a model for deciding the appropriate treatment on a concrete day, based on a reduced number of variables, is obtained, together with an estimation of the risk associated to that decision (which is related to the certainty of the rule).

Acknowledgements: This research has been partially financed by TIN 2004-01368.

References
[1] Abrams and Eddy (2003). Wastewater engineering: treatment, disposal, reuse. 4th Ed., revised by George Tchobanoglous, Franklin L. Burton. NY, US: McGraw-Hill.
[2] Breiman L., Friedman J.H., Olshen R.A., Stone C.J. (1984). Classification and Regression Trees. Wadsworth, Belmont, CA.
[3] Fayyad U.M., Piatetsky-Shapiro G., Smyth P., Uthurusamy R. (Eds) (1996). Advances in Knowledge Discovery and Data Mining. AAAI Press / MIT Press, Menlo Park, CA.
[4] Fisher (1987). Knowledge acquisition via incremental conceptual clustering. Machine Learning (2) 139–172.
[5] Gibert, K. & Cortés, U. (1998). Clustering based on rules and knowledge discovery in ill-structured domains. Computación y Sistemas 1(4):213–227.
[6] Gibert, K. & Pérez, A. (2005). Ventajas de la estructura jerárquica del clustering en la interpretación automática de clasificaciones. III TAMIDA, Granada, THOMPSON 67–76.
[7] Gibert, K. & Pérez-Bonilla, A. (2006). Automatic generation of interpretation as a tool for modelling decisions. In: Springer. III MDAI, Tarragona.
[8] Gibert, K. & Roda, I. (2000). Identifying characteristic situations in wastewater treatment plants. Workshop BESAI (ECAI 2000), V1:1–9.
[9] Gibert K., Rodríguez G. & Sánchez G. (2006). Sobre las herramientas estadísticas útiles en la interpretación de clases procedentes de un cluster. DR 2006/2. EIO-UPC.
[10] Gibert, K., Sànchez-Marrè, M., Flores, X. (2004). Cluster discovery in environmental databases using GESCONDA: The added value of comparisons. AI Communications 18(4), pp 319–331.
[11] Gordon, A.D. (1994). Identifying genuine clusters in a classification. CSDA 18:561–581.
[12] Gordon, A.D. (1999). Cluster Description. University of St. Andrews, Scotland.
[13] Ho T.B., Diday E., Gettler-Summa M. (1988). Generating rules for expert systems from observations. Pattern Recognition Letters 7, 265–271.
[14] Kleinbaum D. (1994). Logistic Regression. New York: Springer.
[15] Lebart L. (1997). Statistique Exploratoire Multidimensionnelle. Paris: Dunod.
[16] Lévi J. (2003). Análisis Multivariable para las Ciencias Sociales. Madrid: Prentice Hall.
[17] Michalski R.S., Carbonell J.G., Mitchell T.M. (Eds) (1983). Machine Learning: An Artificial Intelligence Approach. Tioga Publishing Company, Palo Alto, CA.
[18] Nakache J. (2003). Statistique Explicative Appliquée. Paris: Technip.
[19] Riquelme (Ed.) (2004). Tendencias de la Minería de Datos en España. 119–130. Digital3, Sevilla.
[20] Therneau T.M., Atkinson E.J. (1997). An introduction to recursive partitioning using the rpart routine. Technical Report 61, Mayo Clinic, Section of Statistics.
[21] Tukey, J.W. (1977). Exploratory Data Analysis. Addison-Wesley.
[22] Vázquez, F. & Gibert, K. (2001). Generación Automática de Reglas Difusas en Dominios Poco Estructurados con Variables Numéricas. In Proc. CAEPIA01, V1:143–152.
Learning from cooperation using justifications¹

Eloi PUERTAS a,2 and Eva ARMENGOL a
a Artificial Intelligence Research Institute (IIIA-CSIC)
Abstract. In multi-agent systems, individual problem solving capabilities can be improved thanks to the interaction with other agents. In the classification task each agent is able to solve problems alone but, in a collaborative scenario, an agent can take advantage of the knowledge of others. In our approach, when an agent decides to collaborate with other agents, in addition to the solution for the current problem it acquires new domain knowledge. This domain knowledge consists of the explanations (or justifications) that the other agents give for the solutions they propose. In that way, the first agent can store these justifications and use them as a kind of domain rule for solving new problems. As a consequence, the agent acquires experience and becomes capable of solving on its own problems that were initially outside its experience.

Keywords. Machine Learning, Multi-agent System, Cooperation, Justifications
1. Introduction

In Machine Learning, the idea of cooperation between entities appears with the formation of ensembles. An ensemble is composed of several base classifiers (using inductive learning methods), each of them capable of completely solving a problem. Perrone and Cooper [9] proved that aggregating the solutions obtained by independent classifiers improves the accuracy of each classifier on its own: this is the ensemble effect. In that approach, the cooperation among entities consists of sharing the results for the same problem and reaching an aggregated solution. The same idea is taken up by Plaza and Ontañón [10], who adapt it to agents. These authors define a committee as an ensemble of agents where each agent has its own experience and is capable of completely solving new problems. In addition, each agent in a committee can collaborate with other agents in order to improve its problem solving capabilities. The difference between this approach and the most common approaches to multi-agent learning systems (MALS) is that in the former the agent is able to completely solve a problem, whereas in most common MALS approaches each agent only solves a part of a problem. In the current paper we take the same idea as Plaza and Ontañón, but we propose that the agents can benefit from the collaboration with other agents by learning domain knowledge. Our point is that if the agents are able to justify the solutions they provide, then each agent can use those justifications as new domain knowledge. Therefore, an agent learns from the collaboration with other agents. The idea of benefiting from cooperation between learners was pointed out by Provost and Hennessy [13]. In their work, the authors show how individual learners can share domain rules in order to build a common domain theory from distributed data and how this
¹ This work has been supported by the project CBR-PROMUSIC (TIC2003-07776-C02-02).
² Artificial Intelligence Research Institute (IIIA-CSIC), Campus UAB, 08193 Bellaterra, Catalonia (Spain). Tel.: +1 222 333 4444; Fax: +1 222 333 0000; E-mail: [email protected].
improves the performance of the whole system. In our work, we are interested in showing how an individual agent can improve its own domain knowledge by means of the other agents. Thus, we propose that, in addition to the classification of a problem, agents have to be able to give an explanation of why they propose such a solution. This explanation, which we call a justification, is used by the agent as new domain knowledge, since the agent can use it for solving further problems on its own. The paper is organized as follows. In section 2 we explain our multi-agent architecture and the algorithm followed by the agents; then, in section 3, justifications are defined. In section 4 the experimental results are shown, related work is discussed in section 5, and the main conclusions and future work are presented in section 6.

2. Multi-agent architecture

In this section we propose a multi-agent scenario in which several agents are capable of solving problems of a certain domain. Our goal is twofold: on one hand the agents have to collaborate for solving problems, and on the other hand they should benefit from the collaboration to learn new domain knowledge. In the architecture we propose, the agents hold the following properties:
1. they are cooperative, i.e. they always try to solve the problems;
2. the experience (case base) of each agent is different;
3. each agent is capable of completely solving a problem.

The collaboration among the agents begins when one of the agents has to solve a new problem but is not able to reach a confident enough solution on its own. In such a situation, a simple "query and answer" protocol is used to send the problem to the other agents, who answer with their own solutions for that problem. Figure 1 shows the algorithm followed by the agents for solving new problems. Let us suppose that agent A has to solve a problem p. The first step is to use some problem solving method on its case base for solving p. The only requirement for the problem solving method is that it has to be able to give a justification for the solution (see section 3). This justification will be used by the agent A to assess its confidence in that solution. In Machine Learning there are several measures commonly used to assess the confidence in a solution [3]. In our experiments we used the recall measure, which is the ratio between the number of cases covered by a justification and the total number of cases of the class Sj in the case base of the agent A. The recall of a justification Jj is computed using the following expression:

Recall(Jj) = |PJj| / |CBA[Sj]|,
recall(Jj) = #PJj / #CBA[Sj]

where CBA[Sj] is the set of cases in the case base of agent A that belong to the class Sj, and PJj is the set of cases of Sj covered by the justification Jj. High recall is defined as a value above a domain-dependent threshold parameter defined in the interval [0,1]. A high threshold forces the agent to collaborate most of the time, and a low threshold produces no collaboration.
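As an illustration, the following Python sketch shows how this recall-based confidence test could be computed; the data layout (cases as dictionaries with a "class" field) and the threshold value are assumptions made for the example, not details taken from the paper.

def recall(covered_cases, case_base, predicted_class):
    # Recall of a justification: cases of the predicted class covered by the
    # justification, divided by all cases of that class in the agent's case base.
    class_cases = [c for c in case_base if c["class"] == predicted_class]
    if not class_cases:
        return 0.0
    covered_of_class = [c for c in covered_cases if c["class"] == predicted_class]
    return len(covered_of_class) / len(class_cases)

def needs_collaboration(covered_cases, case_base, predicted_class, threshold=0.3):
    # The agent asks the other agents for help when its own solution has low recall.
    # The threshold is the domain-dependent parameter mentioned above.
    return recall(covered_cases, case_base, predicted_class) < threshold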
Figure 1. Collaborative Algorithm (procedure Collaborative-Algorithm(p)) used by the agents for solving problems.
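Only the heading of the procedure in Figure 1 survives in this version of the text, so the following Python-style sketch reconstructs its main steps from the description given below; the helper methods (solve, confidence, is_consistent, vote, etc.) are hypothetical names, not the authors' code.

def collaborative_algorithm(agent, p):
    # Step 1: solve p locally; the solver must also return a justification.
    solution, justification = agent.solve(p)
    # Step 2: if the justification gives enough support, answer directly.
    if agent.confidence(justification) >= agent.threshold:
        return solution
    # Steps 3-4: try the table T of justifications learned from past collaborations.
    satisfied = [row for row in agent.table_T if row.justification.covers(p)]
    if satisfied:
        return agent.aggregate(satisfied)
    # Steps 5-6: ask the other agents; each answer carries the solving agent,
    # its proposed solution and its justification.
    answers = [other.solve_with_justification(p) for other in agent.peers]
    # Step 7: keep only the justifications consistent with A's own case base.
    consistent = [ans for ans in answers if agent.is_consistent(ans.justification)]
    # Step 8: if nobody helps, fall back on the low-confidence local solution.
    if not consistent:
        return solution
    # Step 9: store the consistent justifications in T and take a weighted vote.
    agent.table_T.extend(consistent)
    return agent.vote(consistent)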
When a solution Si has enough support (step 2 in Fig. 1), the agent does not need the collaboration of other agents for solving p, so the process finishes giving Si as the solution for p. When the solution Si does not have enough support, A asks other agents to solve p. For simplicity, let us suppose that p is the first problem for which the agent asks for collaboration. In that situation step 4 fails (since there are no previous justifications) and agent A sends the problem to the other agents (step 5 of the algorithm). Each agent solves p using its own problem solving method and case base, and returns (step 6) a tuple {Aj, Sj, Jj} where Aj is the agent who solved the problem, Sj is the solution for p and Jj is the justification of Aj for that solution. Agent A checks each of these tuples against its case base and builds the set CA of consistent justifications (see section 3). When CA is empty (step 8), the solution Si predicted by agent A (although with low confidence) is returned as the solution for p. Otherwise, A uses a voting system to predict the class for p (step 9). In particular, this voting system takes into account the confidence that A has in the justifications given by the other agents.
Agent A stores consistent justifications (the set CA) in a table T. This table allows the agent to increase its domain experience, i.e. to learn new domain knowledge. Thus, when A reaches a solution for a problem on its own with low support, it can use the table T (step 3 of the algorithm) before it starts the collaboration protocol. Let SJp be the set of justifications in T satisfied by p; if SJp is not empty (step 4) then A can classify p according to the justifications in SJp using the aggregation method as in step 4. Otherwise agent A is forced to start the collaboration process (step 5). The confidence in a justification determines how reliable the solution associated with that justification is. Consistent justifications allow the agent to enrich its experience, since these justifications are incorporated into the domain knowledge available to the agent. In the next section we define justifications and explain how they can be used by the agents.
3. Justifications
A justification is defined as a symbolic description formed by all the aspects of the problem that have been used to achieve the solution. How this symbolic description is obtained depends on the method used for problem solving. Thus, when a decision tree is used for problem solving, a justification could be formed by the path from the root to the leaf that classified the problem. Similarly, in methods using relational
representation (for instance ILP systems [7]), a justification could be formed by the rules used to classify an example.
In our approach, each agent collects justifications coming both from solving problems alone and from other agents. These justifications are stored in order to use this knowledge in future situations. An agent A keeps all justifications in a table T. A row of this table contains the following information:
• a justification (Ji): a symbolic description;
• a class solution (Si): the class predicted by Ji;
• a set of precedents (PJi): the list of problems of A covered by Ji;
• a set of agents (AJi): the list of agents that used Ji to predict Si;
• a significance score: a value assessing the confidence in Ji.
The table T supports an agent in acquiring experience as it solves problems. In this way, the agent improves its experience in the domain by means of interactions with other agents, using the justifications other agents propose for those problems that the agent was not able to solve with enough confidence.
Let PJj be the set of precedents belonging to the case base of an agent A that satisfy the justification Jj. There are several possible situations: 1) the cases in PJj belong to different classes, i.e. Jj is not consistent with the case base of A; 2) PJj = Ø, i.e. there are no cases in the case base of A satisfying Jj; and 3) all cases in PJj belong to the same class, i.e. Jj is consistent with the case base of A. Only consistent justifications are stored in T.
When an agent A wants to assess the confidence in a consistent justification Ji, a possible situation is that only a few cases in the case base of A are covered by Ji. This produces a low recall for Ji, but it does not necessarily mean that the justification is wrong. For this reason we need a measure that identifies significant justifications independently of where they come from. CN2 [4] evaluates the quality of rules using a test of significance based on the likelihood ratio statistic [6]. The significance of the justification Jj is calculated as follows:
significance(Jj) = 2 · Σi=1..k #PJj · qi · log(qi / #CBA[Si])

where k is the number of classes, #PJj is the number of cases covered by the justification Jj, qi is the relative frequency of class Si among those cases, calculated using the Laplace correction, and #CBA[Si] is the relative frequency of the cases of class Si in the case base of agent A. The result is approximately Chi-Square (χ²) distributed with k−1 degrees of freedom. The significance measure compares the class probability distribution in the set of covered examples with the distribution over the whole training set. We assume that a justification is reliable if these two distributions are significantly different. However, it is not possible to assure whether a justification is really significant or not; it can only be said that it is unlikely that the justification is not significant according to the experience.
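A hedged sketch of such a likelihood-ratio test is given below; since the exact expression is not reproduced in this text, the code follows the standard CN2 statistic [4], and all names and the data layout are illustrative assumptions.

import math

def significance(covered_classes, case_base_classes, laplace=1.0):
    # covered_classes: class label of every case covered by the justification.
    # case_base_classes: class label of every case in the agent's case base.
    classes = sorted(set(case_base_classes))
    k = len(classes)
    n_cov = len(covered_classes)
    stat = 0.0
    for c in classes:
        # Laplace-corrected relative frequency of class c among the covered cases.
        q = (covered_classes.count(c) + laplace) / (n_cov + k * laplace)
        # Relative frequency of class c in the whole case base.
        p = case_base_classes.count(c) / len(case_base_classes)
        stat += n_cov * q * math.log(q / p)
    # 2*stat is approximately chi-square distributed with k-1 degrees of freedom.
    return 2.0 * stat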
Table 1. Total number of cases of each data set and how they have been partitioned in the experiments.
4. Experiments
We performed several experiments implementing the multi-agent architecture described in section 2. Each agent has its own case base (disjoint from those of the other agents) and its own problem solving method. All the agents use the relational lazy learning method LID [1] as problem solver. Given a new problem, LID gives as a result the solution class(es) in which the problem can be classified and a justification of that solution. Although in our experiments all the agents have the same problem solving method, this is not a requirement of the architecture we propose. The experiments have been performed on three domains from the UCI repository [8]: Soybean, Tic Tac Toe and Car.
The goal of the experiments is to show that an agent can learn from the collaboration with other agents. This learning can be observed because the collaboration with other agents diminishes as the agent uses the justifications obtained from previous collaborations (the table of justifications). To achieve this goal we performed the following process: i) given a domain D, the total number of cases was randomly split in two parts: the test set (20% in the Soybean domain and 10% in the rest) and the training set (80% in the Soybean domain and 90% in the rest) (Table 1); ii) the training set, in turn, was divided into N uniformly distributed parts, where N is the number of agents in the architecture.
Concerning the accuracy, Table 2 shows the results of several configurations of the multi-agent system (2, 3, 5, 8 and 10 agents). These results are the average of 7 ten-fold cross-validation trials for Car and Tic Tac Toe and 7 five-fold cross-validation trials for Soybean. As expected, the accuracy of the ensemble increases due to the ensemble effect [5], proved for a set of classifiers and also applicable to a set of agents. In short, the ensemble effect states that when agents are minimally competent (individual error lower than 0.5) and have uncorrelated errors (true in our experiments because the agents have disjoint case bases), the error of the combined predictions of the agents is lower than the error of the individual agents. Notice that the accuracy decreases as the number of agents increases, due to point ii) above: the training set is uniformly divided among the agents, so the more agents compose the system, the smaller the case base of each agent.
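The data preparation in steps i) and ii) can be sketched as follows; the function below is only an illustration of the splitting scheme (held-out test set, then disjoint and uniformly sized case bases), shown with the 10%/90% proportions used for Car and Tic Tac Toe.

import random

def prepare(cases, n_agents, test_fraction=0.10, seed=0):
    rng = random.Random(seed)
    shuffled = list(cases)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    test_set, training_set = shuffled[:n_test], shuffled[n_test:]
    # Divide the training set into N disjoint, uniformly distributed parts.
    case_bases = [training_set[i::n_agents] for i in range(n_agents)]
    return test_set, case_bases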
Table 2. Accuracy exhibited by configurations with 2, 3, 5, 8 and 10 agents in Soybean, Tic Tac Toe and Car.
Figure 2. Given a number of agents (X-axis), the graphs show for each domain the percentage of cases (Y-axis) solved by an agent using 1) its case base (label not collaborative), 2) the justification table (label using justification) and 3) collaboration with other agents (label collaborative).
Figure 2 shows, for each multi-agent configuration and each domain, the percentage of cases that an agent solves on its own (label not collaborative), i.e. step 3 of the algorithm in Fig. 1; using the justification table (label using justification), i.e. step 5 of the algorithm; and collaborating with other agents (label collaborative), i.e. step 9 of the algorithm. For example, when the system is composed of three agents and solves problems in the Soybean domain, around 40% of the cases are solved by the agent alone. This means that the remaining 60% could not be solved with enough confidence. Nevertheless, using the justification table, the number of cases that the agent is able to solve without collaboration increases to around 60%. This shows that the agent is reusing justifications that have been useful in previous cases. It is important to remark here the utility of the justification table: without it the agent would ask for collaboration in around 50% of the cases in the Car domain (using justification plus collaborative), whereas using the justification table the agent is able to solve approximately 65% of the test cases without collaboration (not collaborative plus using justification). This improvement is even clearer in the Soybean domain with configurations of 5 and 8 agents. In both configurations, the percentage of cases solved with enough confidence by the agent alone is under 20%, whereas using the table of justifications this percentage increases to around 60%. This shows that the agent learns from the collaboration with other agents, increasing its experience in the domain.
Figure 3 shows the evolution of the agent experience over time on the Tic Tac Toe and Car domains for a configuration with 5 agents. The x-axis shows the problem number, and the y-axis shows, as a percentage, how many times each method has been used for solving each problem. Therefore, the graphics show how the decisions taken by the agent change from the first to the last problem.
5. Related Work
The architecture introduced in this paper is related to work on ensemble learning [12]. The goal of ensemble learning is to aggregate the solutions proposed by a set of independent classifiers. Each classifier uses an inductive learning method to induce domain knowledge and can completely solve a problem using this domain knowledge. This same idea was taken by Plaza and Ontañón [10] to define committees as ensembles of agents. In that work, the authors analyze different collaboration strategies focused on a lazy behavior of the agents, i.e. the goal of an agent is to solve the current problem, collaborating when necessary. Nevertheless, the agents do not extract any information from the collaboration. Instead, the goal of our approach is twofold: on one
hand an agent has to solve new problems but, on the other hand, the agent has to learn more about the domain.
Figure 3. Evolution of the agent experience on Tic Tac Toe and Car for configurations with 2 and 5 agents
As pointed out by Weiß in [14], agents in our system improve their performance by means of communication. In turn, communication itself is reduced as the agents acquire more knowledge. In our approach, agents learn using high-level communication, in the form of justifications that are used for solving the current problem and also future problems. The idea of learning from high-level communication was first introduced by Sian [15] in his approach called consensus learning and, more recently, by Plaza and Ontañón [11]. In the framework proposed by Sian each agent learns from its experience and obtains hypotheses. When a hypothesis has reasonable confidence, it is proposed to the other agents by means of a shared blackboard. The other agents use their own experience to evaluate the published hypothesis and may make changes to it. Finally, the agents accept the hypotheses that have the greatest support by consensus. Conversely, our agents do not share a blackboard; the justifications of a solution are stored by only one agent (the one asking the others to solve the current problem). Therefore, stored justifications are only consistent with the knowledge of that agent.
The reuse of justifications as domain rules was previously introduced in C-LID [2]. The goal of C-LID was to use the justifications produced by LID as patterns that support solving similar problems (those satisfying the justifications) without using the lazy learning method. In the current paper we extended this idea of caching the justifications of a KBS to a multi-agent learning architecture.
6. Conclusions and Future Work
In this paper we introduced a multi-agent architecture where each agent can completely solve a problem. When a problem cannot be solved with enough confidence, the agent can ask other agents for collaboration. The collaboration is established by means of the justifications that agents give in addition to the solution they propose. The inquiring agent stores those justifications consistent with its knowledge in order to use them for solving further problems. The experiments show that an agent learns from the collaboration: thanks to the table of justifications, an agent acquires domain knowledge that allows it to solve more problems than without that collaboration.
In the future we plan to analyze other measures for assessing the confidence in justifications. We also plan to use association rules to dynamically
discover the expertise of agents. With this knowledge about the other agents, an agent could detect the most appropriate team of agents for solving each new problem.

References
[1] E. Armengol and E. Plaza. Lazy induction of descriptions for relational case-based learning. In Machine Learning: ECML-2001, number 2167 in Lecture Notes in Artificial Intelligence, pages 13–24. Springer-Verlag, 2001.
[2] E. Armengol and E. Plaza. Remembering similitude terms in case-based reasoning. In 3rd Int. Conf. on Machine Learning and Data Mining MLDM-03, Lecture Notes in Artificial Intelligence 2734, pages 121–130. Springer-Verlag, 2003.
[3] M. Berthold and D. J. Hand, editors. Intelligent Data Analysis: An Introduction. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1999.
[4] P. Clark and T. Niblett. The CN2 induction algorithm. Machine Learning, 3(4):261–283, 1989.
[5] L. K. Hansen and P. Salomon. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:993–1001, 1990.
[6] J. Kalbfleish. Probability and Statistical Inference, volume II. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1979.
[7] S. Muggleton. Inductive logic programming. New Generation Computing, 8(4):295–, 1991.
[8] D. Newman, S. Hettich, C. Blake, and C. Merz. UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998. University of California, Irvine, Dept. of Information and Computer Sciences.
[9] M. P. Perrone and L. N. Cooper. When networks disagree: Ensemble methods for hybrid neural networks. In R. J. Mammone, editor, Neural Networks for Speech and Image Processing, pages 126–142. Chapman-Hall, 1993.
[10] E. Plaza and S. Ontañón. Ensemble case-based reasoning: Collaboration policies for multiagent cooperative CBR. In I. Watson and Q. Yang, editors, CBR Research and Development: ICCBR-2001, volume 2080, pages 437–451, 2001.
[11] E. Plaza and S. Ontañón. Justification-based multiagent learning. In ICML, 2003.
[12] A. Prodromidis, P. Chan, and S. Stolfo. Meta-learning in distributed data mining systems: Issues and approaches. In H. Kargupta and P. Chan, editors, Advances in Distributed Data Mining. AAAI Press, 2000.
[13] F. J. Provost and D. N. Hennessy. Scaling up: Distributed machine learning with cooperation. In AAAI/IAAI, Vol. 1, pages 74–79, 1996.
[14] S. Sen and G. Weiß. Learning in multiagent systems. In G. Weiß, editor, Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, chapter 6, pages 259–299. MIT Press, Cambridge, MA, USA, 1999.
[15] S. S. Sian. Extending learning to multiple agents: Issues and a model for multi-agent machine learning. In Y. Kodratoff, editor, EWSL, volume 482 of Lecture Notes in Computer Science, pages 440–456. Springer, 1991.
Artificial Intelligence Research and Development M. Polit et al. (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press.
A Method to Classify Data by Fuzzy Rule Extraction from Imbalanced Datasets
Vicenç Soler 1, Jesus Cerquides 2, Josep Sabria 3, Jordi Roig 1, Marta Prim 1
1 Dept. Microelectrònica i Sistemes Electrònics, Universitat Autònoma de Barcelona, Spain
2 WAI Research Group, Dept. Analysis and Applied Mathematics, University of Barcelona, Spain
3 Dept. Gynecology & Obstetrics, Hospital Universitari Dr. Josep Trueta, Girona, Spain
Abstract. We propose a method based on fuzzy rules for the classification of imbalanced datasets when understandability is an issue. We propose a new method for fuzzy variable construction based on modifying the set of fuzzy variables obtained by the RecBF/DDA algorithm. These variables are then combined into fuzzy rules by means of a Genetic Algorithm. The method has been developed for the detection of Down’s syndrome in fetuses. We provide empirical results showing its accuracy for this task. Furthermore, we provide more generic experimental results over UCI datasets showing that the method can have wider applicability.
Keywords. Fuzzy Logic, Genetic Algorithms, Down’s syndrome, Fuzzy Rule Extraction, Imbalanced Datasets
1. Introduction
In classification, imbalanced datasets are those that contain classes with a large difference in the number of instances. Working with imbalanced datasets is always difficult due to the problem of generalization. Except for problems with a very clear difference between the classes, it is not easy to define a boundary that separates the different classes involved. In addition, datasets usually contain noise, which creates further problems when working with the data. All these problems come together in the field of medicine, usually because the data is collected either directly by the medical staff or with measuring instruments that have a precision error.
In this case study our focus is on the Down’s Syndrome (DS) detection problem for the second trimester of gestation. This is a well known problem in medicine, not yet solved, partly due to the very imbalanced and noisy collected data. It is a binary classification problem: either the fetus presents DS (positive case) or it is considered healthy (negative case). Fortunately, regarding the number of examples, the negative class is the major-class and the positive class is the minor-class, and these two classes are highly imbalanced.
The solution of the DS detection problem must be acceptable in terms of detection efficiency and understandability. Current methods are statistically based and are able to identify 60%-70% of the positive cases, with 10% false positives [6][7]. The proposed method tries to improve the results of the current methods, especially in terms of understandability (important for the medical community), and it is
because of this reason that Fuzzy Logic (FL) is used. Fuzzy Logic approximates our way of thinking and offers enough variability of solutions to solve a problem of these characteristics.
The proposed method to achieve an accurate fuzzy classification system from an imbalanced dataset consists of using the RecBF/DDA algorithm to obtain a first set of Membership Functions (MF) from the dataset, recombining those functions using a method presented in this paper and called ReRecBF (Recombine RecBF) and, finally, obtaining a set of fuzzy rules from the recombined set of MFs and the dataset by means of a Genetic Algorithm (GA), as shown in Figure 1.
Figure 1. Our method’s schema. Solid line rectangles represent algorithms (RecBF/DDA, ReRecBF and GA) and dashed ones represent sets (dataset, fuzzy rules set and fuzzy variables set).
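A minimal skeleton of this pipeline is sketched below; the three stage functions are placeholders standing in for the algorithms detailed in sections 2.1-2.3, not implementations of them.

def build_fuzzy_classifier(dataset, output_variable):
    # Stage 1: learn a first set of membership functions as hyper-rectangles.
    membership_functions = recbf_dda(dataset, output_variable)
    # Stage 2: recombine them for imbalanced data (keep core regions,
    # triangles for the minor-class, discard poorly supported RecBFs).
    recombined_mfs = rerecbf(membership_functions, dataset)
    # Stage 3: search for a small rule set with a GA whose fitness is the g-means.
    rules = genetic_algorithm(dataset, recombined_mfs)
    return recombined_mfs, rules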
RecBF Networks [8] are a variation of RBF networks in which the representation is a set of hyper-rectangles belonging to the different classes of the system. Every dimension of a hyper-rectangle represents a membership function. Finally, a network is built, representing on every neuron the MFs found. The algorithm that produces the RecBFN from the dataset and the definition of the classes of the system is called RecBF/DDA. The set of MFs defined by this method is usually well suited for classification. These MFs will be used to obtain a set of rules by means of a Genetic Algorithm. But first a previous step is needed to transform the set of MFs (the output of RecBF/DDA), because they have to be adapted for working with imbalanced data. This task is done by the ReRecBF algorithm, which recombines the set of MFs.
Genetic Algorithms [11][14] are used to derive fuzzy rules from the set of recombined MFs. GAs are methods based on principles of natural selection and evolution for global searches [9][10].
Empirical results show that for the DS problem the accuracy is improved with respect to the method currently used. Furthermore, this method has been compared with other methods for imbalanced problems, using some UCI datasets. The results show that our method improves the accuracy in most cases.
This article is structured in several sections. First, section 2 explains our method to classify data and extract fuzzy rules from imbalanced datasets. After that, the empirical results of the method are analyzed in section 3. Finally, section 4 provides a summary and presents the conclusions of our work.
2. The method
This section explains the method used to obtain a fuzzy logic classification system for imbalanced datasets. The method follows these steps:
1. Use the RecBF/DDA algorithm to get a set of MFs from the dataset (section 2.1).
2. Recombine the MFs found in step 1 to obtain a new set of MFs (section 2.2).
3. Finally, apply a GA to find a small set of fuzzy rules that use the MFs obtained in step 2 and provide high classification accuracy (section 2.3).
2.1. Constructing a first set of membership functions by applying RecBFs
RecBF/DDA [8] constructs from a dataset a set of MFs that provide a set of linguistic labels which can afterwards be used to concisely express relationships and dependencies in the dataset. The algorithm creates hyper-rectangles (called RecBFs) belonging to every class, from the input dataset and a defined fuzzy output variable. From these RecBFs, a set of MFs is extracted for every variable. Each RecBF is characterized by a support region and a core region. In terms of MFs, a trapezoidal MF is composed of four points (a, b, c, d): the interval [a, d] defines the support region and [b, c] the core region. The RecBF/DDA algorithm is based on three procedures, which are executed for every training pattern: covered is executed when the support region of a RecBF covers the pattern, commit creates a new RecBF if the previous condition is false and, finally, the procedure shrink solves possible conflicts of the pattern with respect to RecBFs of a different class.
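For reference, a trapezoidal membership function defined by its four points can be evaluated as in the following generic sketch; this is standard fuzzy-set code, not the RecBF/DDA implementation itself.

def trapezoid(x, a, b, c, d):
    # [a, d] is the support region, [b, c] the core region (a <= b <= c <= d).
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0                      # inside the core region
    if x < b:
        return (x - a) / (b - a)        # rising edge of the support region
    return (d - x) / (d - c)            # falling edge of the support region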
Figure 2. An example of the execution of the RecBF/DDA algorithm for a 2-dimensional system. (1) shows 3 patterns from one class determining a RecBF, (2) shows 2 patterns from another class and how they cause the creation of a new RecBF and shrink the existing one, (3) and (4) show the different RecBFs created when a new pattern is included, varying only the x coordinate: outside and inside the core region of the other class. The x and y axes show the different MFs created.
Figure 2 shows how the patterns are passed to the RecBF/DDA algorithm and how the different RecBFs are created. But in our case study we work with imbalanced or highly imbalanced datasets, and to avoid granulation of the MFs of the minor-class it is absolutely necessary to generalize this class, because the main problem arises when the method has to classify/test patterns belonging to the minor-class that were not shown during the training process. Therefore, some aspects have to be taken into account when training the RecBF/DDA algorithm, like shrinking in only one dimension or training with the dataset sorted by class [20].
2.2. Recombining RecBFs
Since the shrinking method in the RecBF algorithm is performed in only one dimension, superposed MFs are the result of that algorithm (Figure 2(4)). These MFs are not adequate to obtain an accurate set of fuzzy rules from the imbalanced dataset. Tests on different datasets showed that the MFs obtained from the RecBF/DDA algorithm were not discriminant enough and that some transformations of these MFs were needed. On these grounds we propose to:
1. Take only the intervals given by the core regions.
2. Transform into triangles the trapezoids belonging to the minor-class.
3. If possible, discard the less representative RecBFs, that is, RecBFs whose core region includes less than 10% of the patterns of its class.
The core regions delimit the areas where the training patterns are, and the support regions (leaving aside the core ones) are the undefined areas between them. The set of superposed MFs belonging to a variable and a class is split into new ones, selecting only the areas defined by the core regions. Thanks to this operation, the MF areas are divided into sectors, which increases the number of patterns matched by the rules found.
Figure 3. (1) Trapezoids for major-class MFs and triangles for minor-class MFs. (2) Example of 2 overlapped trapezoids. (3) The three trapezoids obtained from the above if the MFs belong to the major-class and (4) the three triangles if they belong to the minor-class. Both (3) and (4) have MFs with the minimum and the maximum at the extremes of the variable.
Finally, the recombination procedure shown in Figure 3 consists of creating new MFs by splitting the existing ones by their core region (points b and c) and eliminating the old ones. For every new MF created, the support area is defined from the minimum to the maximum value of that variable. The MFs created from examples belonging to the minor-class are transformed into triangles (having their maximum at the average point of the original trapezoid), while the major-class ones remain trapezoids.
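A simplified sketch of this recombination is shown below. It assumes one list of MFs per variable, ignores the splitting of overlapping core regions into sectors and the discarding of under-represented RecBFs, and places the peak of each minor-class triangle at the centre of the original core region (one reading of "the average point" above).

def rerecbf(mfs, var_min, var_max, minor_class):
    # mfs: list of dicts {"a", "b", "c", "d", "class"} for a single variable.
    new_mfs = []
    for mf in mfs:
        b, c = mf["b"], mf["c"]              # core region of the original MF
        if mf["class"] == minor_class:
            peak = (b + c) / 2.0             # triangle for the minor-class
            new_mfs.append({"a": var_min, "b": peak, "c": peak,
                            "d": var_max, "class": mf["class"]})
        else:                                # trapezoid for the major-class
            new_mfs.append({"a": var_min, "b": b, "c": c,
                            "d": var_max, "class": mf["class"]})
    return new_mfs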
Thanks to these changes, the MFs are not so rigid and they have many more possibilities to participate in every rule. Thus we give the system more suitable tools to adapt itself to the given dataset [20]. The following subsection explains how the rule set is extracted by a GA once the MFs have been calculated by the previously specified procedure.
2.3. Obtaining the rule set: the GA
The codification of one chromosome of our GA is expressed as follows:
(x1,1, …, x1,n, x2,1, …, x2,n, …, xm,1, …, xm,n)

where n is the number of variables (input variables plus output variable) and m is the number of rules. xi,j is the value a gene can take, an integer value in the interval [0, n_fuzzysetsj], where n_fuzzysetsj is the number of MFs of the j-th variable. If xi,j has value 0, it expresses that this variable is not present in the rule. If the 0 value is assigned to the output fuzzy set, the rule is not taken into account when evaluating the rules. So the system is able to find a set of fewer than m rules, just by putting 0 in the output fuzzy set of a rule. Each block xi,1, …, xi,n corresponds to one rule of the system. The initial population is either generated randomly or taken from an initial set of rules. Every gene of a chromosome is generated randomly in the interval [0, n_fuzzysetsj], but some rules can be fixed for the entire simulation or given as an initial set of rules, if needed. The GA uses the g-means metric (1), suggested by Kubat et al. [1], as the fitness value for imbalanced datasets:
g = √(acc+ · acc−)     (1)

The g-means is the most common measure used to evaluate results in imbalanced datasets, where acc+ is the classification accuracy on the positive instances and acc− the accuracy on the negative ones.
The classification of the data in a fuzzy logic system depends on the shape of the MFs of the input and output variables, and on the rules. The MFs of the input variables are obtained by the method described in sections 2.1 and 2.2, the rules by means of this GA, and the output variable has to be defined from the beginning. So another important factor in the matching of the examples by the rule set is the shape of the output trapezoids. We can choose between symmetric output MFs (same area and shape) or non-symmetric ones (different shapes and areas). In this paper, 4 different types of asymmetry were tested.
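The chromosome handling and the fitness described in this section can be illustrated with the following sketch; the representation (a list of m gene vectors of length n) and the helper names are assumptions made for the example, not the authors' code.

import math, random

def random_chromosome(m, n, n_fuzzysets):
    # n_fuzzysets[j] is the number of MFs of the j-th variable (the last one is the output).
    return [[random.randint(0, n_fuzzysets[j]) for j in range(n)] for _ in range(m)]

def decode(chromosome):
    # Keep only the rules whose output gene is non-zero; gene 0 means the
    # variable is absent from the rule.
    rules = []
    for genes in chromosome:
        if genes[-1] == 0:
            continue
        antecedent = {j: g for j, g in enumerate(genes[:-1]) if g != 0}
        rules.append((antecedent, genes[-1]))
    return rules

def g_means(acc_pos, acc_neg):
    # Fitness for imbalanced data: geometric mean of the per-class accuracies, as in (1).
    return math.sqrt(acc_pos * acc_neg)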
3. Experimental results
3.1. Experimental results for Down’s syndrome detection
The presented method has been applied to the detection of Down’s syndrome in the second trimester of gestation. In this case, to compare results, the same dataset can be presented with either 5 variables (physical: mother’s age, mother’s weight, gestational age, and the measures of two hormones) or 3 variables (MoM: mother’s age and the two MoM values corresponding to both hormones), thanks to a reduction of variables used in medicine called MoM (Multiple of Median) [6]. The characteristics of these datasets are: 2 output classes and continuous input data. Each dataset (MoM or physical) is divided into 2 parts: 3071 patterns (3060 belonging to the major-class and just 11 belonging to the minor-class) for training and 4815 (4801 major-class and 14 minor-class) for testing. The imbalance ratio is 1:300, i.e. a highly imbalanced dataset.
Several classification methods have been tested against this dataset: Neural Networks (Backpropagation, BayesNN, etc.), classical methods of Fuzzy Rule Extraction and other methods, like decision trees or SVM [9][10][15]. The results were negative because of the treatment of the minor-class patterns: either neurons/rules/etc. became specialized in the cases belonging to the minor-class, or the minor-class patterns were ignored (the methods always tried to match the major-class patterns without taking the minor-class patterns into account).
Table 1 shows that the results are very similar to those obtained by the current methods (60%-70% TP and 10% FP). The best results are in the first three rows, which minimize the %FP. In these cases, no obtained RecBF was discarded. What does improve on the current methods is the very small number of rules found: between 4 and 6. This makes the system very understandable and hence very adequate for the task. The last two columns show the accuracy for the 3071-pattern dataset after testing with the whole set of training and test patterns.
Table 1. The best results for the DS problem. The first 2 columns express the accuracy of the test of 4815 patterns not included in the set of 3071 training patterns. The type of the set refers to the type of dataset: using MoM or not. The type of output can be symmetric or non-symmetric (section 2.3). Discarded RecBFs indicates whether the solution was found discarding the less representative RecBFs (section 2.2). The last 2 columns refer to the accuracy on the training + test dataset, training always with the stratified half of the patterns.
3.2. Comparison with other methods
Table 2 shows the comparison of the technique explained in this paper (ReRecBF) with two other methods for imbalanced datasets: KBA [3] and SDC [4]. For KBA, the authors divided the dataset into 7 parts: 6 for training and 1 for testing; the split was 7:3 in the SDC
case. The columns with the SVM [16] and SMOTE [2] methods are included because they can be compared with the rest of the results (the authors of both methods included them in their results). Both KBA and SDC are kernel methods, based on modifications of an SVM. In Table 2, the non-binary datasets were transformed into binary datasets by choosing one class as the negative one and grouping the rest of the classes as one; the number corresponds to the chosen class. The first four columns of the table describe the type of dataset, the following three columns are the g-means results for KBA and the next 3 columns the corresponding results for SDC. The last two columns give the g-means results of the method proposed in this paper and the number of rules of the systems found.
Table 2. Comparison of our method (ReRecBF) with the KBA & SDC methods for some UCI datasets, by means of the g-means measure. In the first column, the number of the class used as minor-class is included with the name of the dataset. Columns 2, 3 & 4 express the characteristics of every dataset. The last column indicates the number of rules of the fuzzy system found, showing that it is small.
These results show that for the glass, segmentation and hepatitis datasets our method is better in terms of the g-means metric, whilst for car and abalone it is worse (for abalone it improves on KBA). The system found always has a small number of rules, so it has a high probability of being understandable by a human expert.
4. Conclusions
We have presented a method to work with imbalanced or highly imbalanced datasets. The method has been shown to improve on previously used methods for the Down’s syndrome detection problem. Furthermore, it has been proven to be competitive with state-of-the-art classifiers for imbalanced datasets on UCI (Irvine) databases.
This method extracts the information in the dataset and expresses it as a fuzzy system. This information is expressed in a small number of rules, as can be seen in Tables 1 and 2. This means that the rules are not specialized in cases of the minor-class, but are distributed among both classes. Normally, we find more (or many more) rules belonging to the major-class than to the minor-class. This was one of the goals to be achieved by the method.
5. References
[1] Kubat, M. & Matwin, S. Addressing the Curse of Imbalanced Training Sets: One-Sided Selection. Proceedings of the 14th International Conference on Machine Learning (1997).
[2] Chawla, N., Bowyer, K., Hall, L. & Kegelmeyer, W. SMOTE: Synthetic Minority Oversampling Technique. Journal of Artificial Intelligence Research (2002), 16, 321-357.
[3] Wu, G. & E.Y. Chang. KBA: Kernel Boundary Alignment Considering Imbalanced Data Distribution. IEEE Transactions on Knowledge and Data Engineering (2005).
[4] Akbani, R., S. Kwek, N. Japkowicz. Applying Support Vector Machines to Imbalanced Datasets. Proceedings of the 2004 European Conference on Machine Learning (ECML'2004).
[5] Japkowicz, N. & S. Stephen. The Class Imbalance Problem: A Systematic Study. Intelligent Data Analysis, Volume 6 (2002), Number 5, pp. 429-450.
[6] Sabrià, J. Screening bioquímico del segundo trimestre. Nuestra experiencia. Progresos en Diagnóstico Prenatal, vol. 10 (1998), no. 4, pp. 147-153.
[7] Norgaard-Pedersen, L. et al. Maternal serum markers in screening for Down syndrome (1990).
[8] Berthold, M. & K.P. Huber. Constructing Fuzzy Graphs from Examples. Intelligent Data Analysis, 3 (1999), pp. 37-53.
[9] Wu, T.P. & S.M. Chen. A New Method for Constructing MFs and Fuzzy Rules from Training Examples. IEEE Transactions on Systems, Man and Cybernetics-Part B: Cybernetics, Vol. 29 (1999), No. 1, pp. 25-40.
[10] Wang, L.X. and J.M. Mendel. Generating Fuzzy Rules by Learning from Examples. IEEE Transactions on Systems, Man and Cybernetics, Vol. 22, No. 6 (1992), pp. 1414-1427.
[11] Pal, S.S.K. Bandyopadhyay and A. Murphy. Genetic Algorithms for Generation of Class Boundaries. IEEE Transactions on Systems, Man and Cybernetics-Part B: Cybernetics, Vol. 28, No. 6 (1998), pp. 816-828.
[12] Soler, V., J. Roig, M. Prim. Finding Exceptions to Rules in Fuzzy Rule Extraction. KES 2002, Knowledge-based Intelligent Information Engineering Systems, Part 2, pp. 1115-1119.
[13] Soler, V., J. Roig, M. Prim. A Study of GA Convergence Problems in the Iris Data Set. Third International NAISO Symposium on Engineering of Intelligent Systems (2002).
[14] Herrera, F., M. Lozano, J.L. Verdegay. A Learning Process for Fuzzy Control Rules using Genetic Algorithms. Technical Report #DECSAI-95108 (1995).
[15] Sordo, M. Neural Nets for Detection of Down's Syndrome. MSc Thesis, Department of Artificial Intelligence, University of Edinburgh, UK (1995).
[16] Cristianini, N. & Shawe-Taylor, J. An Introduction to Support Vector Machines and other kernel-based learning methods. Cambridge University Press, Cambridge, UK (2000).
[17] Chawla, N., N. Japkowicz, A. Kolcz (guest editors). Special Issue on Learning from Imbalanced Data Sets. ACM SIGKDD Explorations, Volume 6, Number 1, June 2004.
[18] Japkowicz, N. The Class Imbalance Problem: Significance and Strategies. Proceedings of the 2000 International Conference on Artificial Intelligence (IC-AI'2000), Volume 1, pp. 111-117.
[19] Kecman, V. Learning & Soft Computing, Support Vector Machines, Neural Networks and Fuzzy Logic Systems. The MIT Press, Cambridge, MA, 2001.
[20] Soler, V., J. Roig, M. Prim. Adapting Fuzzy Points for Very-Imbalanced Datasets. NAFIPS 2006. To be printed.
Artificial Intelligence Research and Development M. Polit et al. (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press.
Qualitative Induction Trees applied to the study of the financial rating 1
Llorenç Roselló a,2, Núria Agell b, Mónica Sánchez a, Francesc Prats a
a Departament de Matemàtica Aplicada II, Universitat Politècnica de Catalunya
b ESADE, Universitat Ramon Llull
Abstract: In this work Qualitative Induction Trees and the QUIN algorithm used to construct them are described, and their suitability for the prediction of the rating of companies is justified. Predicting the rating of a company requires a thorough knowledge of the ratios and values that indicate the company’s situation and, also, a deep understanding of the relationships between them and the main factors that can modify these values. In this paper concrete examples of application to the prediction of the variation of the rating are given, analyzing on one hand what this variation is and, on the other hand, explaining which are the most influential variables in this change.
Keywords: Learning algorithms, Orders of magnitude reasoning, Financial Rating, Qualitative Induction Trees
1. Introduction
In this paper a learning process is described to induce, from numerical information, a qualitative model that provides a causal interpretation of the relation between the variation of some input variables and the output variable. To obtain the model, the QUIN (QUalitative INduction) algorithm is used, which solves the problem of the automatic construction of qualitative models through inductive learning from numerical examples. What the QUIN algorithm learns is known as qualitative trees. These trees are similar to decision trees, with the difference that the leaves contain qualitative functional constraints inspired by the Q+ and Q− predicates introduced by Forbus [1]. A qualitative induction tree defines a partition of the attribute space into zones with a common behavior of the chosen variable. This algorithm and its implementation are due to Dorian Šuc and Ivan Bratko [2]. This qualitative model is suitable for analyzing the
1 This work has been partially financed by MEC (Ministerio de Educación y Ciencia): AURA project (TIN2005-08873-C02-01 and TIN2005-08873-C02-02).
2 Correspondence to: Llorenç Roselló. Tel.: +34 93 413 76 83; Fax: +34 93 413 77 01; E-mail: [email protected].
financial rating problem, because it is interesting to analyze how the variables that describe the state of a company at a given moment can modify the valuation of its rating.
The algorithmic complexity of QUIN makes large sets of examples with many attributes intractable. Since this is precisely a characteristic of the problem we are working with, it has been necessary to simplify the model that describes the state of a company, reducing the number of variables and grouping the sets of examples in various ways. The results obtained with the different groupings show certain common trends in the rating variation.
The structure of the paper is as follows: in section 2 a general description of qualitative induction trees and of the QUIN algorithm is given. In section 3 the general approach to the financial rating problem is explained and the experiments carried out are described, and in section 4 the obtained results are shown. Finally, the conclusions and new directions for solving the problem are presented.
2. Qualitative Induction Trees and QUIN
2.1. Qualitative Induction Trees
To introduce qualitative induction trees, let us suppose that we have N examples and that each example is described by n + 1 variables, where the first n, X1, …, Xn, are called attributes and the last one, Xn+1, is called the class. We want to learn zones of the space that present a common behavior of the class variable. These zones are described by means of qualitative induction trees. A qualitative tree is a binary tree whose internal nodes, called splits, contain an attribute and a value of this attribute and thus define partitions of the attribute space, and whose leaves are qualitatively constrained functions, denoted from now on by QCF. A QCF defines a qualitative constraint on the class variable in the following way: if F : R^n → R is a map that associates a value of the class variable to each n-tuple of attributes, a QCF associated with the function F and with an m-tuple (x1, …, xm) ∈ R^m (m ≤ n) is denoted by F^(s1,…,sm)(x1, …, xm), where si ∈ {+,−}, and it means that

    sign(ΔF/Δxi) = si,  with Δxj = 0 for all j ≠ i.

In other words, si = + means that F is a strictly increasing function with respect to the variable xi, and strictly decreasing if si = −. A simple example follows to better illustrate what a qualitative induction tree is:
Figure 1: The graph of F(x, y) = x² − y² and its qualitative induction tree. The tree splits on x ≤ 0 at the root and on y ≤ 0 in each subtree, with leaves F^(−,+)(x, y), F^(−,−)(x, y), F^(+,+)(x, y) and F^(+,−)(x, y).
It is necessary to remark that in general we do not have an explicit expression of the
F function.
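Written as code, the tree of Figure 1 amounts to a couple of nested tests whose leaves are qualitative constraints rather than numbers; the leaf labels follow from the partial derivatives of F(x, y) = x² − y², and the snippet is only an illustration, not QUIN output.

def qualitative_tree(x, y):
    # F = x^2 - y^2 is decreasing in x for x <= 0 and increasing otherwise;
    # it is increasing in y for y <= 0 and decreasing otherwise.
    if x <= 0:
        return "F-,+(x, y)" if y <= 0 else "F-,-(x, y)"
    return "F+,+(x, y)" if y <= 0 else "F+,-(x, y)"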
To decide which QCF best fits the examples, the qualitative change qi of a variable xi is used, where qi ∈ {pos, neg, zero}: qi = pos if Δxi > 0, qi = neg if Δxi < 0, and qi = zero if Δxi = 0.
Let us define the QCF-prediction P(si, qi), with si ∈ {+,−}, as

    P(si, qi) = pos if the predicted change ΔF/Δxi is positive, neg if it is negative, and zero otherwise.

Then, for each pair of examples e, f a qualitative change vector is formed, where each vector component q(e,f),i is defined by

    q(e,f),i = pos if x_f,i > x_e,i + ε,  neg if x_f,i < x_e,i − ε,  and zero otherwise,

where x_f,i is the i-th component of f. The parameter ε is introduced to handle cases with tiny variations.
Once these concepts are introduced, we can explain the method used to choose the QCF that best describes the data (later we will see what is understood by ‘better’). Given the set C of N examples described above, all the possible QCFs describing the variation of the class variable in terms of the attributes are formed. It follows that it is necessary to generate
    2·C(n,1) + 2²·C(n,2) + … + 2ⁿ·C(n,n) = 3ⁿ − 1 ≈ 3ⁿ

QCFs, where C(n,i) is the binomial coefficient (the factor 2 corresponds to the two possible signs and the binomial coefficient to the different ways of choosing the variables on which the function depends).
A QCF F^(s1,…,sn) that describes the behavior of the class Xn+1 is said to be consistent with a vector of qualitative changes if the QCF-predictions P(si, qn+1) are all positive or some are zero, but none is negative; in other words, when the vector does not contradict the QCF. There is an ambiguity in the prediction of the QCF with respect to a vector of qualitative changes when QCF-predictions of type pos and neg both appear, or when all the predictions are zero. Finally, a QCF is inconsistent with a vector of qualitative changes if it is neither consistent nor ambiguous. For each QCF a cost function is defined which, from the number of consistent and ambiguous qualitative change vectors (so this has to be checked against all the qualitative change vectors of the problem), gives a measure of the suitability of this function to describe our data.
2.2. The QUIN algorithm
Once the concept of qualitative induction tree has been introduced, the QUIN algorithm can be presented in a very simple form. QUIN constructs the qualitative induction tree with a top-down greedy algorithm similar to ID3. Given a set of examples, it calculates the cost function of each of the QCFs found for each candidate partition and chooses the partition that minimizes the cost-error of the tree. The cost-error of a leaf is the cost function of the QCF in that leaf. The cost-error of a node is the cost-error of each of its subtrees plus the cost of the split.
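The bookkeeping described in this section can be sketched as follows; the consistency and ambiguity rules in the code are a simplified reading of the definitions above, and the exact conditions used by QUIN are given in [2].

def qualitative_change(v_old, v_new, eps=1e-6):
    if v_new > v_old + eps:
        return "pos"
    if v_new < v_old - eps:
        return "neg"
    return "zero"

def qcf_status(signs, attribute_changes, class_change):
    # signs: {attribute: "+" or "-"} defining the QCF.
    # attribute_changes: {attribute: qualitative change} for a pair of examples.
    # class_change: qualitative change of the class variable for the same pair.
    predictions = set()
    for attr, sign in signs.items():
        q = attribute_changes[attr]
        if q == "zero":
            predictions.add("zero")
        elif (sign == "+") == (q == "pos"):
            predictions.add("pos")
        else:
            predictions.add("neg")
    if predictions <= {"zero"} or ("pos" in predictions and "neg" in predictions):
        return "ambiguous"
    predicted = "pos" if "pos" in predictions else "neg"
    if class_change == predicted:
        return "consistent"
    if class_change == "zero":
        return "ambiguous"
    return "inconsistent"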
3. Application to financial rating
3.1. Financial rating
The rating is a qualified assessment of the risk of the bonds issued by a company. Specialized rating agencies, such as Standard & Poor’s, classify firms according to their level of risk, using both quantitative and qualitative information to assign ratings to issues. Predicting the rating of a firm therefore requires a thorough knowledge of the ratios and values that indicate the firm’s situation and, also, a deep understanding of the relationships between them and the main factors that can alter these values.
The processes employed by these agencies are highly complex. The decision technologies involved are not based on purely numeric models. Experts use the information given by the financial data, as well as some qualitative variables, such as the industry and the country or countries where the firm operates; at the same time, they forecast the firm’s growth possibilities and its competitive position. Finally, they use an abstract global evaluation based on their own expertise to determine the rating. Standard & Poor’s ratings are labeled AAA, AA, A, BBB, BB, B, CCC, CC, C and D. From left to right these rankings go from high to low credit quality, i.e., from high to low capacity of the firm to repay its debt.
3.2. Prediction of the change of rating
The interest of this paper does not lie in classifying a firm once its descriptive variables are known (this problem has already been tackled by several authors, see for example [3]), but in finding out which variables can make the classification vary and in which way this influence takes place. To do this we have the financial results presented by 1177 companies worldwide and the classification granted by Standard & Poor’s for the year 2003. A set C of examples is given; the attributes are the 23 variables considered to assign the rating, and the class variable (qualitative) is the rating itself. With the QUIN algorithm we have tried to learn the qualitative tree associated with this problem.
3.3. Experimentation carried out
When carrying out the experimentation, a problem arose due to the algorithmic complexity of QUIN: neither all the examples nor all the attributes could be used simultaneously. The authors themselves state [4] that “QUIN cannot efficiently handle large learning sets, neither in terms of examples nor the attributes”. For this reason, in this work we had:
1st) To limit the number of examples by grouping the companies by country: Canada (83 examples), France (13 examples), United Kingdom (59 examples) and Japan (26 examples) were taken, together with a random sample of 86 companies. The case of the companies of the United States is at the moment intractable because it has an excessive number of examples.
2nd) To limit to five the number of variables to treat. This limitation could be made in a coherent way, since the 23 variables considered are grouped into sets, each describing a certain financial characteristic. In particular we work with ratios of financing, activity, liquidity, yield and size. Therefore we can say that, in spite of these limitations on the number of attributes, each company is determined by these five variables.
Altogether five experiments have been carried out, corresponding to the companies of Canada, France, the United Kingdom and Japan and to the random sample.
4. Results
From the QUIN algorithm we obtain five qualitative induction trees, so that in each of them we can see how the rating varies and which are the important variables for this variation. The results are shown in Figures 2, 3, 4, 5 and 6. The meaning of the variables is the following: A is a ratio that measures the activity, G measures the yield, L is the ratio that measures the liquidity, T is the indicator of the size (sales in thousands of US dollars) and F is the financing ratio. The coincidences between the trees have been highlighted. It can be remarked that the trees obtained for the different countries, although not identical, show certain common characteristics that give us useful information about the problem. For each country a different tree is obtained, but there are remarkable coincidences in three cases: the random sample, Canada and the United Kingdom. In all three cases the rating depends negatively on the size of the company and on the yield. The order considered is the order of Standard & Poor’s, i.e. from less to more risk: AAA ≺ AA ≺ … ≺ D. It can also be observed that in two cases the size of the company by itself influences the rating negatively: companies with large sales volumes are less risky.
Figure 2: Tree obtained from a random sample of companies. Splits: A ≤ 0.98, A ≤ 0.57, L ≤ 2.70, T ≤ 1.97; leaf QCFs: M−(T), M−(A), M−,−(T, G), M−,−(G, T), M−,−(T, G).
Figure 3: Tree obtained from Canadian companies. Splits: L ≤ 1.79, T ≤ 5.87×10⁵, T ≤ 1.67×10⁶, A ≤ 0.80; leaf QCFs: M−,+,−(G, A, F), M−,+,−(G, A, T), M−,+(G, F), M−,−(T, G), M−(A).
Figure 4: Tree obtained from United Kingdom companies. Splits: G ≤ −7.46, A ≤ 0.75, A ≤ 1.08; leaf QCFs: M−,−(T, F), M+,−,+(A, T, F), M+(F).
Figure 5: Tree obtained from French companies. Splits: F ≤ 1.23, L ≤ −19.75; leaf QCFs: M−(T), M−(T), M−,−,+(L, G, F), M−,−(T, G).
Figure 6: Tree obtained from Japanese companies. Split: L ≤ 7.11; leaf QCFs: M−,−(G, L), M+(F).
5. Conclusions and future work
In this paper qualitative induction trees and the QUIN algorithm have been introduced, and how the rating varies has been analyzed from a quantitative data set. The results have provided useful information on how the rating may depend on some variables, together with some similarities between countries. As future work we want to continue research on:
• Studying the possibility of improving the algorithmic complexity of QUIN so that it can work with larger and more complex problems. In this case we had to test QUIN with relatively small samples, because the algorithm needs an excessive amount of time for samples with more than 100 examples.
• Extending qualitative induction trees and the QUIN algorithm to work with problems where the output variable is not numerical but ordinal. This would allow defining several levels of growth and decrease and working with the different orders of magnitude of the output variable.
• Following the work initiated by Šuc to obtain a qualitative function that not only gives the direction of the variations but also the order of magnitude by which the output variable moves as a function of the attributes.
References
[1] Forbus, K.: Qualitative process theory. Artificial Intelligence (1984) 24:85-168.
[2] Šuc, D., Bratko, I.: Induction of Qualitative Trees. In L. De Raedt and P. Flach, editors, Proc. 12th European Conference on Machine Learning, pages 442-453. Springer, 2001. Freiburg, Germany.
[3] Rovira, X., Agell, N., Sánchez, M., Prats, F., Parra, X.: Qualitative Radial Basis Function Networks Applied to Financial Credit Risk Prediction. Lecture Notes in Computer Science, p. 127. Springer, 2001. Freiburg, Germany.
[4] Zabkar, J., Vladusic, D., Zabkar, R., Cemas, D., Šuc, D., Bratko, I.: Using Qualitative Constraints in Ozone Prediction. Proc. 19th Int. Workshop on Qualitative Reasoning QR05, pp. 149-156.
[5] Šuc, D., Vladusic, D., Bratko, I.: Qualitative Faithful Quantitative Prediction. Proc. 18th Int. Joint Conf. on Artificial Intelligence, IJCAI 2003, pp. 1052-1057.
Artificial Intelligence Research and Development M. Polit et al. (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press.
Tactical modularity for evolutionary animats Ricardo A. Téllez and Cecilio Angulo
[email protected], [email protected]
Technical University of Catalonia, Jordi Girona 1-3, Barcelona
Abstract: In this paper we use a massive modular architecture for the generation of complex behaviours in complex robots within the evolutionary robotics framework. We define two different ways of introducing modularity in neural controllers using evolutionary techniques, which we call strategic and tactical modularity, show at which modular level each one acts, and show how they can be combined to generate a completely modular controller for a neural-network-based animat. Implementation results are presented for the garbage collector problem using a Khepera robot and compared with previous results from other researchers.
Keywords. autonomous robot control, neural networks, modularity, evolutionary robotics
1. Introduction
In the framework of evolutionary robotics, the generation of complex behaviours in complex robots is still an open question, far from being completely solved. Even though some complex behaviours have lately been implemented in simple wheeled robots [13], it is not clear how those implementations will scale up to more complex robots, understanding by complex robots those with a large number of sensors and actuators that must be coordinated in order to accomplish a task or generate a behaviour. In our research we focus on this problem by using evolutionary techniques for the generation of complex behaviours in complex robots. We think that the solution to this problem must be grounded in the divide and conquer principle, that is, the use and generation of modular controllers. This is indeed not a new approach and it has been adopted by other researchers [5][10][4][2]. However, in all those modular implementations, modules were created at the level of behaviours, that is, a module was created for each behaviour required by the robot. In this paper, we introduce the idea of modularisation at the level of the robot device and promote its use together with modularisation at the level of behaviours. We propose that the use of both types of modularisation in a single controller may be required in order to achieve complex behaviours in complex robots: modularisation at the two levels may allow the generation of really complex behaviours.
This paper is divided into the following sections: section 2 reviews related work; section 3 describes the two different types of modularisation; section 4 describes the experimental framework used; section 5 describes the experiments performed and their results; section 6 discusses the results obtained; and the final section 7 presents the conclusions and points towards future work.
2. Modularity in robot control
In the design of modular neural controllers, most of the work has been influenced by Jacobs et al.'s mixture of experts [8]. Their basic idea was to create a system composed of several networks, each one in charge of handling a subset of the complete set of cases required to solve the problem. Their system has been widely used and improved upon by several authors [3][1] in different classification problems. In evolutionary robotics, Tani and Nolfi [16] improved Jacobs' architecture by drawing on a mixture of experts whose segmentation of competences was produced by observing the changes in the dynamical structure of the sensory-motor flow. In more recent work, Paine and Tani [14] studied how a hierarchy of neural modules could be generated automatically, and Lara et al. [9] separately evolved two controllers that performed two different behaviours and then evolved the connections between them in order to generate a single controller capable of performing both behaviours. In a similar way, Nolfi [12] made one of the first attempts to use modular neural controllers for the control of an autonomous robot in a quite complex problem. He compared the performances of different types of control architectures, including two modular ones. In the first one, the controller is composed of two modules, each one in charge of one specific and well differentiated task. The division into those two modules is performed by using a distal description of the behaviours, that is, a division into behaviours described from the point of view of an external observer. In the second modular architecture, Nolfi proposes a control architecture where modules are created based on a proximal description of behaviours, that is, the description of the behaviour from the point of view of the sensory-motor system of the agent. The work of Nolfi was later improved in [15] by showing how a distal modularity could arise from an evolutionary process, and extended by Ziemke et al. [20] by using other types of neural control architectures. Except for Nolfi's proximal division, all the works described conceive the implementation of behaviours as composed of sub-behaviours. Hence, the idea in all of them is almost the same: to divide the global behaviour into a set of simpler sub-behaviours which, when combined, make the robot perform the required global behaviour. This division is sometimes made by hand and sometimes in an automatic way. Then, each sub-behaviour is implemented by a single monolithic neural controller that takes as inputs the sensor values and as outputs the commands for the actuators. Even though all those works were successful with those monolithic implementations, their results were only applied to simple wheeled robots, and it is difficult to see how this type of neural controller could control more complex robots with several sensors and actuators. Because all those modularisation strategies focus on behaviour division, we say that they implement modularity at the strategic level.
3. Strategic and tactical modularity
From game theory [11], we think of strategy as the overall group of behaviours (or sub-goals) that a robot requires for the accomplishment of a goal, and of tactics as the actual means used to attain each of those sub-goals. Then, we define strategic modularity as the modular approach that identifies which sub-behaviours are required for an animat in order to obtain the global behaviour.
This modularity can be performed from a distal or a proximal point of view, but it is a division that identifies a list of sub-behaviours. In contrast, we define tactical modularity as the one that creates the sub-modules
required to implement a given sub-behaviour. Tactical modularity has to be implemented for each of the sub-behaviours by using the actual devices that make the animat act, that is, its sensors and actuators. In tactical modularity, the subdivision is performed at the level of the elements that are actually involved in the accomplishment of the sub-task. To our knowledge, all of the works based on divide and conquer principles focus their division at the strategic modularity level, that is, on how to divide the global behaviour into its sub-behaviours (or how to divide the task at hand into the required sub-tasks). Then, they implement each of those sub-behaviours by means of a monolithic neural controller. However, in this paper we propose the additional use of tactical modularity, where an additional decomposition is made at the level of the devices of the animat. In some cases it is possible to implement the whole behaviour required for the animat by only using tactical modularity; the use of one type of modularity does not imply the use of the other. In fact, we propose the use of both modularity types in the same controller as the required solution for complex behaviours in complex robots. Strategic modularity would decide which sub-behaviours are required for the complex global task, and tactical modularity would implement each of them on the complex robot. We will not discuss here how to implement strategic modularity for a given robot and task, since it is not our goal. In principle, any of the modularisation methods used in the works described above is valid for its integration with our method of implementing tactical modularity.
3.1. Implementing tactical modularity
Tactical modularity should create modularity at the level of the robot devices which have to implement a required behaviour. This means that, once a sub-behaviour required for the animat has been decided, tactical modularity has to implement it using the sensors and actuators at hand. In this paper, we implement tactical modularity by creating a completely distributed controller composed of small processing modules around each of the robot's sensors and actuators. We call this module an intelligent hardware unit (IHU) and its schematic is shown in figure 1a. Every IHU is composed of a sensor or an actuator and an artificial neural network (ANN) that processes the information of its associated device (received sensor information for sensors, commands sent to the actuator for actuators). This means that the neural net is the one that decides which commands must be sent to the actuator, or how a value received from a sensor must be interpreted. All IHUs are interconnected with each other in order to be aware of what the other IHUs are doing. So, the net is also in charge of deciding what to say to the other elements, as well as of interpreting what the others are saying. The structure of an IHU can be seen in figure 1a, and figure 1b shows its application to a simple robotic system with two sensors and two actuators.
Figure 1. (a) IHU schematics (left) and (b) application of four IHUs to the control of a simple robot composed of two sensors and two motors (right)
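As an illustration of this wiring, the following Python sketch (ours, not the authors' code) builds one small single-layer network per device and feeds it the device's own reading plus the previous outputs of all IHUs; the exact input layout and the tanh activation are assumptions made for the sketch.

    import numpy as np

    class IHU:
        """Intelligent Hardware Unit: one small net attached to one sensor or actuator."""
        def __init__(self, n_inputs, rng):
            self.w = rng.uniform(-1.0, 1.0, n_inputs + 1)  # +1 for a bias term

        def activate(self, inputs):
            return np.tanh(self.w[:-1] @ inputs + self.w[-1])

    class DistributedController:
        """Fully interconnected set of IHUs: each unit sees its own device value
        and the previous output of every unit, and emits one value per step."""
        def __init__(self, n_sensors, n_actuators, seed=0):
            rng = np.random.default_rng(seed)
            self.n_sensors = n_sensors
            self.n_units = n_sensors + n_actuators
            # each IHU reads its own device value plus the outputs of all units
            self.units = [IHU(1 + self.n_units, rng) for _ in range(self.n_units)]
            self.outputs = np.zeros(self.n_units)

        def step(self, sensor_values):
            # actuator "device values" are the commands emitted on the previous step
            device_values = np.concatenate([sensor_values, self.outputs[self.n_sensors:]])
            new_outputs = np.array([
                unit.activate(np.concatenate(([device_values[k]], self.outputs)))
                for k, unit in enumerate(self.units)
            ])
            self.outputs = new_outputs
            return new_outputs[self.n_sensors:]   # commands for the actuators

    # Example: the garbage-collector setup described below (7 sensors, 4 actuators)
    controller = DistributedController(n_sensors=7, n_actuators=4)
    commands = controller.step(np.zeros(7))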
Through the use of a neuro-evolutionary algorithm, the IHU modules learn how to cooperate and coordinate among themselves, and how to control their associated elements, allowing the whole robot to perform the required sub-behaviour. The selected algorithm is ESP (Enforced Sub-Populations) [6][7], which has been proved to produce good results on distributed controllers [19]. By using such an algorithm it is possible to teach the networks how they must cooperate to achieve a common goal (i.e. the sub-behaviour to implement), when every network has its own and different view of the whole system. A chromosome is generated for each IHU's network, coding in a direct way the weights of the network connections, and the whole group of neural nets is evolved at the same time with direct interaction with the environment.
4. Experimental framework
In order to test the theoretical approach presented in the previous section, we use a Khepera robot simulation as the test-bed for our experiments. The experiments consist of the implementation of a tactical modular control system for the Khepera robot performing a cleaning task, and of a comparison of its performance with other controllers developed previously by other researchers. The selected test-bed task is called the garbage collector, as in [12]. In this task, a Khepera robot is placed inside an arena surrounded by walls, where the robot should look for any of the cylinders randomly distributed in the space, grasp it, and take it out of the arena. Even though the robot used is not very complex, the task selected for it is. The garbage collector behaviour requires that the robot completely change its behaviour based on a single sensor value change. When the robot does not have a stick in the gripper, its behaviour has to avoid walls, look for sticks, and approach them in order to pick them up. When the robot carries a stick, its behaviour has to change to the opposite: avoiding other sticks and approaching walls in order to release the stick out of the arena.
Figure 2. The first three architectures implemented: (a) a standard feedforward architecture, (b) a modular architecture with two sub-modules, and (c) an emergent modular architecture
4.1. The Khepera robot and its environment
All the experiments reported for the Khepera robot are done in simulation. As simulator, we selected the freely available YAST simulator for the Khepera robot. It includes the simulation of the Khepera gripper, which is the turret capable of grasping objects. The Khepera gripper is composed of an arm that can be moved through any angle from vertical to horizontal, and two gripper fingers that can assume an open or closed position. The gripper also includes a sensor that indicates the presence of an object between the fingers. Only the six front infrared sensors of the robot were used, as well as the gripper sensor. As actuators, the robot has two motors (left and right), but it is also possible to control the position of the gripper arm and the status of the gripper fingers (open or closed). The control of the gripper is done by means of two procedures: the first procedure, when activated, moves the arm down, closes the gripper fingers and moves the arm up again, in order to pick up a stick; the second procedure moves the arm down, opens the gripper fingers, and moves the arm up again, in order to release a stick. The simulation setup is composed of a rectangular arena of 60x35 cm, surrounded by walls, which contains five cylindrical garbage sticks. Each stick has a diameter of 2.3 cm and is randomly positioned inside the arena at every new epoch, as is the robot. Experiments consist of 15 epochs of 200 time steps each, in which an evolved controller is tested on the task. The duration of each time step is 100 ms. Each epoch ends after the 200 steps or after a stick has been released out of the arena.
4.2. Neural architectures
Five different architectures were tested with the described setup. The first three architectures implemented are shown in figure 2. The first one, labelled (a), is a simple feedforward network with a hidden layer of four units. This architecture has seven inputs, corresponding to the six infrared sensors and the gripper sensor, and four outputs that correspond to the two wheel motors and to the two gripper control procedures (pick up stick and release stick). The second architecture, labelled (b), is a modular architecture composed of two control modules. The modules were designed by hand from a distal point of view. One module controls the behaviour of the robot when it is looking for a stick, and the second module controls the robot when it carries one. The third architecture (c) is the emergent modular architecture as defined by Nolfi [12]. In this case, there are two different modules with seven inputs (the seven sensor values) and four pairs of output neurons. The four output pairs of the first module code for the speed of the motors (left and right) and the triggering of the pick-up and release procedures. The outputs of the second module determine which of the two competing neurons of the first module has actual control over its corresponding effector. These three architectures were used by Nolfi in his work and are
implemented here in order to verify that our setup produces results similar to those obtained by Nolfi and hence allows us to compare results. The two other architectures are the ones that use tactical modularity. The first one, which we will call architecture (d), is a direct application of tactical modularity to the garbage collector task. In this case, only one global behaviour is required for the task (that is, the garbage collector behaviour), and we implement that behaviour using tactical modularity, that is, creating one IHU element for each device involved. Since eleven devices are involved (seven sensors and four actuators), we use eleven IHUs for the construction of the controller. No figure is provided for this architecture because of its complexity, but it would look like figure 1b with eleven devices. We create an IHU for each of the infrared sensors and the gripper sensor, and four IHUs for the left and right motors and the two gripper procedures. Each IHU is implemented by a feedforward neural net with eleven inputs, no hidden units, and one output. The second architecture we will call architecture (e). In this case, we apply the two modular approaches to solve the control problem. In a first stage, we use strategic modularity to define the required sub-behaviours for the global task. We manually decide to divide the global behaviour of the animat into two sub-behaviours: the first sub-behaviour should look for a stick while avoiding walls and, once one is detected, pick it up. The second sub-behaviour should be activated once a stick is picked up, and it has to avoid other sticks and go to a wall in order to leave the carried stick out of the arena. In the second stage, each of those strategic modules is implemented using tactical modularity of the same type as in the previous architecture (d), but now focused on those more limited sub-tasks. This means that we obtain two different strategic modules, each one implementing one sub-behaviour by means of 11 tactical modules. Training of each strategic module is performed separately and, once they are evolved, they are joined in the final global controller.
5. Results
We evolved all the architectures using our evolutionary setup. The fitness function rewarded those controllers that were able to release one stick out of the arena. Controllers that were able to pick up one stick were also rewarded, with a lower fitness. We included an additional term in the fitness function that decreased the fitness when the stick was released inside the arena. Like Nolfi, we implemented a mechanism that artificially added a stick in front of the robot each time it picked up one, in order to increase the situations where the robot encounters an obstacle in front of it while carrying a stick. The evolutionary process was performed ten times for each architecture, and the results presented here show the averaged fitness obtained from those ten runs. The results, plotted in figure 3a, show that architectures (a), (c) and (d) were able to reach the maximal performance of releasing a stick in every epoch, the only difference being the number of generations that each architecture required to reach that maximal performance. However, architecture (b) was not able to reach maximal performance.
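The fitness computation described above can be summarised in a few lines. This sketch is ours and the weight values are placeholders; the paper only fixes the ordering (releasing a stick outside the arena is rewarded most, picking one up is rewarded less, and releasing it inside the arena is penalised).

    def epoch_fitness(picked_up, released_outside, released_inside,
                      w_release=1.0, w_pick=0.25, w_penalty=0.5):
        """Score one epoch of the garbage-collector task.

        The arguments are event counts observed during the epoch. The weights are
        illustrative placeholders reflecting only the relative ordering in the text.
        """
        return (w_release * released_outside
                + w_pick * picked_up
                - w_penalty * released_inside)

    def controller_fitness(epoch_stats):
        """Average the epoch scores over the 15 epochs used in the experiments."""
        return sum(epoch_fitness(*stats) for stats in epoch_stats) / len(epoch_stats)

    # e.g. 15 epochs, each reported as (picked_up, released_outside, released_inside)
    stats = [(1, 1, 0)] * 15
    print(controller_fitness(stats))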
Figure 3. (a) Evolution of the fitness obtained by the best controller of each architecture (left). (b) Evolution of the fitness obtained by the best controller of each strategic module of the architecture (e) (middle). (c) Number of times that the best controller of each architecture was able to correctly recollect the five sticks without displaying any error (right).
These results are consistent with the ones reported by Nolfi, and show that architectures not based on distal modularisation obtain better results. Evolution of architecture (d) was performed under the same conditions as for (a), (b) and (c) and, as can be seen in figure 3a, took a larger number of generations to evolve the same behaviour, with about the same performance as (a) and (c). On the other hand, we evolved architecture (e) in two stages. In the first stage, we evolved the strategic module in charge of avoiding walls and picking up a stick. In the second stage, we evolved the strategic module in charge of avoiding sticks and approaching walls to release the stick. Both strategic modules were evolved using tactical modularity. Figure 3b shows the evolution of fitness required for the controllers to generate those behaviours in both cases. Once the two strategic modules were evolved, they were used in a global controller which switches between them to govern the robot depending on the status of the gripper sensor. The final behaviour obtained correctly performed the garbage collector task.
6. Discussion
The results obtained in the previous section show that, except for architecture (b), all the architectures are capable of a garbage collector behaviour with a 100% success rate. Only the evolutionary time required by each architecture shows a difference between them, since their behaviours are very similar, including errors, such as mistaking a stick for a wall, that sometimes appear in the final controllers. We conclude, then, that the strategic and tactical approaches are as good as the others on the garbage collector task. Having shown that all the architectures are able to perform the garbage collector behaviour, the next question is what the point of using strategic and tactical modularity in a robot controller is. The reasons are two. First, the use of tactical modularisation allows the introduction of new sensors and actuators at any time. This point has been proved in previous work [18], where additional sensors and actuators of an Aibo robot were introduced at different steps of the evolutionary process. To our knowledge, no other architecture is able to do that. Second, the results obtained with tactical modularisation are more robust. In order to prove that, we performed an additional test. We compared the performance obtained by all the architectures when performing the garbage collector task. For this, the best neural nets obtained for each architecture in each run were tested in the arena, to see how many times the controllers were able to pick up the 5 sticks and release them outside the arena within 5000 cycles without displaying any incorrect behaviour (that
is, crashing into walls, trying to grasp a wall, or trying to release a stick on top of another). The results are shown in figure 3c.
7. Conclusions
We have proposed two types of modularisation required for the generation of complex behaviours in complex robots. We have shown the differences between them and how they can be combined for the generation of complex behaviours in robots. Modularisation at the level of behaviour has already been widely used by other researchers. The real novelty of this research is the use of a new level of modularisation focused on the robot devices (tactical modularity), and the demonstration that the use of both types of modularisation together leads to more complex behaviours in more complex robots. Future work includes the application of strategic and tactical modularity to complex behaviours in complex robots. This work has already successfully started with the application of tactical modularity to an Aibo robot for the generation of a standing-up behaviour [17] and a walking behaviour [18].
References
[1] G. Auda and M. Kamel. CMNN: Cooperative modular neural networks for pattern recognition. In Pattern Recognition in Practice V Conference, 1997.
[2] F. Azam. Biologically inspired modular neural networks. PhD thesis, Virginia Polytechnic Institute and State University, 2000.
[3] K. Chen and H. Chi. A modular neural network architecture for pattern classification based on different feature sets. International Journal of Neural Systems, 9(6):563-581, 1999.
[4] I. Davis. A Modular Neural Network Approach to Autonomous Navigation. PhD thesis, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, April 1996.
[5] A. Di Ferdinando, R. Calabretta and D. Parisi. Evolving modular architectures for neural networks. In Proceedings of the Sixth Neural Computation and Psychology Workshop: Evolution, Learning and Development, 2000.
[6] F. Gomez and R. Miikkulainen. Incremental evolution of complex general behavior. Technical Report AI96-248, University of Texas, 1996.
[7] F. Gómez and R. Miikkulainen. Solving non-Markovian control tasks with neuroevolution. In Proceedings of IJCAI99, 1999.
[8] R. Jacobs, M. Jordan, S.J. Nowlan, and G.E. Hinton. Adaptive mixture of local experts. Neural Computation, 3:79-87, 1991.
[9] B. Lara, M. Hülse, and F. Pasemann. Evolving neuro-modules and their interfaces to control autonomous robots. In Proceedings of the 5th World Multi-conference on Systems, Cybernetics and Informatics, 2001.
[10] P. Manoonpong, F. Pasemann, and J. Fischer. Modular neural control for a reactive behavior of walking machines. In Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, 2005.
[11] R. McCain. Game Theory: A Non-Technical Introduction to the Analysis of Strategy. South-Western College Pub, 2003.
[12] S. Nolfi. Using emergent modularity to develop control systems for mobile robots. Adaptive Behavior, 5(3-4):343-364, 1997.
[13] S. Nolfi. Evolutionary robotics: Looking forward. Connection Science, 4:223-225, 2004.
[14] R.W. Paine and J. Tani. How hierarchical control self-organizes in artificial adaptive systems. Adaptive Behavior, 13(3):211-225, 2005.
[15] R. Calabretta, S. Nolfi, D. Parisi and G.P. Wagner. Emergence of functional modularity in robots. In R. Pfeifer, B. Blumberg, J.-A. Meyer and S.W. Wilson, editors, Proceedings of From Animals to Animats 5, pages 497-504. MIT Press, 1998.
[16] J. Tani and S. Nolfi. Learning to perceive the world as articulated: an approach for hierarchical learning in sensory-motor systems. Neural Networks, 12:1131-1141, 1999.
[17] R. Téllez, C. Angulo, and D. Pardo. Highly modular architecture for the general control of autonomous robots. In Proceedings of the 8th International Work-Conference on Artificial Neural Networks, 2005.
[18] R. Téllez, C. Angulo, and D. Pardo. Evolving the walking behaviour of a 12 DOF quadruped using a distributed neural architecture. In Proceedings of the 2nd International Workshop on Biologically Inspired Approaches to Advanced Information Technology, 2006.
[19] H. Yong and R. Miikkulainen. Cooperative coevolution of multiagent systems. Technical Report AI01-287, Department of Computer Sciences, University of Texas, 2001.
[20] T. Ziemke, J. Carlsson and M. Bodén. An experimental comparison of weight evolution in neural control architectures for a 'garbage-collecting' Khepera robot. In F. Löffler, A. Mondada and U. Rückert, editors, Experiments with the Mini-Robot Khepera - Proceedings of the 1st International Khepera Workshop, pages 31-40, 1999.
An Optimization Method for the Data Space Partition Obtained by Classification Techniques for the Monitoring of Dynamic Processes
Isaza C. a,+, Aguilar-Martin J. a, Le Lann M.V. a,b, Aguilar J. c, Rios-Bolivar A. c
{cisaza, aguilar, mvlelann}@laas.fr, {aguilar, ilich}@ula.ve
a LAAS-CNRS, 7 Avenue du Colonel Roche, 31077 Toulouse, France
b Université de Toulouse, INSA, 135 Avenue de Rangueil, 31077 Toulouse, France
c Universidad de los Andes, CEMISID, Mérida, Venezuela
Abstract. In this paper, a new method for the automatic optimization of the classes obtained by application of fuzzy classification techniques is presented. We propose the automatic validation and adjustment of the partition obtained. The new approach is independent of the type of fuzzy classification technique and can be applied in the supervision of complex processes. Keywords. Fuzzy classification, Operating mode detection, Validation and adjustment of classes.
Introduction
Complex processes (e.g. biochemical processes) have highly nonlinear dynamics and time-dependent parameters. For their supervision, a great number of variables must be considered. Therefore, it is quite difficult to obtain a precise mathematical model to detect abnormal states. The characteristics of the system operating modes must be extracted from process data by training mechanisms, and the knowledge of the process expert must be taken into account. Using classification methods it is possible to obtain information from the process data to identify the system states. The performance of classification methods for process diagnosis is well known, and they are implemented in a vast variety of applications. However, the performance is strongly related to an accurate selection of the parameters of each algorithm [2][3][4][7][12]. The proposed approach is the optimization of the data space partition obtained by fuzzy classification methods. The objectives are to decrease the dependence on the training algorithm parameters and to deal with the problem of the a priori determination of the number of clusters for the classification algorithms whenever necessary. In Section 1, systems diagnosis by classification methods is introduced. Section 2 describes the proposed approach, followed by an illustrative example in Section 3. Finally, the discussion and conclusions are presented.
+ Professor at Universidad de Antioquia, Medellín, Colombia
1. Diagnosis and Classification Methods
The diagnosis of complex systems consists of detecting the operating modes, including the faults. Using classification techniques it is possible to extract, from the process historical data, the information which characterizes the system states and faults. Each class corresponds to a system state or to a fault of the process. The use of artificial intelligence techniques and fuzzy logic in systems diagnosis allows interpretable results to be obtained and offers useful information for decision making when a fault occurs. In a first step (learning), the objective is to find the characteristics of the behaviour which make it possible to differentiate the system states (each one is associated with a class) (see Figure 1). In a posterior step, data recognition allows us to identify on line the current system state. A data pre-treatment is necessary and includes traditional operations of automatic control and signal processing. In all cases, a vector that summarizes the accessible information is provided for monitoring, and the supervisor must decide, by using class recognition, what the current functional state of the process is. In order to optimize the obtained partition, we propose to include in the learning phase a step for the validation and adjustment of clusters. The idea is not to restart the classification algorithm. The partition is evaluated in terms of the separation and the compactness of the clusters, deciding from the result whether or not an update of the partition is necessary. If that is the case, the two most similar groups are merged (iteratively), until a space partition with sufficiently compact and separate classes is found. For classification algorithms where an a priori determination of the number of clusters is necessary, it is possible to find the adequate number of clusters by using the proposed optimization technique.
Figure 1. Diagnosis Scheme
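As a minimal illustration of the recognition step (a sketch of ours, not taken from the paper), on-line identification of the current functional state amounts to evaluating the membership degree of the incoming observation vector in every learned class and reporting the best one:

    def recognize_state(observation, membership_functions):
        """Return the label of the class with the highest membership degree.

        membership_functions maps a state label to a function returning the
        membership degree of an observation in that class (learned beforehand).
        """
        degrees = {label: f(observation) for label, f in membership_functions.items()}
        return max(degrees, key=degrees.get), degrees

    # Hypothetical example with two already-learned one-dimensional classes
    states = {
        "normal": lambda x: max(0.0, 1.0 - abs(x - 1.0)),
        "fault":  lambda x: max(0.0, 1.0 - abs(x - 3.0)),
    }
    print(recognize_state(1.2, states))   # ('normal', {...})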
2. Optimization Method for the Data Space Partition
In the literature there are several methods to optimize the data space partition. There are approaches proposed for clustering techniques based on distance, an example
being the technique suggested by Kaymak [7]. This technique is an extension of the CCM algorithm proposed by Krishnapuram [2]. Both of them require a geometric representation of the classes, and this kind of representation is not a common characteristic of classification methods. Lurette [12] proposed the union of the groups obtained by an adaptive neural network. For the merging, the classes whose prototype membership degree is larger than a threshold are considered compatible. The threshold is chosen a priori; therefore this approach includes another parameter to be chosen by the user. Lee et al. [11] proposed an iterative method where the quality of the partition is determined by the entropy and the partition optimization is obtained by restarting the classification method. Our approach includes two steps: the partition validation and the clusters update. The partition quality is measured by a validation index, from which it is decided whether or not it is necessary to modify the partition. For the clusters update, the fuzzy similarity of the classes is calculated and, at each iteration, the two most similar classes are merged. Figure 2 shows the general schema of the proposed method. The validation part is the evaluation of the partition quality; the clusters adjustment and classes compatibility form the partition update step.
Figure 2. Optimization Method Scheme
2.1. Partition Validation
To evaluate the partition quality many authors [2][7][4][15] have suggested a large number of indexes. The scatter matrices of Fisher's Linear Discriminant (FLD) can be used to measure the separation and compactness of the clusters produced by crisp clustering techniques. De Franco et al. proposed the Extended Fisher Linear Discriminant (EFLD) [4]. The EFLD is a fuzzy extension of the FLD and is characterized by a minimal computing time; however, if there is a large number of clusters the performance decreases. The Inter Class Contrast (ICC) [4] index is an EFLD extension. The ICC has a performance similar to the Compactness and Separation Validity function (S coefficient) proposed by Xie [15]. The S coefficient is recognized for its good performance in fuzzy partition evaluation [2], but the ICC has the same performance and its computational time is smaller than that of Xie's index.
In the proposed method the Inter Class Contrast is used for the partition quality analysis. The measure is only calculated from the fuzzy partition matrix U and the data matrix X.
\[
U^{T}=\begin{bmatrix}
\mu_{11} & \mu_{12} & \cdots & \mu_{1N}\\
\vdots & \vdots & \ddots & \vdots\\
\mu_{K1} & \mu_{K2} & \cdots & \mu_{KN}
\end{bmatrix}
\quad\text{and}\quad
X^{T}=\begin{bmatrix}\hat{x}_{1}\\ \hat{x}_{2}\\ \vdots\\ \hat{x}_{N}\end{bmatrix}^{T}
=\begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1N}\\
\vdots & \vdots & \ddots & \vdots\\
x_{d1} & x_{d2} & \cdots & x_{dN}
\end{bmatrix}
\qquad(1)
\]
μ_{kn} is the membership degree of the n-th data item in class k, and d is the number of variables of the data vector. The ICC value is obtained by
\[
I_{cc}=\frac{s_{be}\cdot D_{min}}{K\cdot N}
\quad\text{where}\quad
s_{be}=\sum_{k=1}^{K}\sum_{n=1}^{N}\mu_{kn}\,(m_{ke}-m)(m_{ke}-m)^{T}
\qquad(2)
\]
where N is the number of training data, K the number of clusters, D_min the minimal distance between groups, s_be the value associated with the partition dispersion, m_ke the prototype of class k, and m the mean of the whole data set (3):
\[
m_{ke}=\frac{\sum_{n=1}^{N}u_{kn}\,\hat{x}_{n}}{\sum_{n=1}^{N}u_{kn}}
\quad\text{and}\quad
m=\frac{1}{N}\sum_{n=1}^{N}\hat{x}_{n}
\qquad(3)
\]
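The index can be transcribed almost directly from equations (2)-(3). The Python sketch below is ours (not from the paper): it assumes U is a K x N membership matrix and X an N x d data matrix, uses the scalar (trace) form of the dispersion term, and takes the Euclidean distance between class prototypes for D_min.

    import numpy as np

    def icc(U, X):
        """Inter Class Contrast of a fuzzy partition.

        U : (K, N) fuzzy partition matrix, U[k, n] = membership of sample n in class k.
        X : (N, d) data matrix.
        """
        K, N = U.shape
        m = X.mean(axis=0)                                   # global mean, eq. (3)
        prototypes = (U @ X) / U.sum(axis=1, keepdims=True)  # class prototypes m_k, eq. (3)
        diff = prototypes - m
        # scalar form of s_be: sum_k sum_n mu_kn * ||m_k - m||^2   (eq. (2))
        s_be = float((U.sum(axis=1) * (diff ** 2).sum(axis=1)).sum())
        # minimal distance between class prototypes
        d2 = ((prototypes[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
        d_min = np.sqrt(d2[~np.eye(K, dtype=bool)].min()) if K > 1 else 0.0
        return s_be * d_min / (K * N)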
The highest value of the ICC is obtained for the fuzzy partition with the highest cluster compactness and the widest separation among classes. The algorithm ends when the current ICC value is smaller than the previous one.
2.2. Clusters Adjustment
To improve the partition, we determine the compatibility of the clusters by using the concept of fuzzy similarity. At each iteration, the two most similar clusters are merged. By using an S-norm we obtain the new fuzzy partition matrix U(T+1). The iterations continue until the partition quality index (ICC) decreases. The result is the fuzzy partition matrix U(T) which has the highest ICC; this matrix represents the new partition. There are several recognized approaches to evaluate cluster similarity. In general these approaches complement clustering algorithms based on the data distance [2][7]; for this reason the cluster analysis is based on the geometrical characteristics of the groups. To apply the optimization method to several fuzzy classification techniques, we propose to work with an interclass similarity measure inspired by fuzzy entropy. Each class is interpreted as a fuzzy set defined by the membership degrees. In this way it is possible to work with high-dimensionality (many variables) problems without deteriorating the performance of the optimization method.
Several approaches use entropy to measure fuzzy information [1]. Fuzzy entropy (an extension of Shannon's entropy) has been applied in different areas. The formulas used to calculate the entropy vary according to the application (i.e. the Zadeh, De Luca and Termini, Stake and Stake, and Kaufmann formulas). The fuzzy entropy (fuzziness) is the quantity of fuzzy information. Fuzzy entropy is useful to determine the similarity between fuzzy clusters: clusters with the same quantity of information can be considered as similar groups. However, to obtain a similarity formula that is useful for diagnosis, it is necessary to analyse the relative distance between groups; it is therefore not sufficient to calculate the entropy of each cluster individually. Several fuzzy entropy formulas are based on the comparison between a set and its complement [1][13]. Yager proposed one formula bearing in mind that the intersection of a fuzzy set A and its complement A^c is not null [16]. This fuzzy entropy for the set A is:
\[
H_{A}=1-\frac{M(A,A^{c})}{N}\qquad(4)
\]
where M(A, A^c) is a distance between A and A^c. Kosko [9] first employed the fuzzy set geometry and the distances between clusters to propose an entropy formula. Later he proposed equation (5) as an extension of the initial entropy formula, expressed as a function of the set cardinality:
\[
H_{A}=\frac{M(A\cap A^{c})}{M(A\cup A^{c})}
\quad\text{and}\quad
M(A)=\sum_{n=1}^{N}\mu_{An}
\qquad(5)
\]
From the Kosko and Yager formulas several cluster similarity measures have been proposed [5][11][13], but these approaches do not take into account the distance between clusters. We replace A^c by B; the resulting expression then corresponds to the relation between the two sets A and B, and it allows a fuzzy subset separation index to be defined.
Theorem: Equation (6) defines a separation index between two fuzzy subsets A and B:
\[
d^{*}(A,B)=1-\frac{M[A\cap B]}{M[A\cup B]}
=1-\frac{\sum_{n=1}^{N}(\mu_{An}\wedge\mu_{Bn})}{\sum_{n=1}^{N}(\mu_{An}\vee\mu_{Bn})}
\qquad(6)
\]
where μ_{An} ∧ μ_{Bn} = min(μ_{An}, μ_{Bn}) and μ_{An} ∨ μ_{Bn} = max(μ_{An}, μ_{Bn}). d*(A,B) is an ultrametric measure on the set P(X) of the fuzzy parts of the data X. A more general approach would replace min by another T-norm and max by its dual S-norm.
Proof: It is necessary to prove that the metric properties are satisfied. By construction it is easy to note that 0 ≤ d*(A,B) ≤ 1, d*(A,A) ≡ 0 and d*(A,B) = d*(B,A). In addition, for three fuzzy subsets A, B and C, d*(A,C) ≤ max[d*(A,B), d*(B,C)]. This shows that
d*(A,B) is an ultrametric measure. Therefore it is possible to obtain a measurement of the compatibility between two fuzzy sets A and B using (7):
\[
\gamma(A,B)=\frac{\sum_{n=1}^{N}(\mu_{An}\wedge\mu_{Bn})}{\sum_{n=1}^{N}(\mu_{An}\vee\mu_{Bn})}
\qquad(7)
\]
When the two most compatible clusters are found, the partition U(T+1) is updated by replacing those clusters by their union. The union of the fuzzy sets is made by using the S-norm. It should be noticed that our approach is independent of the type of fuzzy classification method, because the cluster analysis is made by using the matrix U exclusively, which is the common output of classification techniques. Besides, there are no additional parameters to be chosen by the user. The optimization performance does not change when there is a large number of variables; it depends only on the number of clusters K. Moreover, it is not necessary to restart the classification algorithm at each iteration.
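Putting the validation and adjustment steps together, the following Python sketch shows one possible reading of the whole loop; it is not the authors' implementation. gamma is the compatibility measure of equation (7), the union is the max S-norm, and quality is any partition-quality index such as the ICC (higher is better).

    import numpy as np
    from itertools import combinations

    def gamma(mu_a, mu_b):
        """Compatibility of two fuzzy classes, eq. (7): sum of min over sum of max."""
        return np.minimum(mu_a, mu_b).sum() / np.maximum(mu_a, mu_b).sum()

    def optimize_partition(U, quality):
        """Iteratively merge the two most compatible classes while quality improves.

        U       : (K, N) fuzzy partition matrix.
        quality : callable returning a partition-quality index (higher is better).
        """
        best_U, best_q = U, quality(U)
        while best_U.shape[0] > 2:
            # most compatible pair of classes
            a, b = max(combinations(range(best_U.shape[0]), 2),
                       key=lambda ab: gamma(best_U[ab[0]], best_U[ab[1]]))
            merged = np.maximum(best_U[a], best_U[b])        # union via max S-norm
            keep = [k for k in range(best_U.shape[0]) if k not in (a, b)]
            candidate = np.vstack([best_U[keep], merged])
            q = quality(candidate)
            if q <= best_q:          # quality decreased: keep the previous partition
                break
            best_U, best_q = candidate, q
        return best_U

    # usage with the ICC sketch above: optimize_partition(U, lambda part: icc(part, X))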
3. Illustrative Example
The results obtained by the proposed method to optimize a partition are presented. The initial data space partition has been obtained by using the LAMDA (Learning Algorithm for Multivariate Data Analysis) algorithm [6][8][14]. The LAMDA algorithm may produce many different classifications for the same data set when its parameters are modified. In this example, non-supervised learning has been used. For illustration purposes we worked with a two-variable example. The data are obtained from a second-order linear system with noisy input-output and parameters changing in time. An on-line estimator of the two parameters is used to detect the instantaneous model of the system. The problem consists in identifying the 4 system operating modes and the noise data. Figure 3a shows the non-optimal classification obtained by LAMDA. It can be noticed that there are more clusters than the 5 expected. A first similarity group corresponds to clusters 4 and 5, and a second one to clusters 1, 2 and 7.
Figure 3. (a) Initial classification obtained by LAMDA; (b) optimized partition
Figure 3(b) shows the classification obtained by using the proposed optimization method. This partition identifies the 4 states and the noise data. Figure 4 shows the automatic variation of the classes at each iteration.
Figure 4. Iterations results
Starting from a non-optimal classification, a space partition which yields the number of clusters useful for identifying the system states is automatically obtained (at iteration 4). The optimal number of clusters is 5, which corresponds to one class for each state and a group associated with the noise data.
Figure 5. ICC Index Evolution
At each iteration the algorithm merges the two most similar classes, until the partition quality (ICC index) decreases. At that moment the algorithm ends and the result corresponds to the preceding iteration. The evolution of the ICC index (see Section 2.1) is shown in Figure 5.
4. Conclusions and outlook
A new methodology for fuzzy partition optimization which is independent of the classification method has been proposed. It is based on a new cluster similarity index which only uses the membership degrees. The method is useful when a geometrical representation of the clusters is not available. The approach is conceived as a complement to classification methods and is useful for the identification of faults in complex systems. It reduces the dependence on the training algorithm parameters and on the initialization. If the data analysis method and the optimization step respond faster than the system sampling time, it is possible to consider unsupervised, on-line fault detection. Future work is devoted to applying this methodology to the on-line diagnosis of a potable water production plant and of a new biological water treatment which uses activated sludge and immersed membrane filtration (the BIOSEP solution developed by the VEOLIA company).
Acknowledgements
Financial support for this research from PCP-Automation is gratefully acknowledged.
References
[1] AL-SHARHAN S., KARRAY F., GUEAIEB W., BASIR O., "Fuzzy Entropy: a Brief Survey", IEEE International Fuzzy Systems Conference, 2001.
[2] BEZDEK J., KELLER J., KRISHNAPURAM R., PAL N., "Fuzzy Models and Algorithms for Pattern Recognition and Image Processing", Springer, 2005.
[3] CASIMIR R., "Diagnostic des Défauts des Machines Asynchrones par Reconnaissance des Formes", Thèse de Doctorat de l'Ecole Centrale de Lyon, Lyon, France, 2003.
[4] DE FRANCO C., SILVA L., OLIVEIRA A., "A Validity Measure for Hard and Fuzzy Clustering Derived from Fisher's Linear Discriminant", International Conference on Fuzzy Systems, 2002.
[5] GOUSHUN H., YUNSHENG L., "New Subsethood Measures and Similarity Measures of Fuzzy Sets", IEEE International Conference on Communications, Circuits and Systems, 2005.
[6] ISAZA C., KEMPOWSKY T., AGUILAR-MARTIN J., GAUTHIER A., "Qualitative Data Classification Using LAMDA and other Soft-Computing Methods", Recent Advances in Artificial Intelligence Research and Development, IOS Press, 2004.
[7] KAYMAK U., BABUSKA R., "Compatible Cluster Merging for Fuzzy Modelling", IEEE Fuzzy Systems, 1995.
[8] KEMPOWSKY T., AGUILAR-MARTIN J., SUBIAS A., LE LANN M.V., "Classification Tool Based on Interactivity Between Expertise and Self-Learning Techniques", SAFEPROCESS'2003, USA, June 2003.
[9] KOSKO B., "Fuzzy Entropy and Conditioning", Inform. Sci., vol. 40, pp. 165-174, 1986.
[10] BEZDEK J., KELLER J., KRISHNAPURAM R., PAL N., "Fuzzy Models and Algorithms for Pattern Recognition and Image Processing", Springer, 2005.
[11] LEE H., CHEN C., CHEN J., JOU Y., "An Efficient Fuzzy Classifier with Feature Selection Based on Fuzzy Entropy", IEEE Transactions on Systems, Man and Cybernetics, Vol. 31, No. 3, June 2001.
[12] LURETTE C., "Développement d'une technique neuronale auto-adaptative pour la classification dynamique de données évolutives. Application à la supervision d'une presse hydraulique", Thèse de Doctorat de l'Université des Sciences et Technologies de Lille, Lille, France, 2003.
[13] QING M., LI T.-R., "Some Properties and New Formulae of Fuzzy Entropy", IEEE International Conference on Networking, Sensing & Control, 2004.
[14] WAISSMAN J., "Construction d'un modèle comportemental pour la supervision de procédés: application à une station de traitement des eaux", Thèse de Doctorat de l'INP de Toulouse, 2000.
[15] XIE X.L., BENI G., "A Validity Measure for Fuzzy Clustering", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, No. 8, 1991.
[16] YAGER R., "On the Measure of Fuzziness and Negation, Part 1: Membership in the Unit Interval", International Journal of General Systems, 1979.
2. Reasoning
A general approach for qualitative reasoning models based on intervals Ester MARTÍNEZ, M. Teresa ESCRIG Universitat Jaume I Engineering and Computer Science Department Campus Riu Sec, Castellón, E-12071 (Spain)
ABSTRACT
In qualitative spatial and temporal reasoning we can distinguish between comparing magnitudes of concepts and naming magnitudes of concepts. Qualitative models are defined by: (1) a representation model, (2) the basic step of the inference process and (3) the complete inference process. We present a general algorithm to solve the representation model and the basic step of the inference process of qualitative models based on intervals. The general model is based on the definition of two algorithms: the qualitative addition and the qualitative difference. In this article, the general model is instantiated for two known spatio-temporal concepts, naming distance and qualitative velocity, and a novel one, qualitative acceleration.
Introduction
The most widely used way to model commonsense reasoning in the spatial domain is by means of qualitative models. In fact, qualitative reasoning may help to express poorly defined problem situations, support the solution process and lead to a better interpretation of the final results [1]. A qualitative representation can be defined [2] as a representation which "makes only as many distinctions as necessary to identify objects, events, situations, etc. in a given context". Actually, most of the authors who do research in the spatial reasoning field argue directly that "much of the knowledge about time and space is qualitative in nature" [3]: although images obtained from perception are quantitative, in the sense that specific locations on the retina are stimulated by light of a specific spectrum of wavelength and intensity, the knowledge about a retinal image is qualitative, that is, only comparisons between such features can be performed. Humans are not very good at determining exact lengths, volumes, etc., whereas they can easily perform context-dependent comparisons [2]. Therefore, qualitative approaches have been extensively used for modelling physical phenomena and temporal reasoning. The development of any qualitative model consists of the following steps:
• The representation of the corresponding magnitude, which in turn is composed of:
o The set of relations between objects and the number of objects involved in each relation. A relation might be binary when only two objects are involved (object b with respect to (wrt) object a, that is, b wrt a) or ternary when three objects are involved (c wrt ab). Note that the number of relations will depend on the level of granularity. Granularity is not a question of scale; it is a matter of precision, i.e. the amount of information which is included in the representation. Therefore, a coarse level of granularity will provide more abstract information whereas a fine level of granularity will provide detailed information.
o The operations which can be defined. If the relationship is binary, only one operation can be defined: inverse (a wrt b). But if the relation is ternary, it is possible to define five different operations: inverse (c wrt ba), homing (a wrt bc), homing-inverse (a wrt cb), shortcut (b wrt ac) and shortcut-inverse (b wrt ca).
• The reasoning process, which is divided into two parts:
o The Basic Step of the Inference Process (BSIP), which can be defined as "given two relationships, b wrt a reference system and c wrt another reference system, where b is included in the second reference system, we want to obtain the relationship c wrt the first reference system" (see figure 1). In spatial reasoning, the BSIP is usually implemented by tables.
o The Complete Inference Process (CIP): it consists of repeating the BSIP as many times as possible with the initial information and the information provided by some BSIP until no more information can be inferred. This step is necessary when more relationships among several spatial landmarks are provided.
Figure 1. The Basic Step of the Inference Process
In addition, in qualitative spatial and temporal reasoning we can distinguish between comparing magnitudes of concepts and naming magnitudes of concepts. Comparing versus naming refers to the usual distinction between “relative” and “absolute” magnitudes which is not accurate because both are relative: “relative” is related to each magnitude to be compared and “absolute” is related to an underlying scale. For comparing magnitudes, it is necessary to have at least two magnitudes of some concept to be compared. An example of comparing magnitudes is the qualitative treatment of compared distances [4]. The compared distance relationship is a ternary relationship: “the object B is {any compared distance relationship} than the object A, from the same point of view (PV)”. The comparison depends on the orientation of both objects with respect to the point of view, due to the fact that two objects A and B may be at any orientation with respect to the point of view. In particular, two extreme positions are when both objects A and B are at the same orientation with respect to the
same point of view and when objects A and B are in the opposite orientation with respect to the point of view. If we take into account both extreme orientations, then the resulting reasoning for all other orientations is included in both extremes. Therefore, we represent B [Rel]_PV^S A, that is, the compared distance from PV to A and from PV to B when A and B are in the same orientation, and B [Rel]_PV^O A, that is, the compared distance from PV to A and from PV to B when A and B are in the opposite orientation (see figure 2). When we do not need to distinguish the orientation, we will write B [Rel]_PV^{S|O} A.
Figure 2. An example of the compared distances to be represented.
On the other hand, for naming magnitudes, it is necessary to divide the magnitude of any concept into intervals (sharply separated or overlapping, depending on the context; see figure 3), where qualitative labels are assigned to each interval. In fact, the definition of the reference system (RS) for each magnitude is common to all models of this kind. Therefore, the RS will contain at least the following two components:
• A set of qualitative symbols in increasing order, Q = {q_0, q_1, ..., q_n}, where q_0 is the magnitude closest to the reference object (RO) and q_n is the one furthest away, going to infinity. This set defines the different areas into which the workspace is divided. The number of areas will depend on the granularity of the task. By cognitive considerations, the acceptance areas have been chosen in increasing size.
• A set of intervals, Δr = {δ_0, δ_1, ..., δ_n}, which describes the acceptance areas. Each symbol q_i of the previous set is associated with an acceptance area δ_i. The acceptance areas are defined quantitatively by means of a set of closed or open intervals delimited by two extreme points: the initial point of interval j, δ_j^i, and the ending point of interval j, δ_j^e. Thus the structure of relations above is rewritten as:
\[
\Delta r=\{[\delta_0^{i},\delta_0^{e}],[\delta_1^{i},\delta_1^{e}],\ldots,[\delta_n^{i},\delta_n^{e}]\}
\]
Figure 3. An example of structure relations where: (a) acceptance areas are sharply separated and (b) acceptance areas are overlapped.
Our claim in this paper is to explain how the BSIP can be solved in any qualitative spatial reasoning model based on intervals by means of an algorithm based on the qualitative addition and subtraction of qualitative intervals.
1. General algorithm
Models based on intervals for some spatial aspects involve a magnitude and a direction. In the inference process it is important to take into account the relative orientation between the objects involved, that is, whether they are in:
• the same orientation
• the opposite orientation
• any orientation
The extreme cases are: (1) when the objects involved are in the same orientation and (2) when they are in the opposite orientation. Which extreme case corresponds to the upper bound of the result and which one to the lower bound will depend on the type of values of the magnitude we are working with (positive or negative) and on the definition of the BSIP for each magnitude. However, we know that when the objects involved are in the same orientation, the resulting area (qualitative label) will be obtained by the qualitative sum, and when they are in the opposite orientation it will be calculated by the qualitative difference. The solution for the rest of the orientations is a disjunction of qualitative labels ranging from the label obtained for the same orientation to the one obtained for the opposite orientation.
1.1 Preliminary definitions
We define the acceptance area of a particular magnitude, AcAr(magnitude), as:
\[
AcAr(magnitude)=\delta_j \;\Leftrightarrow\;
\begin{cases}
\delta_j^{i}\leq magnitude \leq \delta_j^{e} & \text{if } \delta_j \text{ contains positive values}\\
\delta_j^{e}\leq magnitude \leq \delta_j^{i} & \text{otherwise}
\end{cases}
\]
The acceptance area of b wrt a is δ_i because its value is between δ_i^i and δ_i^e, and the sum of consecutive intervals from the origin to δ_i, that is, the distance from the origin to δ_i, is called Δ_i. The acceptance area of c wrt b is δ_j and the corresponding sum of consecutive intervals is Δ_j. Again, the definition of Δ_i (or Δ_j) depends on the defined intervals. If we suppose that δ^+ is the origin for positive values and δ^- is the corresponding origin for negative ones, then we can mathematically define the concept of Δ_i as:
\[
\Delta_i=
\begin{cases}
UB(\delta_i)-\delta^{+} & \text{if } \delta_i \text{ contains positive values}\\
\left|\,LB(\delta_i)-\delta^{-}\right| & \text{otherwise}
\end{cases}
\qquad \forall i=0,1,\ldots,n
\]
where UB refers to the upper bound of the qualitative interval between brackets and, in a similar way, LB indicates its lower bound. The algorithms presented in the following sections take advantage of the commutativity property. Thus, given three objects a, b and c, with b wrt a = q_i and c wrt b = q_j, we assume in the following algorithms that q_i ≥ q_j, without losing generality, thanks to the commutativity of the composition. Notice that there are points which can be shared by two acceptance areas because either the acceptance areas are defined as closed intervals and the initial point of an interval coincides with the final point of the previous one, or the acceptance areas are overlapped, in which case more points will be shared by the two regions. The result of
reasoning with regions of this kind can introduce imprecision. This imprecision will be handled by providing a disjunction in the result: if a special object can be found in the qualitative region q_i or q_{i+1}, we express it by listing both possibilities {q_i, q_{i+1}}.
1.2 The qualitative sum
The qualitative sum of two regions, q_i and q_j, belonging to the structure of relations Δr provides as result a range of qualitative distance regions (see figure 4a). The corresponding range of acceptance areas is given by the following formula:
\[
AcAr(\Delta_{i-1}+\Delta_{j-1})\;..\;AcAr(\Delta_i+\Delta_j)
\]
Algorithm 1 computes the upper and lower bounds of this range of qualitative regions. The procedure Build_Result constructs the list of qualitative regions from the lower bound (LB) to the upper bound (UB). If the acceptance area related to q_i, δ_i, is much bigger than the sum of acceptance areas from q_0 to q_j, Δ_j, then the absorption rule is applied to check whether Δ_j can be disregarded with respect to δ_i. This rule has been stated in [5] for a given difference p, with 1 ≤ p ≤ (j − i), between the intervals, that is:
\[
(ord(q_j)-ord(q_i))\geq p \;\rightarrow\; \delta_j \pm \delta_i \approx \delta_j, \qquad \text{given } ord(q_k)=k+1.
\]
We argue that the disjunction of qualitative regions can be improved (i.e. fewer symbols are obtained in the disjunction as the result of the inference process) if an interval δ_j is considered to be much bigger than another δ_i (δ_j >> δ_i) when the first interval δ_j is, for example, at least ten times bigger than the other δ_i. This is arbitrary and might be context dependent:
\[
(\delta_j \gg \delta_i) \;\Leftrightarrow\; \delta_j \geq 10\times\delta_i \;\rightarrow\; \delta_j \pm \delta_i \approx \delta_j
\]
Thus, it allows the absorption rule to act in the case of increasing-size intervals when the last interval, which goes to infinity, is considered as part of the result. Therefore, when the absorption rule holds, the upper qualitative region of the resulting list is q_i. Otherwise, if q_i is the last qualitative distinction made in the structure of relations used (Δr), q_i will be the upper bound. If none of these conditions is true, then the upper bound is computed by calling the procedure Find_UB_qualitative_sum. This procedure stops when it comes to consider the last qualitative region of the structure of relations or when the sum of acceptance areas from the origin to q_j (including q_j), that is, Δ_j, is less than or equal to the sum of acceptance areas starting from δ_{i+1}. This is the same as saying that the upper bound is the minimal region q_k such that the relation Δ_j ≤ δ_{i+1} + δ_{i+2} + ... + δ_k is satisfied.
a)
proc qualitative_sum(q_i, q_j, Δr, var Result)
  if Δ_j << δ_i then UB := q_i
  else if i = max then UB := q_i
  else Find_UB_qualitative_sum(Δ_j, δ_{i+1}, Δr, i+1, UB) endif endif
  Find_LB_qualitative_sum(Δ_{j-1}, δ_i, Δr, i, LB)
  Build_Result(LB, UB, Result)
end

proc Find_UB_qualitative_sum(Δ_j, Δ_inc, Δr, k, var UB)
  if k = max then UB := q_k
  else if Δ_j ≤ Δ_inc then UB := q_k
  else Find_UB_qualitative_sum(Δ_j, Δ_inc + δ_{k+1}, Δr, k+1, UB) endif endif
end

proc Find_LB_qualitative_sum(Δ_{j-1}, Δ_inc, Δr, k, var LB)
  if k = max then LB := q_k
  else if Δ_{j-1} ≤ Δ_inc then LB := q_k
  else Find_LB_qualitative_sum(Δ_{j-1}, Δ_inc + δ_{k+1}, Δr, k+1, LB) endif endif
end

b)
proc qualitative_difference(q_i, q_j, Δr, var Result)
  if Δ_i ≥ Δ_j then
    if Δ_j << δ_i then LB := q_i
    else if i = 0 then LB := q_i
    else Find_LB_qualitative_difference(Δ_j, δ_{i-1}, Δr, i-1, LB) endif endif
    Find_UB_qualitative_difference(Δ_{j-1}, δ_i, Δr, i, UB)
  else
    if Δ_i << δ_j then LB := q_j
    else if j = 0 then LB := q_j
    else Find_LB_qualitative_difference(Δ_i, δ_{j-1}, Δr, j-1, LB) endif endif
    Find_UB_qualitative_difference(Δ_{i-1}, δ_j, Δr, j, UB)
  endif
  Build_Result(LB, UB, Result)
end

proc Find_UB_qualitative_difference(Δ_{j-1}, Δ_inc, Δr, k, var UB)
  if k = 0 then UB := q_k
  else if Δ_{j-1} ≤ Δ_inc then UB := q_k
  else Find_UB_qualitative_difference(Δ_{j-1}, Δ_inc + δ_{k-1}, Δr, k-1, UB) endif endif
end

proc Find_LB_qualitative_difference(Δ_j, Δ_inc, Δr, k, var LB)
  if k = 0 then LB := q_k
  else if Δ_j ≤ Δ_inc then LB := q_k
  else Find_LB_qualitative_difference(Δ_j, Δ_inc + δ_{k-1}, Δr, k-1, LB) endif endif
end
Algorithm 1. a) The qualitative sum algorithm; b) The qualitative difference algorithm.
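For readers who prefer running code to pseudocode, the following Python sketch computes the same range of regions as Algorithm 1a, but directly from the acceptance-area formula given above rather than through the recursive Find_UB/Find_LB procedures; the helper names (cumulative, region_of) are our own, and the factor of ten follows the absorption rule stated in the text.

```python
from math import inf

# Widths of the acceptance areas delta_0 .. delta_n of the structure of relations,
# here the coarse distance example {[0,40[, [40,60[, [60,inf[}.
DELTAS = [40.0, 20.0, inf]

def cumulative(deltas):
    """Upper boundary Delta_k = delta_0 + ... + delta_k of every acceptance area."""
    out, s = [], 0.0
    for d in deltas:
        s += d
        out.append(s)
    return out

def region_of(x, bounds):
    """Index of the acceptance area [Delta_(k-1), Delta_k[ containing the value x."""
    for k, upper in enumerate(bounds):
        if x < upper:
            return k
    return len(bounds) - 1           # only reached when x is infinite

def qualitative_sum(i, j, deltas):
    """Regions covering AcAr(Delta_(i-1)+Delta_(j-1)) .. AcAr(Delta_i+Delta_j),
    after applying the absorption rule (>= 10x, or one unbounded interval)."""
    di, dj = deltas[i], deltas[j]
    if di > dj and (di == inf or di >= 10 * dj):
        return [i]                   # q_j is absorbed by q_i
    if dj > di and (dj == inf or dj >= 10 * di):
        return [j]                   # q_i is absorbed by q_j
    b = cumulative(deltas)
    low = (b[i - 1] if i else 0.0) + (b[j - 1] if j else 0.0)
    high = b[i] + b[j]
    return list(range(region_of(low, b), region_of(high, b) + 1))

# qualitative_sum(0, 1, DELTAS) -> [1, 2]   (a ct distance plus an n distance
#                                            covers the regions n and ft)
# qualitative_sum(0, 2, DELTAS) -> [2]      (the unbounded ft absorbs ct)
```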
Figure 4. a) Example of qualitative sum when structure relations with overlapped acceptance areas are used; b) Example of qualitative difference when structure relations with overlapped acceptance areas are used.
Figure 5. The basic step of the inference process for distance information
2. Instances

2.1 Naming distance

To deal with the concept of distance we will first restrict our attention to objects that can be modeled as points. In this case, we are representing the distance between two objects. The Distance Reference System (DRS) is composed of the list of distance relations (Q = {q0, q1, ..., qn}) and the structure of relations (Δr = {δ0, δ1, ..., δn}), as defined in the previous section. For naming distances, the BSIP can be defined as in [4]: "given two distances between three spatial objects a, b and c, we want to find the distance between the two points which is not initially given". Three different cases can be derived from this inference process (see figure 5; dashed lines correspond to the inferred distance relationships): (1) CASE 1: the distances between a and b and between b and c are known, and the distance between a and c is inferred; (2) CASE 2: the distances between a and b and between a and c are known, and the distance between b and c is inferred; and (3) CASE 3: the
distances between b and c and between a and c are known, and the distance between a and b is inferred. These cases could be considered equivalent at first sight; however, only CASE 3 is equivalent to CASE 2, while CASE 1 and CASE 2 are different. For CASE 1, the two distances being in the same orientation provides the upper bound of the inferred distance, and the distances being in opposite orientations provides the lower bound of the inference process. However, for CASE 2 and CASE 3, the same orientation for the distances provides the lower bound of the inferred distance and the opposite orientation provides the upper bound. In this way, we use algorithms 1 and 2 to solve the BSIP. Suppose, for example, that the DRS is composed of Q1 = {closer_than (ct), nearby (n), further_than (ft)} and Δr1 = {[0, 40[, [40, 60[, [60, ∞[} for a coarse level of granularity and, for a fine level of granularity, Q2 = {closer_than_halfway (ch), halfway (h), closer_than (ct), nearby (n), closer_than_twice (ctw), double (d), further_than_double (fd)} and Δr2 = {[0, 20[, [20, 30[, [30, 40[, [40, 60[, [60, 80[, [80, 120[, [120, ∞[}.
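As a small illustration of the two granularities just defined, the following Python sketch (our own encoding, not part of the paper) stores each DRS as a list of labels paired with the widths of its acceptance areas and returns the qualitative label of a metric distance.

```python
from math import inf

# The two reference systems quoted above: each label is paired with the width of
# its acceptance area (the last one is unbounded).
DRS1 = (["ct", "n", "ft"], [40.0, 20.0, inf])
DRS2 = (["ch", "h", "ct", "n", "ctw", "d", "fd"],
        [20.0, 10.0, 10.0, 20.0, 20.0, 40.0, inf])

def qualitative_name(distance, drs):
    """Label of the half-open acceptance area containing a metric distance."""
    labels, deltas = drs
    upper = 0.0
    for label, width in zip(labels, deltas):
        upper += width
        if distance < upper:
            return label
    return labels[-1]

# qualitative_name(55.0, DRS1) -> "n"   (55 lies in [40, 60[ at both granularities)
# qualitative_name(90.0, DRS2) -> "d"   (90 lies in [80, 120[)
```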
CASE 1

     | ct          | n           | ft
ct   | {ct, n, ft} | {ct, n, ft} | {ft}
n    | {ct, n, ft} | {ct, n, ft} | {ft}
ft   | {ft}        | {ft}        | {ct, n, ft}

CASE 2 and 3

     | ct   | n    | ft
ct   | {ct} | {n}  | {ft}
n    | {n}  | {ct} | {ft}
ft   | {ft} | {ft} | {ft}
Table 1. The composition table which solves the BSIP for the coarse DRS1 with Q1={closer_than (ct), nearby (n), further_than (ft)}
With these values, the BSIP is solved by the entries of tables 1 and 2. These composition tables have been calculated using the proposed algorithms, and the results obtained are the same as the ones obtained by hand in [6]. The first column of both tables refers to the distance relationship of b wrt a, and the first row refers to the distance relationship of c wrt b. The remaining cells of both tables indicate the distance relationship of c wrt a, which is written between brackets because it sometimes contains a disjunction of relations. Note that both tables are symmetric with respect to the main diagonal; for this reason it is only necessary to represent the upper or lower part of each table. The dark cells represent those values that can be omitted if we only consider the upper part of these tables.
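The following Python sketch shows one way such a table can be regenerated for CASE 1: the lower bound comes from opposite orientations (a qualitative difference), the upper bound from equal orientations (the qualitative sum), and the ten-times absorption rule prunes the disjunction. The function names and the half-open endpoint convention are our own assumptions; for the coarse DRS1 this happens to reproduce the printed CASE 1 entries of Table 1.

```python
from math import inf

LABELS = ["ct", "n", "ft"]
DELTAS = [40.0, 20.0, inf]          # widths of [0,40[, [40,60[ and [60,inf[

def bounds(deltas):
    out, s = [], 0.0
    for d in deltas:
        s += d
        out.append(s)
    return out

def region_of(x, b):
    for k, upper in enumerate(b):
        if x < upper:
            return k
    return len(b) - 1               # only reached when x is infinite

def compose_case1(i, j, deltas):
    """Regions possible for d(a,c) when d(a,b) lies in region i and d(b,c) in
    region j (CASE 1)."""
    di, dj = deltas[i], deltas[j]
    if di > dj and (di == inf or di >= 10 * dj):
        return [i]                  # q_j absorbed by q_i
    if dj > di and (dj == inf or dj >= 10 * di):
        return [j]                  # q_i absorbed by q_j
    b = bounds(deltas)
    lo_i, hi_i = (b[i - 1] if i else 0.0), b[i]
    lo_j, hi_j = (b[j - 1] if j else 0.0), b[j]
    low = max(0.0, lo_i - hi_j, lo_j - hi_i)   # opposite orientations
    high = hi_i + hi_j                         # same orientation
    return list(range(region_of(low, b), region_of(high, b) + 1))

# Rebuild the CASE 1 part of Table 1 row by row:
for i, row in enumerate(LABELS):
    cells = ["{" + ", ".join(LABELS[k] for k in compose_case1(i, j, DELTAS)) + "}"
             for j in range(len(LABELS))]
    print(row, *cells)
```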
CASE 1

    | ch                  | h                      | ct                      | n                          | ctw                        | d                          | fd
ch  | {ch, h, ct, n}      | {ch, h, ct, n}         | {ch, h, ct, n, ctw}     | {h, ct, n, ctw, d}         | {h, ctw, d}                | {ctw, d, fd}               | {fd}
h   | {ch, h, ct, n}      | {ch, h, ct, n, ctw}    | {ch, h, ct, n, ctw}     | {ch, h, ct, n, ctw, d}     | {ct, n, ctw, d}            | {n, ctw, d, fd}            | {fd}
ct  | {ch, h, ct, n, ctw} | {ch, h, ct, n, ctw}    | {ch, h, ct, n, ctw, d}  | {ch, h, ct, n, ctw, d}     | {h, ct, n, ctw, d, fd}     | {n, ctw, d, fd}            | {fd}
n   | {h, ct, n, ctw, d}  | {ch, h, ct, n, ctw, d} | {ch, h, ct, n, ctw, d}  | {ch, h, ct, n, ctw, d, fd} | {ch, h, ct, n, ctw, d, fd} | {h, ct, n, ctw, d, fd}     | {fd}
ctw | {h, ctw, d}         | {ct, n, ctw, d}        | {ch, ct, n, ctw, d, fd} | {ch, h, ct, n, ctw, d, fd} | {ch, h, ct, n, ctw, d, fd} | {ch, h, ct, n, ctw, d, fd} | {fd}
d   | {ctw, d, fd}        | {n, ctw, d, fd}        | {n, ctw, d, fd}         | {h, ct, n, ctw, d, fd}     | {ch, h, ct, n, ctw, d, fd} | {ch, h, ct, n, ctw, d, fd} | {fd}
fd  | {fd}                | {fd}                   | {fd}                    | {fd}                       | {fd}                       | {fd}                       | {ch, h, ct, n, ctw, d, fd}

CASE 2 and 3

    | ch    | h     | ct    | n     | ctw   | d     | fd
ch  | {ch}  | {h}   | {ct}  | {n}   | {ctw} | {d}   | {fd}
h   | {h}   | {ch}  | {ch}  | {ct}  | {n}   | {d}   | {fd}
ct  | {ct}  | {ch}  | {ch}  | {h}   | {n}   | {d}   | {fd}
n   | {n}   | {ct}  | {h}   | {ch}  | {ct}  | {ctw} | {fd}
ctw | {ctw} | {n}   | {n}   | {ct}  | {ch}  | {n}   | {fd}
d   | {d}   | {d}   | {d}   | {ctw} | {n}   | {ct}  | {fd}
fd  | {fd}  | {fd}  | {fd}  | {fd}  | {fd}  | {fd}  | {fd}
Table 2. The composition table which solves the BSIP for the fine DRS2 with Q2={closer_than_halfway (ch), halfway (h), closer_than (ct), nearby (n), closer_than_twice (ctw), double (d), further_than_double (fd)}
2.2 Qualitative velocity

In this case, we represent the velocity of an object with respect to the position of another object; that is, we compare the position of an object with respect to the position of another object at two different times. Therefore, the Velocity Reference System (VRS) is composed of four elements: UD, which indicates the unit of distance or space travelled by the object; UT, which is the unit of time (both of them are context dependent); and Q and Δr, which represent the set of velocity labels and the set of intervals associated with each defined label, respectively, in the same way as they were defined in the previous section. Assuming that in a given context we fix UD to ud and UT to ut, we can define different examples depending on the level of granularity that we want to use. We can define VRS1 = {ud, ut, Q, Δr}, where Q1 = {zero, slow, normal, quick} and Δr1 = {[0, 0], ]0, ud/2ut], ]ud/2ut, ud/ut], ]ud/ut, ∞[}, as the velocity reference system for a coarse level of granularity and, for a fine level of granularity, the VRS2
can be Q2 = {zero, very slow, slow, normal, quick, very quick} and Δr2 = {[0, 0], ]0, ud/4ut], ]ud/4ut, ud/2ut], ]ud/2ut, ud/ut], ]ud/ut, 2ud/ut], ]2ud/ut, ∞[}.

    | Z1   | S1       | N1       | Q1
Z1  | {Z1} | {S1}     | {N1}     | {Q1}
S1  | {S1} | {S1, N1} | {N1, Q1} | {Q1}
N1  | {N1} | {N1, Q1} | {N1, Q1} | {Q1}
Q1  | {Q1} | {Q1}     | {Q1}     | {Q1}
Table 3. The composition table which solves the BSIP for the coarse VRS1 with Q1={zero (Z1), slow (S1), normal (N1), quick (Q1)}
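A numeric relative velocity can be mapped onto the labels of VRS1 or VRS2 with a few lines of Python. The function name and the default unit values are our own; the interval boundaries follow the left-open, right-closed intervals quoted in the text.

```python
from math import inf

def velocity_label(v, ud=1.0, ut=1.0, fine=False):
    """Qualitative velocity label of a magnitude v for the coarse VRS1 or the
    fine VRS2 (ud = unit of distance, ut = unit of time)."""
    if fine:
        labels = ["zero", "very slow", "slow", "normal", "quick", "very quick"]
        uppers = [0.0, ud / (4 * ut), ud / (2 * ut), ud / ut, 2 * ud / ut, inf]
    else:
        labels = ["zero", "slow", "normal", "quick"]
        uppers = [0.0, ud / (2 * ut), ud / ut, inf]
    for label, upper in zip(labels, uppers):
        if v <= upper:          # intervals are ]previous, upper]
            return label
    return labels[-1]

# velocity_label(0.0)            -> "zero"
# velocity_label(0.75)           -> "normal"   (0.75 lies in ]0.5, 1.0])
# velocity_label(0.3, fine=True) -> "slow"     (0.3 lies in ]0.25, 0.5])
```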
Again, we can obtain the solution of the BSIP for these VRSs by applying the proposed algorithms 1 and 2. The results obtained are shown in tables 3 and 4. They are the same results as the ones obtained by hand in [7].

    | Z2    | VS2       | S2        | N2        | Q2        | VQ2
Z2  | {Z2}  | {VS2}     | {S2}      | {N2}      | {Q2}      | {VQ2}
VS2 | {VS2} | {VS2, S2} | {S2, N2}  | {N2, Q2}  | {Q2, VQ2} | {VQ2}
S2  | {S2}  | {S2, N2}  | {N2}      | {N2, Q2}  | {Q2, VQ2} | {VQ2}
N2  | {N2}  | {N2, Q2}  | {N2, Q2}  | {Q2}      | {Q2, VQ2} | {VQ2}
Q2  | {Q2}  | {Q2, VQ2} | {Q2, VQ2} | {Q2, VQ2} | {VQ2}     | {VQ2}
VQ2 | {VQ2} | {VQ2}     | {VQ2}     | {VQ2}     | {VQ2}     | {VQ2}
Table 4. The composition table which solves the BSIP for the fine VRS2 with Q2={zero (Z2), very slow (VS2), slow (S2), normal (N2), quick (Q2), very quick (VQ2)}

a)
    | D1           | Z1   | I1
D1  | {D1}         | {D1} | {D1, Z1, I1}
Z1  | {D1}         | {Z1} | {I1}
I1  | {D1, Z1, I1} | {I1} | {I1}

b)
    | D1           | Z1   | I1
D1  | {D1, Z1, I1} | {D1} | {D1}
Z1  | {I1}         | {Z1} | {D1}
I1  | {I1}         | {I1} | {D1, Z1, I1}

Table 5. The composition table which solves the BSIP for the coarse acceleration ARS1 with Q1={decrease (D1), zero (Z1), increase (I1)}: (a) when the two relations are in the same direction; (b) when the two relations are in the opposite direction.
2.3 Qualitative acceleration

Finally, acceleration is another novel qualitative reasoning model based on intervals which can also be an instance of the general model introduced in this paper. What we compare is the acceleration of an object with respect to another object. It is important to note that in this case the possible values are both positive and negative. Thus, unlike qualitative velocity, we can find two different situations: (1) when the two objects involved have the same orientation and (2) when they are opposite. Therefore, the Acceleration Reference System (ARS) has the same components as the VRS.
Hence, if we again fix the values of UD and UT to ud and ut, respectively, we can define two different ARSs at coarse and fine levels of granularity. Let ARS1 = {ud, ut, Q, Δr}, where Q = {decrease, zero, increase} and Δr = {]-∞, 0[, [0, 0], ]0, ∞[}, be the ARS for a coarse level of granularity and, for the fine granularity level, ARS2 with Q = {decrease, low decrease, zero, low increase, increase} and Δr = {]-∞, -ud/ut2], ]-ud/ut2, 0[, [0, 0], ]0, ud/ut2], ]ud/ut2, ∞[}. In this case, we can also use composition tables to show the result of applying the proposed algorithms to solve the BSIP, as illustrated in tables 5 and 6.

(a)
    | D2                     | LD2            | Z2    | LI2            | I2
D2  | {D2}                   | {D2}           | {D2}  | {D2, LD2, Z2}  | {D2, LD2, Z2, LI2, I2}
LD2 | {D2}                   | {D2, LD2}      | {LD2} | {LD2, Z2, LI2} | {LI2, I2}
Z2  | {D2}                   | {LD2}          | {Z2}  | {LI2}          | {I2}
LI2 | {D2, LD2, Z2}          | {LD2, Z2, LI2} | {LI2} | {LI2, I2}      | {I2}
I2  | {D2, LD2, Z2, LI2, I2} | {LI2, I2}      | {I2}  | {I2}           | {I2}

(b)
    | D2                     | LD2       | Z2    | LI2            | I2
D2  | {D2, LD2, Z2, LI2, I2} | {D2, LD2} | {D2}  | {D2}           | {D2}
LD2 | {LI2, I2}              | {LD2, Z2} | {LD2} | {D2, LD2}      | {D2, LD2}
Z2  | {LI2, I2}              | {LI2}     | {Z2}  | {LD2, D2}      | {D2}
LI2 | {I2}                   | {LI2, I2} | {LI2} | {LD2, Z2, LI2} | {D2, LD2}
I2  | {I2}                   | {I2}      | {I2}  | {LI2, I2}      | {D2, LD2, Z2, LI2, I2}

Table 6. The composition table which solves the BSIP for the fine ARS2 with Q2={decrease (D2), low decrease (LD2), zero (Z2), low increase (LI2), increase (I2)}: (a) when the two relations are in the same direction; and (b) when the two relations are in the opposite direction.
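One way to read the coarse composition of Table 5 is as qualitative sign arithmetic: composing two relations taken in the same direction behaves like adding signs, while opposite directions behave like subtracting them. The Python sketch below is our own reading, with invented helper names, and reproduces both parts of Table 5 under that assumption.

```python
NEG, ZERO, POS = "D1", "Z1", "I1"          # decrease, zero, increase

def sign_sum(x, y):
    """Composition when both relations are in the same direction (Table 5a)."""
    if x == ZERO:
        return [y]
    if y == ZERO or y == x:
        return [x]
    return [NEG, ZERO, POS]                # opposite signs: any result is possible

def sign_difference(x, y):
    """Composition when the two relations are in opposite directions (Table 5b):
    equivalent to adding x to the negation of y."""
    flip = {NEG: POS, ZERO: ZERO, POS: NEG}
    return sign_sum(x, flip[y])

# sign_sum("D1", "I1")        -> ["D1", "Z1", "I1"]
# sign_difference("Z1", "I1") -> ["D1"]
```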
Conclusions and future work

In this research, a general algorithm which solves the representation and the basic step of the inference process of any qualitative model based on intervals has been developed. It has been tested on three different instances: (1) naming distance, (2) qualitative velocity and (3) qualitative acceleration. The first two are known qualitative models based on intervals, and with the designed algorithm we have obtained the same results as the ones presented in [5, 6]. Acceleration is a novel model developed by following the general algorithm presented in this paper. The following are themes of future work:
- The study of a general way of implementing the Full Inference Process as part of the reasoning, for all qualitative models based on intervals.
- The development of new qualitative models based on intervals for aspects such as time, weight, body sensations (such as hunger, sleepiness, tiredness, love, etc.), etc. All the qualitative models described here (distance, velocity, acceleration) are being implemented for real robot navigation. The representation and reasoning process for these aspects will provide robots with intelligent abilities to solve service robotics problems.
Acknowledgements

This work has been partially supported by CICYT under grant number TIC2003-07182.
References
[1] Werthner, H., Qualitative Reasoning: Modeling and the Generation of Behaviour. Springer-Verlag, 1994.
[2] Hernández, D., Qualitative Representation of Spatial Knowledge. Volume 804 of Lecture Notes in Artificial Intelligence. Springer-Verlag, 1994.
[3] Freksa, C., "Qualitative Spatial Reasoning", in Mark, D.M., Frank, A.U. (eds.), Cognitive and Linguistic Aspects of Geographic Space, pp. 361-372, Kluwer Academic Publishers, Dordrecht, 1991.
[4] Escrig, M.T., Toledo, F., "Applying Compared Distances to Robot Navigation", Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI'01), Workshop on Spatial and Temporal Reasoning with Agents Focus, 2001.
[5] Clementini, E., Di Felice, P., Hernández, D., "Qualitative Representation of Positional Information", Technical Report FKI-208-95, Technische Universität München, 1995.
[6] Escrig, M.T., Toledo, F., Qualitative Spatial Reasoning: Theory and Practice. Application to Robot Navigation. IOS Press, Frontiers in Artificial Intelligence and Applications.
[7] Escrig, M.T., Toledo, F., "Qualitative Velocity". Volume 2504 of Lecture Notes in Artificial Intelligence. Springer, 2002.
Artificial Intelligence Research and Development M. Polit et al. (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press.
Coarse Qualitative Model of 3-D Orientation

Julio PACHECO and Mª Teresa ESCRIG
Computer Science and Engineering Department, Jaume I University, Avda. Vicent Sos Baynat s/n, 12071 Castellón, Spain
[email protected], [email protected]

Abstract. The 2-D orientation model of Freksa and Zimmermann has been extended into a 3-D orientation model by Pacheco, Escrig and Toledo for fine information. When the information provided to the system is coarse or it is advisable to reduce the processing time of the reasoning process, it is necessary to define a coarse 3-D orientation model. The 3-D Pacheco et al.'s orientation model has been coarsened into three models (a length coarse model, a height coarse model and a general coarse model). In this paper the algorithm which integrates the coarse and the fine 3-D orientation models is also explained.

Keywords. Model-Based Reasoning, Spatial Reasoning, Qualitative Reasoning, Qualitative Representation, Qualitative Orientation.
1. Introduction
In recent years, many qualitative spatial models have been developed to properly manage imprecise knowledge about different aspects of space. Qualitative Spatial Reasoning is a field which has been developed within Artificial Intelligence. It is still an open field where many problems remain unsolved. The principal goal of the Qualitative Spatial Reasoning field is to represent our everyday common-sense knowledge about the physical world, and the underlying abstractions used by engineers and scientists when they create quantitative models. Kak [11] points out that the intelligent machine of the future might carry out temporal reasoning and spatial reasoning, and also reason over interrelated entities occupying space and changing in time with respect to (wrt) their attributes and spatial interrelationships. Spatial information that we obtain through perception is coarse and imprecise, thus qualitative models, which reason with distinguishing characteristics rather than with exact measures, seem to be more appropriate to deal with this kind of knowledge. For orientation in general, it is important in Qualitative Spatial Orientation approaches to distinguish between models based on projections and models not based on projections. In models based on projections, the relative orientation of objects is obtained by using (orthogonal or non-orthogonal) projections of objects onto external axes, and then reasoning in one dimension by using Allen's temporal logic. There exist mainly three qualitative approaches for orientation which are based on projections: Guesgen's approach [8]; Jungert et al.'s approach [1]; and Mukerjee and Joe's approach [12].
Models based on projections might provide an inconsistent representation of objects whose sides are not parallel to the axes. To overcome this problem, qualitative models not based on projections have been developed. There exist mainly three qualitative models for orientation which are not based on projections onto external reference systems (RS): Freksa and Zimmermann's model [3, 4, 5, 6]; Hernández's approach [9, 10]; and Frank's approach [7]. In these models not based on projections, space is divided into qualitative regions by means of RSs, which are centred on the reference objects (i.e. the RSs are local and egocentric). Spatial objects are always simplified to points, which are the representational primitives. There exist mainly two qualitative models for 3-D orientation: Guesgen's approach [8] (which is a straightforward extension of Allen's temporal reasoning) and Pacheco, Escrig and Toledo's approach [13, 14] (which is an extension of the Freksa and Zimmermann model). When the information provided to the system is coarse or it is advisable to reduce the processing time of the reasoning process, it is necessary to define a coarse 3-D orientation model. The 3-D Pacheco et al.'s orientation model has been coarsened into three models (a length coarse model, a height coarse model and a general coarse model). A qualitative model is defined following these steps:
- The representation
- The basic step of the inference process
- The full inference process
In this paper we are going to focus our attention on the representation aspect of the 3-D coarse orientation model. Therefore, the structure of the rest of the paper is as follows: Section 2 will explain the fine and coarse 2-D orientation representation models by Zimmermann and Freksa [3, 4, 5, 6]. Section 3 will summarize the fine 3-D orientation representation model by Pacheco and Escrig. Section 4 explains the 3-D coarse orientation representation models. Finally, section 5 provides the conclusions of the paper and our current and future work.
2. The 2-D Zimmermann and Freksa Orientation Representation
In the approach of [3, 4, 5, 6], an orientation Reference System (RS) is defined by a point and a director vector ab which describes the left/right dichotomy. It can be interpreted as the direction of movement. The RS also includes the perpendicular line through the point b, which defines the first front/back dichotomy, and it can be seen as the straight line that joins our shoulders. This RS divides space into 9 qualitative regions (figure 1a). A finer distinction can be made in the back regions by drawing the perpendicular line through the point a. In this case, the space is divided into 15 qualitative regions (figure 1f). The point a defines the second front/back dichotomy of the RS. An iconic representation of the RS and the names of the regions are shown in figures 1d) and 1e) for the fine RS and in figures 1b) and 1c) for the coarse RS. The information represented in both the coarse and the fine RSs is where the point c is wrt the RS ab, that is, c wrt ab. This information can also be expressed in four different ways as a result of applying the following six operations: Identity (or Original Relation), Inverse, Homing, Homing Inverse, Shortcut and Shortcut Inverse.
Figure 1. a) The coarse 2-D orientation RS; b) The 9 qualitative regions and c) Their names in iconical representation d) The fine 2-D orientation RS; e) The 15 qualitative regions and f) Their names in iconical representation.
We can see the result of these operations and the corresponding tables with an iconic representation in figure 2.
Figure 2. Iconic representation of the relationship c wrt ab and the result of applying the six operations to the original relationship.
3. The 3-D Pacheco, Escrig and Toledo Orientation Model
In the approach of [13, 14, 15, 16], the 3-D orientation model is defined by adding the height to the 2-D Freksa and Zimmermann representation. Therefore, the 3-D orientation model divides space into 75 qualitative orientation regions. The three-dimensional space is divided breadthwise by a plane perpendicular to the floor of the platform of the robot which joins the two main points of the RS, a and b; lengthwise by two parallel planes, perpendicular to the previous one and parallel to the floor of the platform of the robot, one of them passing through the point a and the other through the point b; and heightwise by two more parallel planes, also one of them passing through the point a and the other through the point b (figure 3a). Those planes are the reference planes. The reference planes chosen will be planes parallel to the floor (or to the base of the robot in a robotic application). In the case that we do not have any specific plane to use as reference, we must decide on it first. When we say a reference plane (as the point a could be at any height), we refer to the whole family of planes parallel to the reference plane.
Figure 3. a) The 3-D orientation RS; b) The names inside the 3-D iconic representation
The names of the regions are defined according to their position: left, straight and right (across); front, b-orthogonal, neutral, a-orthogonal and back (along); and up, b-height, between, a-height and down (height). We use acronyms such as ulf if the position is up-left-front (fig. 3b). For the sake of clarity, the 3-D representation has been translated into a 2-D iconic representation, as shown in figure 4. In this 2-D iconic representation it is easier to perceive conceptual neighbourhood.
Figure 4. a) A single cell divided into five heights and the names; b) The representation of the different heights.
The information to be represented with this 3-D orientation RS (c wrt ab) can also be expressed in five further ways (besides the Identity), as in the original 2-D orientation RS: Inverse, Homing, Homing Inverse, Shortcut and Shortcut Inverse. These operations define the algebra. See figures 5 and 6.
Figure 5. The 3-D iconical representation of the operations Inverse (Inv), Homing (HM), Homing Inverse (HMI), Shortcut (SC) and Shortcut Inverse (SCI).
4. The Coarse Models
Sometimes the information provided to the system is coarse or it is not necessary to obtain a fine result. Therefore it is possible to reduce the processing time. In that case, it is necessary to define a coarse 3-D orientation model.
Figure 6. a) The 2-D iconic representation of the 3-D qualitative spatial orientation "c wrt ab" or Identity (ID); b) Inverse (INV); c) Homing (HM); d) Homing Inverse (HMI); e) Shortcut (SC) and f) Shortcut Inverse (SCI) 2-D iconic representations.
Figure 7. a) The length coarse RS; b) The height coarse RS and c) The general coarse RS.
The 3-D Pacheco et al.'s orientation model [13] has been coarsened into three models: a length coarse model (the space is divided lengthwise by only one plane, passing through the point b), a height coarse model (the space is divided heightwise by only one plane, through the point b) and a general coarse model (see figure 7). For the length coarse model, the names along will be: front, b-orthogonal, and back-coarse-length, and the 2-D iconic representation will be as in figure 1c), where every cell is divided into five parts as shown in figure 4a). Similarly, for the height coarse model the names at the different heights are now: up, b-height, and down-coarse-height. For its 2-D iconic representation we divide every cell into only three parts, considering only the three different heights named (figure 8a).
Figure 8. a) The height coarse model divides every single cell into three parts; b) 2-D iconic representation of the general coarse RS.
4.1. The General Coarse Model

Considering the two models explained before, we join both in the general coarse RS. This model consists in reducing the divisions of the space to 27 qualitative regions. It is obtained by considering, in the original model, only the planes which pass through the point b. The general coarse RS is defined by the plane which joins the points a and b, defining the first left/right dichotomy. It includes the plane perpendicular to the floor through the point b, which defines the second, front/back dichotomy, and the reference plane parallel to the floor, also passing through the point b, which defines the last dichotomy, up/down (figure 7c). The 2-D iconic representation is shown in figure 8b). Every cell is divided into three parts, and the names of the regions are defined using acronyms, where down-coarse-height is abbreviated dch and back-coarse-length bcl. So, for example, if the position is down-coarse-height-straight-front the acronym used will be dchsf, or urbcl if the position is up-right-back-coarse-length. In this 2-D iconic representation it is easier to perceive conceptual neighbourhood.

4.2. The Algebra of the General Coarse Model

The algebra consists of seven operations: Identity, Inverse, Spin, Homing, Homing Inverse, Shortcut and Shortcut Inverse. The operations have been implemented as facts in a PROLOG database.
Identity

We will represent the identity operation as ID. The algebraic notation is: ID(c wrt ab) = c wrt ab. The 2-D iconic representation of this operation is presented in figure 9 with the 27 different positions. The PROLOG facts of the Identity operation would be, for instance (see figure 9): id(ulf,[ulf]); id(usf,[usf]); etc. In figure 9 we can compare the two 2-D iconic representations of the 3-D qualitative spatial orientation "c wrt ab", the fine and the coarse one. Figure 9a) shows the table of the identity operation of the fine 3-D orientation model of Pacheco, Escrig and Toledo [13, 14, 15], and figure 9b) the table of the general coarse 3-D qualitative spatial orientation.
Figure 9. a) The 2-D iconical representation of the fine 3-D qualitative spatial orientation "c wrt ab"; b) The 2-D iconical representation of the general coarse of the 3-D QSO "c wrt ab".
Inversion

The inversion operation (INV) corresponds to comparing the third point with respect to the RS ba (see figure 10). The algebraic notation is: INV(c wrt ab) = c wrt ba. The PROLOG facts of the Inversion operation are, for example: inv(ulf,[dchrbcl]); inv(usf,[dchsbcl]); etc.
Figure 10. a) The 2-D iconical representation of the inverse 3-D QSO representation; b) The 3-D iconical coarse representation of inverse operation.
Spin

The spin operation (SP) is the result of rotating the RS 180 degrees about the axis which passes through the two main points a and b of the RS (figure 11). This operation implies that anything which was up or right will be down or left, respectively, after applying the operation. The PROLOG facts of the Spin operation are, for example: sp(ulf,[dchrf]); sp(usf,[dchsf]); etc.
Figure 11. a) The 2-D iconical representation of the spin 3-D QSO representation; b) The 3-D iconical general coarse representation of spin operation.
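The Identity, Inverse and Spin facts above lend themselves to a compact functional reading. The Python sketch below is our own illustration, not the authors' PROLOG database: a general coarse region is a triple (height, side, depth), the flip tables follow the textual definitions of INV and SP, and the tokens bh and bo for b-height and b-orthogonal are placeholders we introduce because the paper does not spell out those abbreviations.

```python
# Flips over the three dichotomies of the general coarse RS.
FLIP_HEIGHT = {"u": "dch", "bh": "bh", "dch": "u"}     # up <-> down-coarse-height
FLIP_SIDE   = {"l": "r", "s": "s", "r": "l"}           # left <-> right
FLIP_DEPTH  = {"f": "bcl", "bo": "bo", "bcl": "f"}     # front <-> back-coarse-length

def spin(region):
    """SP: rotate the RS 180 degrees about the a-b axis (up/right become down/left)."""
    h, s, d = region
    return (FLIP_HEIGHT[h], FLIP_SIDE[s], d)

def inverse(region):
    """INV: describe c with respect to the reversed RS ba; consistent with the
    quoted facts, all three dichotomies flip."""
    h, s, d = region
    return (FLIP_HEIGHT[h], FLIP_SIDE[s], FLIP_DEPTH[d])

# ("u", "l", "f") is the region written ulf in the paper:
# spin(("u", "l", "f"))    -> ("dch", "r", "f")     i.e. dchrf, as in sp(ulf,[dchrf])
# inverse(("u", "l", "f")) -> ("dch", "r", "bcl")   i.e. dchrbcl, as in inv(ulf,[dchrbcl])
# inverse(("u", "s", "f")) -> ("dch", "s", "bcl")   i.e. dchsbcl, as in inv(usf,[dchsbcl])
```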
Homing

In the homing (HM) operation we ask about the point a with respect to the RS formed by bc (see figure 12). The algebraic notation is: HM(c wrt ab) = a wrt bc. The PROLOG facts of the Homing operation are, for example: hm(ulf,[dchlbcl]); hm(usf,[dchsbcl]); etc. Here disjunctions appear, for example: hm(us,[dchlf, dchsf, dchrf, dchl, dchs, dchr,
Figure 12. a) The 2-D iconical representation of the homing 3-D QSO representation; b) The 3-D iconical general coarse representation of homing operation.
Homing Inverse

In the homing inverse (HMI) operation we ask about the point a with respect to the RS formed by cb. In the general (fine) model it is the result of applying the INV operation after the HM operation. The algebraic notation is: HMI(c wrt ab) = INV(HM(c wrt ab)) = a wrt cb. In the general coarse model that property is lost because of the loss of certainty. The homing inverse operation is presented in figure 13. Some PROLOG facts of the Homing Inverse operation are: hmi(ulf,[ulf]); hmi(usf,[usf, dchsf]); etc. Disjunctions also appear here, for example: hmi(us,[ulf, usf, urf, ul, us, ur, ulbcl, usbcl, urbcl]).
Figure 13. a) The 2-D iconical representation of the homing inverse 3-D QSO representation; b) The 3-D iconical general coarse representation of homing inverse operation.
Shortcut

In the shortcut (SC) operation we ask about the point b with respect to the ac RS. The algebraic notation is: SC(c wrt ab) = b wrt ac. The shortcut operation is presented in figure 14. Some PROLOG facts of the Shortcut operation are: sc(ulf,[dchrbcl]); sc(usf,[dchsbcl]); etc.
Figure 14. a) The 2-D iconical representation of the shortcut 3-D QSO representation; b) The 3-D iconical general coarse representation of shortcut operation.
Shortcut Inverse

The shortcut inverse (SCI) operation is the result of asking about the point b with respect to the ca RS. In the general (fine) model it is the result of applying the INV operation after the SC operation. The algebraic notation is: SCI(c wrt ab) = INV(SC(c wrt ab)) = b wrt ca. In the general coarse model that property is lost because of the loss of certainty. The shortcut inverse operation is presented in figure 15. Some PROLOG facts of the Shortcut Inverse operation are, for example: sci(ulf,[dchrbcl]); sci(usf,[dchsbcl]); etc.
Figure 15. a) The 2-D iconical representation of the shortcut inverse 3-D QSO representation; b) The 3-D iconical general coarse representation of shortcut inverse operation.
5. Conclusions and Future Work
The management of different coarse models of 3-D qualitative orientation and the relations between them has been dealt with in this paper. The combinations of those models (making the fine model coarser, first in length and in height, and then joining them) give us the possibility of working with different granularity levels. This is useful when the information obtained is not as clear as we would like, or when we need more speed even though we will have more uncertainty. Some future work has been left out of this paper: the application of the 3-D orientation model to mobile robots with an arm manipulator on them. The coarse qualitative model of 3-D orientation will be needed for path planning and navigation map building.
Acknowledgements

This work has been partially supported by CICYT under grant number TIC2003-07182.
References
[1] Chang, S.K., Jungert, E., "A Spatial Knowledge Structure for Image Information Systems Using Symbolic Projections", Proceedings of the National Computer Conference, Dallas, Texas, November 2-6, pp. 79-86, 1986.
[2] Escrig, M.T., Toledo, F., "Reasoning with Compared Distances at Different Levels of Granularity", in the 9th Conference of the Spanish Association for Artificial Intelligence, 2001. ISBN 84-932297-0-9.
[3] Freksa, C., "Conceptual Neighbourhood and its Role in Temporal and Spatial Reasoning", Proceedings of the IMACS Workshop on Decision Support Systems and Qualitative Reasoning, pp. 181-187, 1991.
[4] Freksa, C., "Temporal Reasoning Based on Semi-intervals", Artificial Intelligence, vol. 54, pp. 199-227, 1992.
[5] Freksa, C., "Using Orientation Information for Qualitative Reasoning", in A.U. Frank, I. Campari and U. Formentini (eds.), Theories and Methods of Spatio-Temporal Reasoning in Geographic Space, LNCS, vol. 639, Springer, Berlin, pp. 162-178, 1992.
[6] Freksa, C., Zimmermann, K., "On the Utilization of Spatial Structures for Cognitively Plausible and Efficient Reasoning", in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 18-21, 1992.
[7] Frank, A.U., "Qualitative Spatial Reasoning with Cardinal Directions", in Proceedings of the Seventh Austrian Conference on Artificial Intelligence, Wien, Springer, Berlin, pp. 157-167, 1991.
[8] Guesgen, H.W., "Spatial Reasoning Based on Allen's Temporal Logic", Technical Report TR-89-049, International Computer Science Institute, Berkeley, 1989.
[9] Hernández, D., "Diagrammatical Aspects of Qualitative Representations of Space", Report FKI-164-92, Technische Universität München, Germany, 1992.
[10] Hernández, D., Qualitative Representation of Spatial Knowledge. Volume 804 of Lecture Notes in Artificial Intelligence. Springer-Verlag, 1994.
[11] Kak, A., "Spatial Reasoning", AI Magazine, vol. 9, no. 2, p. 23, 1988.
[12] Mukerjee, A. and Joe, G., "A Qualitative Model for Space", in the 8th American Association for Artificial Intelligence Conference, pp. 721-727, 1990.
[13] Pacheco, J., Escrig, M.T., "An Approach to 3-D Qualitative Orientation of Point Objects", in the 4th Catalan Congress of Artificial Intelligence, 2001.
[14] Pacheco, J., Escrig, M.T., Toledo, F., "A Model for Representing and Reasoning with 3-D Qualitative Orientation", in the 9th Conference of the Spanish Association for Artificial Intelligence, 2001. ISBN 84-932297-2-5.
[15] Pacheco, J., Escrig, M.T., Toledo, F., "Representing and Reasoning on Three-Dimensional Qualitative Orientation Point Objects", in Pavel Brazdil, Alípio Jorge (eds.), Progress in Artificial Intelligence, LNAI vol. 2258, Springer, pp. 298-305, 2001. ISBN 3-540-43030-X.
[16] Pacheco, J., Escrig, M.T., Toledo, F., "Three-Dimensional Qualitative Orientation Point Objects: Model and Reasoning", Workshop on LPAI, EPIA 2001, Porto, Portugal, proceedings edited by José Alferes and Salvador Abreu, pp. 59-74, 2001.
Artificial Intelligence Research and Development M. Polit et al. (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press.
Fuzzified Strategic Maps

Ronald Uriel Ruiz Ordóñez, Josep Lluis de la Rosa i Esteva and Javier Guzmán Obando
ARLab - Agents Research Laboratory, University of Girona, 17071 Girona
{ruruizo, peplluis, jguzmano}@eia.udg.es

Abstract. The strategy may be represented by a strategic map (SM). According to Kaplan and Norton, SMs are built up to obtain constant communication of the objectives to all employees. They have a simple layout, are general and consistent, and help achieve better implementation of strategies, so that employees tolerate changes that they initially resisted and distrusted in their companies. This paper suggests a system to interpret the strategy and its aligned actions so that it can alert and send messages to users (the employees) to guide them according to the strategy and the behavior of other employees. A real example with a research group in a European university is presented.

Keywords. Artificial Intelligence, Fuzzy Systems, Recommender Systems, Strategic Maps, Strategies, Management, Balanced Scorecard
1. Introduction

According to Boston Consulting Group reviews [12], the strategy of a company does not generate the expected results because of resistance to change by the employees. To overcome this, Kaplan and Norton [3] proposed the balanced scorecard (BSC) and strategic maps (SM) to stimulate employees to execute any strategy correctly. Before the implementation of changes that employees may resist, we suggest that not only employees but also machines must be committed to follow a strategy. This claim is supported by the fact that 70% of current employees have computers or other support systems. Therefore, a suitable human-machine interface is required to enable the correct implementation of the strategy, and this is an opportunity for recommender agents that can understand the strategy at the same level as employees. According to Kaplan and Norton, SMs are built up to obtain constant communication of the objectives to all employees. They have a simple layout, are general and consistent, and help achieve better implementation of strategies, so that employees tolerate changes that they initially resisted and distrusted in their companies [9]. This paper aims to show the viability of a recommender agent (RA) based on a fuzzified strategic map (FSM) in a real case: the Agents Research Laboratory (ARLab) in the University of Girona.
1 Correspondence to: Universitat de Girona, Campus Montilivi. Tel.: +34 972 41 8478; Fax: +34 972 418 098; E-mail: [email protected]
2. State of the Art

2.1. Decision Support Systems (DSS)

Tools like decision support systems (DSS) allow the creation of structured models of real decision problems and help us analyze them in order to understand them better and to be able to improve the quality of the resulting decisions [1]. This paper presents one example of a DSS and shows its advantages and importance. DSSs are based on artificial intelligence [11], in particular fuzzy sets, which can contribute measurements in situations that would not be measurable with other systems [2]. Fuzzy sets help the recommender agents running on personal computers to understand the strategy, so that these agents can recommend actions that the employees should follow to complete the strategy efficiently. The application of the adjective fuzzy to values not used in fuzzy logic is intended to denote uncertainty [7], by which fuzzy properties are compared with the property of indeterminism. An element can be full of uncertainty, which is to say, fuzzy, but that does not mean that the values assumed by non-determinate statements are unknown. In fact, the fuzzy objective can be understood as the probability of assigning values other than "false" or "true" to statements [8]. To decide is always a human action [7]: faced with an external event, we must identify the future states of that event and establish the possible courses of action that lead to the fulfillment of the established goal. The terms "human action" and "future states" indicate to us that all decision-making processes involve subjectivity and uncertainty. Where other systems fail, fuzzy sets can measure the degree of subjectivity.

2.2. Strategic Maps (SM)

Management control begins with the vision and strategy of the company, and the balanced scorecard (BSC) is a method for controlling the business [4]. Nevertheless, the descriptive character of the BSC frequently results in reviews of the company vision and allows reconsideration of the strategy. For this reason, the initial stages of the BSC process deal with the development of a strategy [5], a phase that may already have taken place in other divisions of the company. In this case, the preparation of the BSC will only confirm the existing strategies, although in the construction of the BSC these strategies will be expressed in the more tangible terms of goals and key factors for success. For the methodology of the BSC, it is assumed that "a strategy cannot be applied unless it is understood, and it is not understood if it cannot be described" [6]. One of the intentions of the strategic map is a clear description of the strategy. Strategic maps are graphic images that show the representation of the hypotheses on which the strategy is based [4]. The strategic map must be able to explain the results that are to be obtained and how they will be obtained [6]. In addition, it is known as a cause-effect diagram because it identifies the type of relation between the different perspectives and the defined objectives.
3. Fuzzified Strategic Maps (FSM)

Fuzzifying the strategic maps of Kaplan and Norton [6] allows a recommender agent to interpret them and to choose agreed actions that conform to the strategy adopted by the
company, diminishing the risk of the "fear of change" that people have by nature. The first FSM was developed in the Royal Academy of Doctors of Barcelona case by Aluja and Kaufmann [7]. While these authors implemented fuzzy sets, we have added improvements to this methodology that allow us to make a conjunction between the cause-effect relations of Aluja and Kaufmann [7] and those of Kaplan and Norton [3], thereby obtaining a fuzzified strategic map that, in binary language [10], can be interpreted by the recommender agent ALEX. Our research work proposes a possible solution for the fault in the balanced scorecard model and strategic maps of Kaplan and Norton [10]. According to our analysis, this fault exists only in the human process. We propose to improve the methodology of Kaplan and Norton by implementing "fuzzy sets" ("subconjuntos difusos" in Spanish) and incorporating the theory of "forgotten effects" of Aluja and Kaufmann, so that people can understand the strategies, and so that computers can calculate using them and thus systematically support the employees in achieving the final goal.

3.1. Search for Equivalence Between the Strategy Map and the FSM

Kaplan and Norton state: "A vision describes a desired result; a strategy, nevertheless, must describe how those results will be reached" [4]. The strategic map of a BSC must be explicit in showing the hypotheses of the strategy. Each indicator of the BSC comprises a chain of relations, causes and effects, that connects the desired results of the strategy with the inducements that make them possible. The strategic map describes the process of transformation of intangible assets into tangible results with respect to the client and the shareholders [5]. The FSM is a first approach in which a fuzzy system of causes and effects is organized as a strategic map, allowing the combination of multiple strategies to achieve the desired effect according to the needs of the organization. The methodology of max-min convolution of fuzzy sets allows us to relate a series of causes and a series of effects with a greater degree of certainty, a fundamental requirement for constructing a strategic map according to the methodology of Kaplan and Norton.

3.2. Methodology and Algorithms

Our methodology is based on the methodology selected by Aluja and Kaufmann to construct the relations of "forgotten effects". In consultation with experts in the domain, we take the following first steps. First, a set of causes A with n elements and a set of effects B with m elements is chosen: A = [A1, A2, ..., An]; B is the effects set, B = [B1, B2, ..., Bm]. Second, three initial incidence matrices are constructed: one with the incidence degrees representing the effect of the causes A on the effects B; a second one with the incidence degrees that the causes have on themselves; and a third one with the incidence degrees that the effects have on themselves. M is the n×m incidence matrix determined by the experts, Ã is the n×n fuzzy incidence matrix between causes, B̃ is the m×m fuzzy incidence matrix between effects, and M̃ is the fuzzy incidence matrix between Ã and B̃. Then, following the methodology of Kaufmann and Aluja [7], we apply the max-min convolution to obtain the final fuzzy matrix.
On this final matrix we propose to add a row and a column of averages that will allow us to build the FSM:
- PA is the average of each row, associated with the causes.
- PB is the average of each column, associated with the effects.
- T is a threshold determined by the experts.

3.3. Building the Map

We begin with the top levels and move toward the lower levels. First we consider the effects that have averages greater than or equal to the threshold T that the experts determine. These effects become the elements of the first outer level, because they concentrate more incidences. Kaplan and Norton describe these elements as "sheltering the strategy".
The effects that do not fulfill the requirement to be in the first outer level will form the second outer level.
The objective effect will be the one that has the most incidences greater than T and whose average incidence is the highest; it will therefore be the effect that heads the map.
The "strategy axis" effect has the smallest number of high-valued incidences and is the one that depends least on the others (see matrix B̃).
The external causes with a high average incidence towards the effects will form the third level, since this means that the effects depend directly on them.
Figure 1. D˜=Maximin convolution matrix of causes and effects of ARLab.
The "cause axis" has the greatest average incidence towards the "effect axis" and is the one that depends least on the others (see matrix Ã).
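The construction described in Sections 3.2 and 3.3 can be sketched in a few lines of Python. The numeric matrices in the comments are invented placeholders, and the exact chain of convolutions shown (Ã composed with M, then with B̃) is our reading of the forgotten-effects methodology, not a definitive specification.

```python
def maxmin(P, Q):
    """Max-min convolution of two fuzzy incidence matrices (rows of P x cols of Q)."""
    return [[max(min(P[i][k], Q[k][j]) for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def col_averages(M):
    """PB: average incidence of each column (one value per effect)."""
    return [sum(M[i][j] for i in range(len(M))) / len(M) for j in range(len(M[0]))]

def first_level_effects(D, threshold):
    """Effects whose column average reaches the threshold T (first outer level)."""
    return [j for j, avg in enumerate(col_averages(D)) if avg >= threshold]

# Hypothetical numbers, only to show the call pattern:
# A = [[1.0, 0.6], [0.3, 1.0]]                               # causes x causes
# M = [[0.8, 0.2, 0.9], [0.4, 0.7, 0.5]]                     # causes x effects
# B = [[1.0, 0.5, 0.3], [0.2, 1.0, 0.6], [0.1, 0.4, 1.0]]    # effects x effects
# D = maxmin(maxmin(A, M), B)
# first_level_effects(D, 0.8)  -> indices of the effects that "shelter the strategy"
```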
4. Case: Strategic Planning in ARLab

The Agents Research Lab (ARLab) is concerned with the development and analysis of AI techniques and control architectures for both agents and multiagent systems. The laboratory, in its eagerness to move into high-quality research, proposed customized objectives: sending articles to journals and choosing conferences to attend that would increase networking [15]. Laboratory staff offered great resistance to these changes, because the previous approach was to publish mostly at conferences and, at the end of a programme, to attempt to publish in journals. It was thus very difficult for the two academic researchers to convince the other members of the need for a "change", so we used the proposed FSM to accelerate the change process that ARLab needed.

4.1. FSM of ARLab

We have two sets, A (causes) and B (effects); see figures 2 and 3. The convolution yields the matrix that relates the degrees of incidence between causes and effects; see equation 1 and figure 1. We can thus match these changes to the FSM of ARLab and call them "states of the fuzzified strategic map". In ARLab, the fuzzified map is obtained in the following form: T is 0.8 and, working with the previous methodology, we obtain, for an initial state, that A1 and A2 are initial causes; for a clearer example, we took the value of each element as v = 1 or v = 0, where 1 denotes that the action was completed and 0 that the action was not carried out. The laboratory has a good number of PhD thesis completions (B5), many PhD students (A4), many visitors (A5), a good number of teaching staff (A1), and an acceptable number of projects (A2). We took this as the root state and allowed fluctuations in the map, where a fluctuation is for us a new set of causes (within the map) that allows us to improve on the previous results, as shown in figure 5.
Figure 2. Ã = relations matrix, causes-causes, of ARLab.
Figure 3. B̃ = relations matrix, effects-effects, of ARLab.
Figure 4. Strategic map of ARLab with FSM.
Figure 5. BSC modifications of ARLab.
4.2. Interpretation

New causes: realized = 1; not realized = 0. From figure 4, the interpretation is that to reach the maximum rank of prestige (B6), the lab must increase the number of doctorates (A3), write papers for journals (B2), and attend conferences (B1), as indicated by action 1. If the intention of the organization is to follow strategy (B6), it follows that
Figure 6. Example of possible ARLab recommendation by ALEX.
to have a good number of doctorates (A3) and to write papers for journals (B2) prevails over the action of increasing doctorates (A3) and attending conferences (B1). Then, with the new strategy, our agent (which we call ALEX) would interpret that increasing the number of finished theses (B5) is equivalent to naming the three new causes (A3), (B2), and (B1) as more likely to achieve the goal than the first two causes (A3) and (B2) alone. This summarizes the recommendation process (see figure 4). One can see how the strategy recommendation process follows a three-step evaluation. The first step is to rate the strategy by means of the FSM introduced in this paper.

5. Example of Recommendation

In this example of a recommendation by our Recommender Agent, called Agent for Leaping into strategy EXperience (ALEX) (see figure 6), the FSM indicates that in the present state it is better to publish a journal paper than a conference paper. The second step is collaborative filtering (CF), which compares the action patterns of three users (named R, J and P), matching R with J better than with P because they attend the same conferences, and then suggests that R should attend the conference that J has attended but R has not. The third step is the opinion-based filtering (OBF) from [3], where ALEX looks up its contact list for the most trusted agent, and this is the RA of P (with trust 1) rather than the RA of Javier (with trust
0.5), and the opinion of the RA from P is that R must send a paper to a journal instead of to a congress. Once we have the three ratings provided by the three points of view (FSM, CF and OBF; see figure 6), an aggregation process follows, with weights that give an idea of the profile that the RA ALEX should have regarding user R. In this case, w1 means a high understanding of the strategy, which is normally the profile of managers; w2 means high loyalty to associates; and w3 means high obedience to managers or trusted people in the company. Any profile is a combination of the three features, and in the example, R is considered to be a manager who behaves somewhat loyally to his associates and shows some trust in his superiors. The final recommendation is that, in the current state, it is better to prepare a paper for a journal than to attend a conference.

6. Conclusions

In this paper, we furthered our investigation of strategic maps, and we gave an example of a practical case in our laboratory, assisting our director in taking the right decisions to minimize the risks arising from the employees' resistance to change.
7. Future Work

1. Forgetting factors for state S in a temporal window.
2. Sequences: a single token precedes others and vice versa.
3. ALEX will use fuzzified strategic maps (FSM), aggregated with other recommendation techniques such as collaborative filtering and opinion-based filtering. ALEX, the Agent for Leaping into strategy EXperience, will be a recommender agent that recommends actions to employees according to the strategy adopted by their organization.

Acknowledgements

We acknowledge the assistance of the Universities, Research and Information Society Department (AGAUR), Catalonia, Spain, and Spanish government grant DPI2005-09025-C02-02, "Cognitive Control Systems".
References
[1] Del Acebo, E., Oller, A., De la Rosa, J.Ll., "Three Criteria for Fuzzy Systems Quality Evaluation", Workshop on Automation, Vienna (Austria), 2001, pp. 71-80.
[2] Del Acebo, E., De la Rosa, J.Ll., "A Fuzzy System Based Approach to Social Modeling in Multi-Agent Systems", Proceedings of the 1st AAMAS World Congress, Bologna (Italy), 2002.
[3] Kaplan, R.S., Norton, D.P., Harvard Business Review, Jan/Feb 1992, Vol. 70, Issue 1, p. 71.
[4] Kaplan, R.S., Norton, D.P., Harvard Business Review, Sep/Oct 1993, Vol. 71, Issue 5, p. 134.
[5] Kaplan, R.S., Norton, D.P., Harvard Business Review, Jan/Feb 1996, Vol. 74, Issue 1, p. 75.
[6] Kaplan, R.S., Norton, D.P., "Having Trouble with Your Strategy? Then Map It", Harvard Business Review, Sep/Oct 2000, Vol. 78, Issue 5, p. 167.
[7] Kaufmann, A., Gil Aluja, J. (1988), Modelos para la investigación de los efectos olvidados, Ed. Milladoiro, Vigo (Spain). ISBN 84-4043657-2.
[8] Gil Aluja, J. (1988), Introducción de la teoría de la incertidumbre en la gestión de empresas, Ed. Milladoiro, Vigo (Spain). ISBN 84-931229-4-7. (See English edition, Springer, 2002.)
[9] Peters, T. (1998), El círculo de la innovación, pp. 76-77, Ediciones Deusto S.A., 1998.
[10] Ruiz, R., De la Rosa, J.Ll., Guzmán Obando, J. (2005), "Implementación de Mapas Estratégicos en Sistemas Difusos para mejorar la Dirección Empresarial", I Spanish Informatics Congress, Symposium on Fuzzy Logic, Granada, Spain, September 2005.
[11] Russell, S., Norvig, P. (2004), Artificial Intelligence: A Modern Approach, Prentice Hall.
[12] The Boston Consulting Group (2002), Ideas about Strategy, p. 294, Ediciones Deusto S.A., 2002.
Artificial Intelligence Research and Development M. Polit et al. (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press.
A Qualitative Representation Model about Trajectories in 2-D

J.V. ÁLVAREZ-BRAVO*, J.C. PERIS-BROCH+, M.T. ESCRIG-MONFERRER++, J.J. ÁLVAREZ-SÁNCHEZ*, F.J. GONZÁLEZ-CABRERA*

* Dpto. de Informática, Universidad de Valladolid, Escuela de Informática (Campus de Segovia), Spain
{jvalvarez, jjalvarez, fjgonzalez}@infor.uva.es
+ Dpto. de Lenguajes y Sistemas Informáticos, ++ Dpto. de Ingeniería y Ciencia de los Computadores, Universidad Jaime I, Castellón, Spain
[email protected], [email protected]
Keywords: Spatial Reasoning, Qualitative Reasoning, Qualitative Trajectories, Motion.
1 Introduction The description of movement is necessary in the robotics field for several aspects. Up to now, most of the approaches developed are quantitative and describe the movement around the robot with respect to itself: the quantitative model to plan the motion of mobile robots (determination of the no collision trajectories) developed in [Latombe, 1991] has been used to maintain a certain formation class of mobile robots using a geometric approach [Belta, 2001], and to define a strategy leader-follower approach applied to the planetary exploration [Barfoot, 2001]. Another quantitative approach to define a near-time-optimal trajectory for wheeled mobile robots has been developed in [Choi, 2001]. A quantitative model for planning the motion of parallel manipulators has been developed in [Merlet, 1993]. Few
J.V. Álvarez-Bravo et al. / A Qualitative Representation Model About Trajectories in 2-D
125
works exist in the literature which use qualitative representations: in [Liu, 2005] a qualitative representation of orientation combined with translation for a parallel manipulator for diagnosis is presented. To date, collision detection methods have been totally quantitative, obtaining much more details than strictly necessary at a high computational cost [Ericson 05]. However, human beings do not use formulas to describe the trajectories of objects in movement, for shooting or catching a ball. We use the common sense reasoning. Qualitative Spatial Reasoning (QSR) is, perhaps, the most common way of modelling commonsense reasoning in the spatial domain. A qualitative representation can be defined as that representation which makes only as many distinctions as necessary to identify objects, events, situations, etc. in a given context. Qualitative models are suitable for further reasoning with partial information in a robust manner, in such cases when complete information is either unavailable or too complex to be efficiently managed. As far as we know, there is not any work, quantitative or qualitative, that carries out the description of the movement considering each object involved with respect to the other ones. Under this approach and given a specific task, (e.g. catching a ball, etc…) a robot can make a decision taking into account the global situation (if there is another robot going to catch the ball and it is closer to this one. In this case our robot can perform another action). Therefore, in this paper we are going to carry out a model that describe qualitatively the trajectories in 2-D of the objects in movement, consider as points, by means of a representation that allow us to perform a reasoning process about this features using a composition operation based on the transitive property. Under this approach, a qualitative model is defined by the following steps: x
• The information to be represented. In this particular case, we represent the relative trajectory of an object b with respect to (wrt) the trajectory of an object a, namely, b wrt a.
• The Basic Step of the Inference Process (BSIP). For trajectories, the BSIP can be defined as follows: given three trajectories a, b and c that compose two relationships, where the first one is described by means of the features of b wrt a (rba) and the second one by means of c wrt b (rcb), the BSIP consists of obtaining the trajectory relationship described by c wrt a (rca) as the composition of rcb and rba.
• The Full Inference Process (FIP). When more trajectory relationships among several moving objects are provided, the FIP is necessary. It consists of repeating the BSIP as many times as possible, with the initial information and the information provided by previous BSIPs, until no more information can be inferred. A minimal sketch of this composition-based inference is given after this list.
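As an added illustration (not part of the original formalism), the following Python sketch shows how a BSIP/FIP engine could be organised once a composition table is available; the table entries shown are hypothetical placeholders, since the actual composition table is the subject of the authors' ongoing work.

```python
from itertools import product

# Placeholder composition table: maps a pair of relation labels to the set of
# labels compatible with their composition (illustrative entries only).
COMPOSITION = {
    ("nn", "nn"): {"nn", "n"},
    ("X", "nn"): {"X"},
    # ... remaining entries would come from the full qualitative analysis
}

def bsip(r_cb, r_ba):
    """Basic Step of the Inference Process: compose c-wrt-b with b-wrt-a."""
    return COMPOSITION.get((r_cb, r_ba), set())

def fip(relations):
    """Full Inference Process: repeat the BSIP until no new information appears.

    `relations` maps an (object, object) pair to a set of candidate labels.
    """
    changed = True
    while changed:
        changed = False
        pairs = list(relations.keys())
        for (c, b), (b2, a) in product(pairs, pairs):
            if b != b2 or c == a:
                continue
            inferred = set()
            for r_cb, r_ba in product(relations[(c, b)], relations[(b2, a)]):
                inferred |= bsip(r_cb, r_ba)
            if not inferred:
                continue
            new = relations[(c, a)] & inferred if (c, a) in relations else inferred
            if new != relations.get((c, a)):
                relations[(c, a)] = new
                changed = True
    return relations
```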
In this paper we focus on the qualitative representation of trajectories, which corresponds to the first point above. The remainder of this paper is organised as follows. Section 2 presents the kinematic and geometric aspects used to qualitatively describe trajectories. Section 3 presents the qualitative representation model of trajectories. Finally, Section 4 explains our conclusions and future work.
2 The Qualitative Kinematic Representation of Trajectories
In order to represent the kinematic properties of several moving objects, it is necessary to introduce the concept of trajectory. A trajectory can be considered as the representation of the different dynamical states of a moving object. From a geometric point of view, a trajectory represents the positions of a moving object at different times (Figure 1). The former definition stresses the concept of trajectory as an entity, whereas the latter shows the actual dynamical states of the object.
Figure 1. Representation of a trajectory as the successive positions of an object at different times.
Both definitions are useful for the purpose of representation: the first one allows us to work with the trajectory as an entity, considering only the geometrical properties of the trajectories among themselves; the second one allows us to grasp geometrical aspects that involve the relative position of the mobile objects within their trajectories. However, isolated trajectories are not powerful enough to support a reasoning process. To implement an inference process about the kinematic properties of this kind of system, a transitive composition operation must be defined. Since this operation involves three objects, the representation must consider at least two mobile objects. Therefore, in this work, the qualitative representation considers the kinematic features between two trajectories. It is important to remark that this approach has been developed considering uniform movement; the presence of any kind of acceleration would involve other geometric considerations. Moreover, the model considers only trajectories in a 2-D space. The most relevant kinematic aspects between two moving objects are described by means of geometric features such as the position of the two objects with respect to the crossing point between the two trajectories, the relative position between objects, and so on. Subsection 2.1 describes the geometrical features used to represent the qualitative relationship between the trajectories. Subsection 2.2 presents the geometrical features used to characterise the relative dynamical positions of the objects. Finally, subsection 2.3 describes the kinematic behaviours related to the geometrical and kinematic aspects mentioned before. These features will play an important role during the inference process.
2.1 Qualitative relationships between trajectories
The qualitative relationship between two trajectories can be represented by two geometrical aspects: the angle φ between the trajectories (taking into account the movement direction of the objects), and the minimum distance dmin between any two points that belong to them.
Therefore, we can consider five possible relationships between a pair of trajectories in 2-D (Figure 2):
Figure 2. The five qualitative relationships between trajectories.
• The two trajectories are coincident, with the same direction. We use the iconic representation (n) for this case. It can be described by: φ = 0º, dmin = 0 (Figure 2a).
• The two trajectories are parallel, with the same direction. We use the iconic representation (nn) for this case. It can be described by: φ = 0º, dmin > 0 (Figure 2b).
• The two trajectories are coincident, with different directions. We use the iconic representation (l) for this case. It can be described by: φ = 180º, dmin = 0 (Figure 2c).
• The two trajectories are parallel, with different directions. We use the iconic representation (np) for this case. It can be described by: φ = 180º, dmin > 0 (Figure 2d).
• The two trajectories cross each other. We use the iconic representation (X) for this case. It can be described by: 0º < φ < 180º, dmin = 0 (Figure 2e).
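To make the five cases concrete, the following sketch (an illustration added here, not taken from the paper) classifies a pair of directed trajectories from the angle φ between their direction vectors and the minimum distance dmin between the two supporting lines; the tolerance parameter is an assumption.

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 2-D velocity vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    den = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / den))))

def classify_trajectories(phi_deg, d_min, tol=1e-9):
    """Map (phi, dmin) to one of the five qualitative labels {n, nn, l, np, X}."""
    if abs(phi_deg) <= tol:                      # same direction
        return "n" if d_min <= tol else "nn"     # coincident / parallel
    if abs(phi_deg - 180.0) <= tol:              # opposite direction
        return "l" if d_min <= tol else "np"     # coincident / parallel
    return "X"                                   # crossing trajectories
```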
2.2 Qualitative relationships between the dynamical positions of the objects
In this subsection we consider the features that characterise the geometrical relationships between the positions of the moving objects at different times. These features refine the relationships between trajectories and allow us to reason about the behaviour of the trajectories (e.g. whether two objects can collide or not). First we extract the relationships between the positions of the two objects at time ti; in the next subsection, the temporal evolution of these relationships is shown. The geometrical features used to represent the relationships between objects at time ti are (Figure 3):
1. The distance between the objects.
2. The angle defined between the line that joins the objects and the trajectory that contains the second one.
3. The position of the crossing point with respect to each of the objects.
4. The relative position of one object with respect to the other one, taking into account a rotation with respect to the crossing point.
5. The relative position of one object with respect to the trajectory of the other one. Each trajectory divides the space into three zones using the direction of movement: right, in and left. If the relative position is "in", then an additional geometric
parameter is used: the relative movement direction. This parameter represents the relative direction of one object with respect to the trajectory of the other one. One object can move toward the right, toward the left or along the trajectory.
Figure 3. Geometrical relationships between objects P1 and P2 at time t.
With respect to the crossing point we can distinguish the following eight relative positions of the moving objects (Figure 4):
Figure 4. Relative positions of two objects describing a crossing trajectory with respect to the crossing point.
• The crossing point is in front of both mobile objects. This situation is denoted (f,f) (Figure 4a).
• The crossing point is behind both mobile objects. This situation is denoted (b,b) (Figure 4b).
• The crossing point is in front of object P1 and behind object P2. This situation is denoted (f,b) (Figure 4c).
• The crossing point is behind object P1 and in front of object P2. This situation is denoted (b,f) (Figure 4d).
• The crossing point is in front of object P1 and at the same position as object P2. This situation is denoted (f,i) (Figure 4e).
• The crossing point is at the same position as object P1 and in front of object P2. This situation is denoted (i,f) (Figure 4f).
• The crossing point is behind object P1 and at the same position as object P2. This situation is denoted (b,i) (Figure 4g).
• The crossing point is at the same position as object P1 and behind object P2. This situation is denoted (i,b) (Figure 4h).
As remarkable issues, it is important to notice that:
• The collision itself (both moving objects upon the crossing point) is not taken into account, due to the high geometrical ambiguity involved in this situation (the objects cannot be distinguished).
• The qualitative kinematic relationships labelled {nn,np,n,l} present an ambiguity, because the crossing point is not well defined. The former relationships, {nn,np}, present the following relative positions of the crossing point with respect to the mobile objects: {(f,f),(b,b)} or {(f,b),(b,f)}. The latter relationships, {n,l}, are denoted (*,*), since any possible situation wrt the crossing point can be considered.
• The relationships labelled {(f,i),(i,f),(b,i),(i,b)} represent transition situations between two of the following relations: {(f,f),(b,b),(f,b),(b,f)}. Therefore, any description of the kinematic behaviours using the former ones has to take this feature into account in order to maintain the consistency of the representation.
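For illustration only (not part of the original model), the following sketch computes the crossing point of two directed trajectories, given a position and a velocity for each object, and classifies it as in front of (f), behind (b) or at the same position as (i) each object; the tolerance is an assumed parameter.

```python
import numpy as np

def crossing_point(p1, v1, p2, v2):
    """Intersection of the two trajectory lines p1 + t*v1 and p2 + s*v2.

    Returns None for parallel or coincident trajectories ({nn, np, n, l}),
    where the crossing point is not well defined.
    """
    a = np.array([[v1[0], -v2[0]], [v1[1], -v2[1]]], dtype=float)
    if abs(np.linalg.det(a)) < 1e-12:
        return None
    rhs = np.array(p2, dtype=float) - np.array(p1, dtype=float)
    t, _ = np.linalg.solve(a, rhs)
    return np.array(p1, dtype=float) + t * np.array(v1, dtype=float)

def position_wrt_crossing(p, v, cp, tol=1e-9):
    """Label the crossing point cp as 'f', 'b' or 'i' with respect to an
    object at position p moving with velocity v."""
    proj = np.dot(np.array(cp, dtype=float) - np.array(p, dtype=float),
                  np.array(v, dtype=float))
    if abs(proj) <= tol:
        return "i"
    return "f" if proj > 0 else "b"

# Example: two objects heading towards a common point ahead of both -> ('f', 'f')
cp = crossing_point((0, 0), (1, 0), (5, -5), (0, 1))
labels = (position_wrt_crossing((0, 0), (1, 0), cp),
          position_wrt_crossing((5, -5), (0, 1), cp))
```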
With respect to the relative position of the objects, a comparison is performed by superimposing the two trajectories by means of a rotation with respect to the crossing point. The direction of the rotation depends on the position of the objects with respect to the crossing point. Therefore, in the cases {(f,f),(b,b)} the rotation has to be such that the objects face the same direction once the trajectories are superimposed. On the other hand, if {(f,b),(b,f)} is satisfied, the rotation has to be such that the objects face different directions (Figure 5). This criterion was adopted to maintain the consistency of the representation (Figure 6). We can distinguish the following three relative positions of the moving objects:
• The mobile object P1 is behind the mobile object P2. This situation is denoted (b).
• The mobile object P1 is in front of the mobile object P2. This situation is denoted (f).
• The mobile object P1 is at the same position as the mobile object P2. This situation is denoted (i).
The relationships {(b,i),(f,i)} present an ambiguity wrt the relative position between objects, since the direction of the rotation is not well defined. In these cases this feature is denoted {b,f}, considering both values.
Figure 5. Relative position of two objects describing a crossing trajectory when the position of the objects wrt the intersection point is: (a) {(f,f),(b,b),(b,i),(f,i)}, (b) {(f,b),(b,f)}.
Figure 6. Relative position of two objects describing parallel trajectories (a) or coincident trajectories (b).
2.3 Kinematical behaviours
As we are dealing with dynamical objects, we have to deal with the temporal evolution of the geometrical magnitudes. This additional information allows the definition of one of the most important kinematic conditions: whether two objects can collide or not. The following aspects are included in the qualitative representation model about trajectories (Figure 7):
• The temporal evolution (derivative) of the distance between the objects, δd = Δd/Δt, where Δd = dt2 − dt1 and Δt = t2 − t1. The domain is {+,0,−}.
• The temporal evolution (derivative) of the angle defined between the trajectory of the second object and the line that joins them, δα = Δα/Δt, where Δα = αt2 − αt1 and Δt = t2 − t1. The domain is {+,0,−}.
• The ratio between the speed of the first object and the speed of the second object. The domain is {<,>,=}.
From a qualitative point of view, and taking into account that the time interval is always positive, the temporal evolution of these magnitudes can be represented simply by the sign of their increments.
Figure 7. Additional kinematical aspects: Δd = dt2 − dt1 and Δα = αt2 − αt1 for two objects with speeds V1 and V2.
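As an added illustration (under the paper's assumption of uniform motion), the following sketch derives the qualitative values δd, δα and the speed ratio directly from positions and velocities by evaluating the configuration at two close instants; the step size and tolerance are assumptions.

```python
import math

def _angle(p_from, p_to, v_ref):
    """Angle between the line joining the two objects and the reference velocity."""
    jx, jy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    dot = jx * v_ref[0] + jy * v_ref[1]
    den = math.hypot(jx, jy) * math.hypot(*v_ref)
    return math.acos(max(-1.0, min(1.0, dot / den)))

def qualitative_signs(p1, v1, p2, v2, dt=1e-3, tol=1e-9):
    """Signs of the derivatives of distance and angle, plus the speed ratio."""
    def state(t):
        q1 = (p1[0] + v1[0] * t, p1[1] + v1[1] * t)
        q2 = (p2[0] + v2[0] * t, p2[1] + v2[1] * t)
        return math.dist(q1, q2), _angle(q2, q1, v2)

    def sign(x):
        return "0" if abs(x) <= tol else ("+" if x > 0 else "-")

    (d0, a0), (d1, a1) = state(0.0), state(dt)
    s1, s2 = math.hypot(*v1), math.hypot(*v2)
    ratio = "=" if abs(s1 - s2) <= tol else ("<" if s1 < s2 else ">")
    return sign(d1 - d0), sign(a1 - a0), ratio
```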
According to the ratio between the speeds of the two objects and the geometrical relationships described before, we can distinguish the following kinematic behaviours of the moving objects:
1. Ratio between speeds {<,>}
The derivative of the distance when the relationship between trajectories is in {nn,np,X}. When the speeds of the objects are not equal and for any relationship between trajectories except {n,l}, the perpendicular line from the object with the smallest speed to the trajectory of the other one determines the border between an approaching (Δd<0) and a moving-away (Δd>0) movement (Figure 8). For instance, if the relationship is in {nn,X}, P1 is faster than P2 and P1 is before the perpendicular line from P2 (the slower object), then P1 is approaching P2; otherwise P1 is moving away from P2. In the former cases, if the derivative of the angle is constant, a collision will occur. If the relationship between trajectories is {np}, P1 is faster than P2 and P1 is before the perpendicular line from P2, then P1 is moving away from P2; otherwise P1 is approaching P2. In this latter case no collision can occur.
Figure 8. Kinematical situation when the speeds of the two objects are different, the objects are {(f,f),(b,b)} wrt the crossing point and the relationship between trajectories is {X}.
The derivative of the angle when the relationship between trajectories is in {nn,np,X}. For any relationship between trajectories that satisfies the relative position (f,f) of the crossing point wrt the mobile objects, except {n}, the line (thick dotted line in Figure 9) along which the temporal evolution of the angle is constant determines the border between an increasing (Δα>0, P1 before this line, Figure 9a) and a decreasing (Δα<0, P1 after this line, Figure 9c) value of the derivative of the angle. The geometrical situation that defines this line depends on the ratio between the speed of the first object and the speed of the second one.
Figure 9. The derivative of the angle when the speeds of the objects are different, the relative position of the objects wrt the crossing point is (f,f), the relationship between trajectories is {X} and the geometrical conditions (a) Δα>0, (b) Δα=0, (c) Δα<0 are satisfied.
In the case (f,f), if the derivative of the distance is decreasing and the temporal evolution of the angle is constant (0), the collision condition is satisfied. The derivative of the angle in the case (b,b) can be analysed in a similar way. If parallel trajectories with the same direction (nn) are considered, this line coincides with the perpendicular line wrt the slower object. For any relationship between trajectories that satisfies {(f,b),(b,f)}, except (l), the derivative of the angle is always increasing and there is no collision.
The derivative of the distance and the angle when the relationship between trajectories is in {n,l}. Finally, for the relationships between trajectories {n,l}, the derivative of the angle is (0) and the derivative of the distance depends on the ratio of the speeds and on the relative position between objects. For instance, if the relationship {n} is assumed and P1 is faster than P2, then if P1 is behind P2, P1 is approaching P2; otherwise P1 is moving away from P2. The derivative of the distance in the case {l} can be analysed in a similar way.
2. Ratio between speeds {=}
The derivative of the distance and the angle when the relative position between objects is equal. If the speeds of both objects, P1 and P2, are {=}, object P1 is at an equal position wrt object P2, and the relationship between trajectories is {X}, then an approaching or moving-away movement is determined by the position with respect to the intersection point. In this case the derivative of the distance is not zero and the derivative of the angle is constant. If both objects are (f,f) with respect to the crossing point, then a collision will occur (Figure 10). If the relationship is {nn}, then the temporal evolution of the angle and of the distance is constant, since no approaching or moving-away movement is possible.
Figure 10. The derivative of the distance and the angle when the speeds of both objects are equal and the object P1 is at an equal position with respect to the object P2.
The derivative of the distance and the angle when the relative position between objects is different. When the speeds are equal, the relative position between the objects is not equal and the trajectories satisfy a relationship in {X,np,l}, then the derivative of the distance and of the angle is determined by the relative position between objects and their
position with respect to the crossing point (if the latter is well defined). Figures 11 and 12 show the temporal evolution of the distance considering only the relative positions (f,f) and (f,b) and the {X} relationship between trajectories. The cases (b,b) and (b,f) can be analysed in a similar way. In these cases no collision occurs.
Figure 11. Approaching: the speeds of both objects are equal, the relationship between trajectories is {X} and (a) the objects are in the case (f,f) and P1 is behind P2, or (b) the objects are in the case (f,b) and P1 is in front of P2.
Figure 12. Moving away: the speeds of both objects are equal, the relationship between trajectories is {X} and (a) the objects are in the case (f,f) and P1 is in front of P2, or (b) the objects are in the case (f,b) and P1 is behind P2.
Figure 13 shows the derivative of the angle considering only the (f,f) case and the {X} relationship between trajectories. For instance, if P1 is before the line (thick dotted line) that defines the equal relative position between objects, then the derivative of the angle is increasing (Figure 13a); otherwise it is decreasing (Figure 13b). The behaviour of this magnitude in the case (b,b) can be analysed in a similar way.
Figure 13. The derivative of the angle when the speeds of the two objects are equal, both objects are (f,f) wrt the crossing point, the relationship between trajectories is {X} and (a) P1 is behind P2 or (b) P1 is in front of P2. If the objects are in the cases {(f,b),(b,f)}, then the derivative of the angle is always increasing.
Finally, the case of two objects with the same speed and parallel or coincident trajectories with the same direction {nn,n} presents a constant temporal
evolution of the angle and of the distance, since no approaching or moving-away movement is possible.
3 The Qualitative Representation Model about Trajectories
Once the different features have been described, the next step is to define a nomenclature that joins all these aspects and allows the description, in a compact way, of the different qualitative relationships between two moving objects. The nomenclature chosen is the following:
Tr{Δd},{Δα},{rporc},{rs},{rpco},{rpolr},(rd)*, where:
• Tr represents the type of relationship between the trajectories of the objects (P1 wrt P2) and can take the values {n,nn,l,np,X}.
• Δd represents the derivative of the distance and can take the values {+,0,−}.
• Δα represents the derivative of the angle and can take the values {+,0,−}.
• rporc represents the relative position between objects (P1 wrt P2), taking into account a rotation wrt the crossing point, and can take the values {f,i,b}.
• rs represents the ratio between the speed magnitudes (P1 wrt P2) and can take the values {<,>,=}.
• rpco represents the relative position of the crossing point with respect to the mobile objects and can take the values (f,f),(b,b),(b,f),(f,b),(f,i),(i,f),(b,i),(i,b).
• rpolr represents the relative position of one object with respect to the trajectory of the other and can take the values {r,i,l}. If rpolr is "i", the relative direction (rd)* can take the values {r,a,l}.
A complete description of the qualitative kinematical relations, under this notation, is shown in the first column of Table 1. The second column of Table 1 provides the converse relation. The converse operation can be defined as: (X R Y) ⇒ (Y R̆ X), where X and Y represent two mobile objects, R is a qualitative kinematical relationship and R̆ is its converse.
Table 1. Complete description of the qualitative kinematical relationships between two moving objects.
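To show how the compact notation above could be handled in software, the following sketch (added for illustration; the field names are ad-hoc choices, not from the paper) encodes one qualitative relation as a record and renders it in the Tr-prefixed string form.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class QualitativeRelation:
    """One qualitative kinematical relation between two moving objects."""
    tr: str                   # trajectory relation: n, nn, l, np, X
    d_dist: str               # sign of the derivative of the distance: +, 0, -
    d_angle: str              # sign of the derivative of the angle: +, 0, -
    rporc: str                # relative position between objects: f, i, b
    rs: str                   # speed ratio: <, >, =
    rpco: Tuple[str, str]     # crossing point wrt the two objects, e.g. ('f', 'f')
    rpolr: str                # position of one object wrt the other trajectory: r, i, l
    rd: Optional[str] = None  # relative direction (only when rpolr == 'i'): r, a, l

    def label(self) -> str:
        rpco = f"({self.rpco[0]},{self.rpco[1]})"
        base = f"{self.tr}{self.d_dist},{self.d_angle},{self.rporc}{self.rs},{rpco},{self.rpolr}"
        return base + (f",{self.rd}" if self.rd else "")

# Example entry in the spirit of Table 1 (illustrative values only)
r = QualitativeRelation("X", "-", "-", "f", ">", ("f", "f"), "r")
```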
4 Conclusion and prospects
In this work a complete qualitative representation of the trajectories of mobile objects in 2-D has been presented. In order to obtain this detailed description, the most relevant aspects of the kinematics of two moving objects have been considered. In this sense, the description is able to grasp an important kinematic aspect such as the collision condition.
This representation is being used to implement the basic step of the inference process (BSIP) and the full inference process (FIP), as current work. This qualitative model of the trajectories of objects is currently applied to robotics in the following aspects:
• Collision detection, for avoidance or for causing a collision.
• Qualitative description of dynamic worlds for map building or communication.
References
[Barfoot, 2001] T.D. Barfoot, E.J.P. Eaton, G.M.T. D'Éleuterio. Development of a network of mobile robots for space exploration. The 6th International Symposium on Artificial Intelligence, Robotics and Automation in Space, 2001.
[Latombe, 1991] J.C. Latombe. Robot Motion Planning. Kluwer, 1991.
[Belta, 2001] C. Belta, V. Kumar. Motion generation for formations of robots. In Proceedings of the 2001 IEEE International Conference on Robotics and Automation, Seoul, Korea, 2001.
[Choi, 2001] J.S. Choi and B.K. Kim. Quantitative derivation of a near-time-optimal trajectory for wheeled mobile robots. IEEE Transactions on Robotics and Automation, vol. 17, no. 1, February 2001.
[Ericson 05] C. Ericson. Real-Time Collision Detection. The Morgan Kaufmann Series in Interactive 3-D Technology. Elsevier, ISBN 1-55860-732-3, 2005.
[Liu, 2005] H. Liu, G.M. Coghill. Qualitative Representation of Kinematic Robots. Proceedings of the 19th International Workshop on Qualitative Reasoning, Graz, Austria, 2005.
[Merlet, 1993] J.P. Merlet. Direct kinematics of parallel manipulators. IEEE Transactions on Robotics and Automation, vol. 9, issue 6, Dec. 1993.
Proposition of NON-probabilistic entropy as reliability index for decision making
Eduard DIEZ-LLEDO, Joseph AGUILAR-MARTIN
Laboratoire d'architecture et d'analyse de systèmes, 7 Av. Colonel Roche, 31000 Toulouse, France
Abstract. After the definition of probabilistic entropy proposed by Shannon, many other authors have adapted this theory to the domain of non-probabilistic entropy and fuzzy sets, towards the fuzzy entropy theory. The main goal of fuzzy entropy is to provide an index so that the fuzzy degree (fuzziness) of a non-probabilistic set can be quantified. The mathematical expression proposed by De Luca and Termini for the calculation of fuzziness was based on the one proposed by Shannon, since the latter is considered a reference in the domain of uncertainty and information measures. However, other function families have been introduced as measures of uncertainty, not only in the classic framework of information theory but also in other areas such as decision making and pattern recognition. Fuzziness indexes are widely used in the field of uncertainty management applied to complex systems. Most of those indexes are proposed in decision-making theory so as to discern between two exclusive choices. In this paper we propose an index that expresses the reliability of making a decision, taking into account the information provided by a non-probabilistic set of alternatives.
Keywords. Non-probabilistic entropy, decision-making, fuzzy set
Introduction
Information theory quantifies the information contained in a message built up from a set of N symbols composing a code [1]. Shannon's information entropy [2] is based on the probability of a symbol appearing in the message. The entropy index of a message is calculated with expression (1), given as a function of the set of appearance probabilities of the symbols, under the probabilistic hypothesis that the probabilities sum to one.

S(p) = -K \sum_{i=1}^{N} p_i \log p_i    (1)
Speaking in the domain of automatic control, the message analysed is formed by the various states in which the system can evolve. The probabilistic entropy is thus an index that quantifies the degree of information provided by the system. Fuzzy logic introduced a new, non-probabilistic scenario in which what is summed are not probabilities but membership degrees, and the sum is no longer constrained to be equal to one. De Luca and Termini [3][4] proposed in this new scenario a non-probabilistic entropy, or fuzziness index, based on the definition of a set of axioms. Shannon's probabilistic function has been used as the basis for a family of functions fulfilling those axioms. This measure of fuzzy entropy has become relevant and is widely used in the domain of fuzzy information.
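As a small added illustration, expression (1) can be computed as follows; the choice K = 1/log N, which brings the entropy into [0,1], is one common convention and an assumption rather than something fixed by the paper.

```python
import math

def shannon_entropy(probabilities, base=math.e):
    """Shannon entropy S(p) = -K * sum(p_i * log p_i), with K = 1/log(N)."""
    n = len(probabilities)
    assert abs(sum(probabilities) - 1.0) < 1e-9, "probabilities must sum to one"
    k = 1.0 / math.log(n, base) if n > 1 else 1.0
    return -k * sum(p * math.log(p, base) for p in probabilities if p > 0)

# A uniform distribution carries maximum entropy (1.0 with this K)
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))   # -> 1.0
```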
1. Fuzziness indexes
The non-probabilistic entropy functions aim at evaluating a fuzziness index for a fuzzy set [4]. A finite fuzzy set is defined by a denumerable set and the membership degrees of its elements; the non-probabilistic entropy then represents the fuzziness of the set by means of the elements composing it [5][6]. The membership degree is a function from the definition set E, with cardinality c = card E, to the unit interval I = [0,1]. Since E is a finite set, we can define the vector μ = [μ1, …, μi, …, μc]^T, where μi = μ(xi), xi ∈ E. De Luca and Termini proposed the axioms that should be satisfied by the entropy functions H(μ) over non-probabilistic sets, H: μ ∈ ℜ^c → [0,1], as follows [5][7][6][4]:
P1: H(μ) = 0 ⇔ μi ∈ {0,1} for all i
P2: H(μ) is maximum ⇔ μi = 1/2 for all i
P3: H(η) ≤ H(μ) ⇔ η ≤_S μ    (2)
The order relation ≤_S is a comparison operator known as "sharpness". A fuzzy set η is considered sharper than a fuzzy set μ if:
for all x ∈ E: η(x) ≤ μ(x) if μ(x) ≤ 0.5, and η(x) ≥ μ(x) if μ(x) ≥ 0.5    (3)
Other axioms related to set theory have been proposed. However, many authors [3][8][4] agree that the three axioms presented above are sufficient to properly characterise a fuzzy entropy. The functions that satisfy the axioms in (2) can be defined by the general expression:

H(\mu) = h\left( \sum_{i=1}^{C} w_i \cdot T(\mu_i) \right)    (4)

where, according to [8][9]:
(i) wi ∈ ℜ^+
(ii) T(0) = T(1) = 0
(iii) T(μi): [0,1] → ℜ^+ has a unique maximum at μi = 1/2 and is monotone for μi < 1/2 and for μi > 1/2
(iv) the function h: ℜ^+ → ℜ^+ is monotone increasing    (5)
De Luca and Termini introduced as function T(·) Shannon's probabilistic entropy, under the form:
S(\mu_i) = -\left[ \mu_i \ln \mu_i + (1-\mu_i) \ln(1-\mu_i) \right]    (6)

The expression of the fuzzy entropy introduced by De Luca and Termini is:

H_{DLT}(\mu) = K \cdot \sum_{i=1}^{C} S(\mu_i)    (7)

where K ∈ ℜ^+ is a standardization constant. The use of the Shannon expression as the basis for a new definition of entropy is founded on the vast literature that has shown its validity as a reference measure in information theory. Nevertheless, other families of functions, different from the one shown in (6), have been used in domains such as decision making [10] and classification [3][4], always respecting the axioms in (2) and the form of expression (4), with wi = K and the function T(·) written as:
T(\mu_i) = f(\mu_i) + f(1-\mu_i)    (8)
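For concreteness (an added sketch; the choice K = 1/(C·ln 2), which normalises H_DLT to [0,1], is an assumption rather than a value fixed by the paper), the De Luca-Termini entropy of expressions (6)-(7) can be computed as follows.

```python
import math

def dlt_entropy(memberships):
    """De Luca-Termini fuzzy entropy H_DLT = K * sum S(mu_i), with
    S(mu) = -[mu*ln(mu) + (1-mu)*ln(1-mu)] and K chosen so H lies in [0,1]."""
    def s(mu):
        if mu in (0.0, 1.0):
            return 0.0
        return -(mu * math.log(mu) + (1.0 - mu) * math.log(1.0 - mu))
    k = 1.0 / (len(memberships) * math.log(2.0))
    return k * sum(s(mu) for mu in memberships)

print(dlt_entropy([0.0, 1.0, 1.0]))        # crisp set -> 0.0
print(dlt_entropy([0.5, 0.5, 0.5]))        # maximally fuzzy set -> 1.0
```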
2. Decision indexes
2.1. Introduction
The membership of one or more elements expresses their adequacy to a class. Conversely, the memberships of an individual relative to several groups or classes express its adequacy to each of these classes when deciding among them. When monitoring a system, the individual is represented by a set of variables that define the actual system state, and the classes are the possible known system states. The state with the most significant membership is considered as the state in which the system evolves at the present time. The reliability of choosing a certain state is directly related to the capacity of the diagnosis algorithm to make a decision among all the possible state memberships. Contrary to the fuzziness indexes, the problem of concern here is no longer the adequacy of several individuals to one class, but the adequacy of one individual to the different possible classes.
2.2. Decision entropy proposition
Entropy has a well-known and largely demonstrated relation with information: the more ordered the set is, the more informative it is considered, and so the lower the entropy assigned to it. The most informative fuzzy set in decision making, when choosing among several classes, is the one that assigns the individual to a class with the maximum membership degree while the other membership degrees are zero, so that the adequacy to any other class is null. This configuration provides the most precise and reliable information for making a decision:

\mu = [0, 0, \ldots, 0, 1, 0, \ldots, 0]    (9)

On the other hand, maximum entropy is found when all membership degrees are equal; the reliability of a decision would then be null with respect to the information provided by the set. We consider that our choice always corresponds to the maximum membership degree, so we can define μM = max[μi]. Nevertheless, this hypothesis can be changed when a state or class is favoured over the others by an initial hypothesis. We propose in this paper a redefinition of decision entropy that provides an index of reliability in decision making. The axioms are proposed along the same lines as those proposed by De Luca and Termini [4]:
R1: H(μ) = 0 ⇔ μi = 0 for all i ≠ M
R2: max H(μ) ⇔ μi = μj for all i, j
R3: H(η) ≤ H(μ) ⇔ η ≥_F μ    (10)
Let us now analyse the interpretation of these axioms.
• R1 reflects that the minimum entropy value, zero, is reached when the individual can be assigned to a class with a non-null membership degree while the membership degrees corresponding to the rest of the classes are null. Making a decision in such a scenario is the most reliable choice, with null entropy and, therefore, a maximum information index.
• R2, as a complementary axiom to R1, shows that the minimal information provided by a set is found when all membership degrees are identical. In that case, entropy is maximal, since making a decision appears to have no reliable criterion.
• R3 introduces a new comparison operator along the lines of the one introduced by De Luca and Termini: given a fuzzy set μ,
then the fuzzy set δ μ is called contrast index. The contrast index is so defined by means of the difference between the adequacy degree of de decision M and the memberships to the rest of the i-classes.
δi = μ M − μi
∀i ≠ M
(11)
Let us use the current definition of the fuzzy set cardinality, also known as sigmacount, as the addition of the memberships of the correspondent fuzzy set elements: c
Card[μ ] = ∑ μi . Then a fuzzy set η is considered more reliable for making a i =1
decision than the fuzzy set μ if:
Card[δ(η)] > Card[δ(μ )]
(12)
The reliability of a decision should take in account not just the adequacies of the memberships compared to μ M but also the intensity for choosing μ M itself, since
0 ≤ μ M ≤ 1 and making the hypothesis of considering μ M maximum would be to
E. Diez-Lledo and J. Aguilar-Martin / Proposition of NON-Probabilistic Entropy
141
strong. Therefore, the operator for comparison is proposed to be function of the cardinality of the contrast index and μ M itself. DEFINITION: A decision based on a non probabilistic fuzzy set η is considered more reliable than the one based on a non probabilistic fuzzy set μ if the reliability index corresponding to η is bigger than this associated to μ:
Fia (η) ≥ Fia (μ )
η ≥F μ ⇔
(13)
Where the Fia operator is defined as:
Fia (μ ) = μ M + Card[δ (μ )]
(14)
2.3. Proposition of a decision entropy function
Once the necessary axioms have been stated, the next step is to define a family of functions that satisfies them. This family of functions can be founded on the axioms in (10) under the form of (4). The difference between set membership degrees has been used previously in the definition of distance measures between two fuzzy sets [11]. These distances measure the amount of information shared by two fuzzy sets by comparing either the differences of their membership degrees or their ratio. Kosko [12] proposed a distance based on that principle so as to measure the entropy of a fuzzy set as well as to define distances between two fuzzy sets:

l_p(A, B) = \left[ \sum_i \left| \mu_A(x_i) - \mu_B(x_i) \right|^p \right]^{1/p}    (15)
Rao [13] also proposed a family of quadratic entropies to measure the relationship between the members of a population. This family of quadratic entropies provides an index of diversity/similarity of a population based on distances between individuals as well as on their probability of appearance. This approach has been widely used in biology and genetics, where the distance between individuals is understood as the characteristics in which the different species differ [14]. We propose a decision entropy as the comparison of the maximum membership degree of a fuzzy set, i.e. the suitable choice, with the rest of the memberships of the same fuzzy set. As a result, the larger the difference between μM and the other memberships, the more informative the choice is considered. Moreover, an exponential function is used so as to emphasise the information provided by the choice the farther it is from the other memberships [8]. The information provided by the set as an aid to decision making can then be expressed as:

I_D(\mu) = K \cdot \sum_i (\mu_M - \mu_i) \cdot e^{(\mu_M - \mu_i)}, \qquad K = \mu_M \cdot e^{\mu_M}    (16)
The standardization constant K corresponds to the maximum value reached by the function, so that I_D ∈ [0,1]. The entropy is then calculated as the complement of the information, also in the interval [0,1]:

H_D(\mu) = 1 - I_D(\mu)    (17)
The larger the entropy, the less informative the set is considered and, therefore, the less reliable our choice becomes. Looking back at the original theory of information [1], we find the link between entropy and information itself, as an idea of disorder, and the capacity of predicting a state within a certainty interval.
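To make the construction of (11)-(17) concrete, the following sketch (added for illustration) computes the contrast index, the reliability index Fia and the decision entropy H_D; the normalisation used here, dividing by the maximum value the sum can reach for the given μM, is one possible reading of the standardisation constant described above and is an assumption.

```python
import math

def contrast_index(mu):
    """delta_i = mu_M - mu_i for every class i different from the decision M."""
    m = max(range(len(mu)), key=lambda i: mu[i])
    return [mu[m] - v for i, v in enumerate(mu) if i != m], mu[m]

def reliability_index(mu):
    """Fia(mu) = mu_M + Card[delta(mu)], with Card the sigma-count."""
    delta, mu_m = contrast_index(mu)
    return mu_m + sum(delta)

def decision_entropy(mu):
    """H_D = 1 - I_D, with I_D normalised into [0,1] (assumed normalisation)."""
    delta, mu_m = contrast_index(mu)
    raw = sum(d * math.exp(d) for d in delta)
    max_raw = len(delta) * mu_m * math.exp(mu_m)   # all other memberships at 0
    i_d = raw / max_raw if max_raw > 0 else 0.0
    return 1.0 - i_d

print(decision_entropy([0.0, 1.0, 0.0]))   # crisp choice -> 0.0
print(decision_entropy([0.4, 0.4, 0.4]))   # identical memberships -> 1.0
```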
3. Analysis of decision making
3.1. The two-state case
Let us analyse the behaviour of the proposed function in the basic case of two non-probabilistic states. Figure 1 shows the decision entropy surface, where the memberships of the two states are represented on the plane axes.
Figure 1. Decision surface between two non-probabilistic events.
3.2. Decision entropy for the three-state case
Let us now study the behaviour of the decision entropy when choosing among three non-probabilistic states μ1, μ2, μ3. As an example, let us fix μ1 = 0.2 as constant so
as to evaluate the entropy as a function of the two other events. In Figure 2 we observe the entropy variation with respect to μ3, keeping μ2 constant at different values. With μ2 < 0.2 we find the maximum entropy at μ1 = μ3 = 0.2, and above all in the particular case μ1 = μ2 = μ3 = 0.2, since in this scenario our reliability of making a decision is
null. On the other hand, the more μ3 progresses towards unity, the more the entropy decreases, thus increasing the reliability of our choice. Moreover, we can also observe a change in the coordinates of the maximum when moving towards μ2 > 0.2. The maximum entropy then corresponds to the case μ2 = μ3, although this maximum decreases when the two membership values approach unity. This behaviour of the function should be understood by taking into account that, even though the choice between two equal membership degrees is not very reliable, they are close to the maximum membership and so both could be considered as optimal choices.
Figure 2. (a) Entropy vs μ3 for values of μ2 < 0.2; (b) entropy vs μ3 for values of μ2 > 0.2.
Figure 3. Maximum entropy when μ1 = μ2.
Finally, we focus on the case μ1 = μ2. The maximum entropy, expressed as a function of μ3, should be reached when the three membership degrees are identical. Indeed, Figure 3 shows that the maximum entropy corresponds to the least reliable setting of the decision memberships.
4. Conclusions and perspectives
The main goal of the entropy proposed in this paper is the evaluation of the information provided by a non-probabilistic set in the decision-making domain. This evaluation of the information should not only take into account the fuzziness of the set, but also the reliability of making a decision in such a scenario, where the maximum membership is by hypothesis considered the optimal choice. The axioms necessary for this objective have been proposed in coherence and compatibility with those of the fuzzy entropy by De Luca and Termini, as well as a function that complies with those axioms. The perspectives of our work focus on the generalisation of the family of functions that formally satisfy the proposed axioms. Moreover, we are working towards the application of weighted decision making, considering that not all decisions have the same influence on the system studied. Besides, the reformulation of entropy is being applied in the monitoring and diagnosis domain for state-transition validation, so that its validity in a real case can be proved.
References
[1] G. Cullman, M. Denis-Papin, L.-C. Kaufman. Eléments du calcul informationnel. Ed. Albin Michel, 1960.
[2] C.E. Shannon. A mathematical theory of communication. The Bell System Technical Journal, vol. 27, 379-423, 623-656, July, October 1948.
[3] A. De Luca, S. Termini. Entropy of L-fuzzy sets. Information and Control, 24, pp. 55-73, 1974.
[4] A. De Luca, S. Termini. A definition of a non probabilistic entropy in the setting of fuzzy sets theory. Information and Control, 20, 301-312, 1972.
[5] E. Trillas, C. Sanchis. On entropies of fuzzy sets deduced from metrics. Estadística Española 82-83, pp. 17-25, 1979.
[6] E. Trillas, C. Alsina. Sur les mesures du degré de flou. Stochastica, vol. III, pp. 81-84, 1979.
[7] E. Trillas, T. Riera. Entropies in finite fuzzy sets. Information Sciences 15, 2, pp. 159-168, 1978.
[8] N.R. Pal and J.C. Bezdek. Several new classes of measures of fuzziness. Proc. IEEE Int. Conf. on Fuzzy Systems, 928-933, Mar. 1993.
[9] J. Knopfmacher. On measures of fuzziness. Journal of Mathematical Analysis and Applications, 49, 529-534, 1975.
[10] R. Capocelli, A. De Luca. Fuzzy sets and decision theory. Information and Control, 23, pp. 446-473, 1973.
[11] D. Bhandari, N.R. Pal and D. Dutta Majumder. Measurements of discrimination and ambiguity for fuzzy sets. Electronics and Communication Science Unit, IEEE, 1992.
[12] B. Kosko. Fuzzy entropy and conditioning. Information Sciences, vol. 40, pp. 165-174, 1986.
[13] C.R. Rao. Diversity and dissimilarity coefficients: A unified approach. Theor. Popul. Biol. 21, 24-43, 1982.
[14] Z. Botta-Dukát. Rao's quadratic entropy as a measure of functional diversity based on multiple traits. Journal of Vegetation Science 16, 533-540, 2005.
3. Neural Networks
Kohonen self-organizing maps and mass balance method for the supervision of a lowland river area
Frédérik Thiery, Esther Llorens, Stéphane Grieu 1 and Monique Polit
Laboratoire de Physique Appliquée et d'Automatique, Université de Perpignan, France
Abstract. The Têt, the main river of the Pyrénées-Orientales department (south of France), has a significant impact on the life of the department. The management of its water quality must be largely improved and better monitored. In this sense, the present work takes part in the global development and evaluation of reliable and robust tools aimed at allowing the control and supervision of the Têt River's lowland area. A simplified model, based on mass balances, has been developed to estimate nutrient levels in the stream and to describe the river water quality. Because the application of mathematical models for river water quality as support tools is often limited by the availability of reliable data, Kohonen self-organizing maps were used to overcome this limitation and to deal with missing data. This kind of neural network proved to be very useful for predicting missing components and completing the available database describing the chemical state of the river and the operation of the WWTPs.
Keywords. WWTP, river, Kohonen self-organizing maps, mass balance, supervision.
1. Introduction
For many years, the management of hydraulic resources, the efficiency of WasteWater Treatment Plants (WWTPs) and the protection of the environment have been major concerns. Water managers and scientists agree that bad resource management, both qualitative and quantitative, or a plant malfunction can have an extremely negative impact on both the environment (fauna and flora) and human health. One of the most important impacts is bad water quality. In the last decades the water quality of European aquatic systems has decreased, becoming one of the most serious problems to solve. Rivers have suffered a nutrient enrichment (nitrogen and phosphorus) due to large nutrient loads from human activities (i.e. effluents from WWTPs), which leads to a loss of river functionality [1, 2, 3, 4]. As rivers cannot assimilate all nutrients, these are transported downstream, ultimately affecting the coastal ecosystems. As a consequence, nitrogen has become the major contributor to the pollution of coastal marine ecosystems [5]. Moreover, the situation in Mediterranean regions is in general more critical [6], due to the scarcity of water (related to the deep seasonality of river flows) and high tourism impacts (i.e. the high density of population in the lowland river basins or on the coast increases the demand for water resources and the quantity of generated wastewater).
The European Water Framework Directive (WFD) [7] appeared as an attempt to face this situation. Its aim is to achieve a good health of surface waters by 2015. To this end, the WFD establishes a set of steps to follow. The present paper is a contribution to the Têt River studies focused on one of these steps: the estimation and identification of point pollution sources and the evaluation of their impacts on the river. In this sense, the present work takes part in the global development and evaluation of reliable and robust tools [8], with the aim of allowing the control and supervision of the Têt River's lowland area.
The study area covers the last 14 km of the Têt River, the main river of the Pyrénées-Orientales department (south of France), where the major economic activity is tourism. The studied reach is affected by one tributary (La Basse) and two WWTPs, located at Perpignan and Canet-en-Roussillon [9]. As the WWTP effluents are the main nitrogen pollution sources in the reach, the main purpose is to estimate their nitrogen loads within the river and to evaluate their impact on the chemical health of the river. A simplified mathematical model based on mass balances was developed to estimate nutrient levels in the stream and to describe the river water quality. The study reach is divided into six subreaches. Each subreach is represented by one compartment, where the inputs, outputs and retention processes for nutrients (basically nitrogen) are described. However, the application of mathematical models for river water quality as support tools is often limited by the availability of reliable data [10]. In order to solve this and to cope with missing data, Kohonen self-organizing maps were used [11, 12]. This kind of neural network proved to be very useful for completing the available database describing the chemical state of the river and the operation of the WWTPs. The paper presents both developed methodologies (maps and model) and their results.
1 Correspondence to: Stéphane Grieu, Laboratoire de Physique Appliquée et d'Automatique, Université de Perpignan, 52 Avenue Paul Alduy, 66860 Perpignan, France. Tel.: +33 4 68 66 22 40; Fax: +33 4 68 66 22 87; E-mail: [email protected].
2. Materials and methods
2.1. The Kohonen self-organizing map (KSOM)
The KSOM is a neural network based on unsupervised learning [13, 14].
2.1.1. Network structure
The Kohonen self-organizing map consists of a regular, usually two-dimensional, grid of neurons (output neurons). Each neuron i is represented by a weight, or model vector, mi = [mi1, …, min]^T, where n is equal to the dimension of the input vectors. The set of weight vectors is called a codebook [11]. The neurons of the map are connected to adjacent neurons by a neighbourhood relation, which dictates the topology of the map. Usually a rectangular or hexagonal topology is used. Immediate neighbours belong to the neighbourhood Ni of the neuron i. The topological relations and the number of neurons are fixed before the training phase, thus configuring the map. The number of neurons may vary from a few dozen up to several thousand. It determines the granularity of the mapping, which affects the accuracy and generalisation capability of the KSOM.
2.1.2. Training algorithm
Figure 1. Input pattern presentation.
The Kohonen feature map creates a topological mapping by adjusting not only the winner's weights, but also the weights of the adjacent output units in close proximity to, or in the neighbourhood of, the winner (Figure 1). So not only does the winner get adjusted, but the whole neighbourhood of output units gets moved closer to the input pattern. Starting from randomized weight values, the output units slowly align themselves such that, when an input pattern is presented, a neighbourhood of units responds to it. As training progresses, the size of the neighbourhood radiating out from the winning unit is decreased. Initially large numbers of output units are updated, and later on smaller and smaller numbers are updated, until at the end of training only the winning unit is adjusted. Similarly, the learning rate decreases as training progresses, and in some implementations the learning rate decays with the distance from the winning output unit [13].
2.1.3. Prediction of missing components
The first part of the present work is based on the estimation of missing (unknown) values in the database describing the chemical state of the Têt River and the WWTPs operation. The KSOM, after a training phase carried out using the available complete input vectors, can be used to estimate missing values in new input vectors by means of BMU (Best Matching Unit) weight vectors.
Figure 2. Prediction of missing components of an input vector using a KSOM.
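The following sketch (added for illustration; the grid size, learning-rate schedule and neighbourhood schedule are assumptions, not the values used by the authors) shows a minimal SOM training loop with a shrinking Gaussian neighbourhood, together with the BMU-based completion of a vector with missing components.

```python
import numpy as np

def train_som(data, rows, cols, epochs=300, lr0=0.5, sigma0=None, seed=0):
    """Train a 2-D SOM on complete input vectors (one vector per data row)."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((rows * cols, dim))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    sigma0 = sigma0 or max(rows, cols) / 2.0
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)                   # decreasing learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-3      # shrinking neighbourhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            dist2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
            h = np.exp(-dist2 / (2.0 * sigma ** 2))      # neighbourhood function
            weights += lr * h[:, None] * (x - weights)   # move units towards x
    return weights

def complete_vector(weights, x_partial):
    """Fill the NaN components of x_partial with those of the BMU codebook
    vector, searching the BMU on the known components only."""
    known = ~np.isnan(x_partial)
    bmu = np.argmin(np.linalg.norm(weights[:, known] - x_partial[known], axis=1))
    filled = x_partial.copy()
    filled[~known] = weights[bmu, ~known]
    return filled
```

In the spirit of the paper, one map per site and season would be trained on the complete records, and incomplete records would then be passed through a completion step of this kind before being fed to the mass balance model.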
General regression of s on e is usually defined as the expectation of the output s given the input vector e. To motivate the use of the KSOM for regression, it is worth noting that the codebook vectors represent local averages of the training data. Regression is accomplished by searching for the best matching unit using the known vector components of e. As an output, the unknown components of e are approximated by the corresponding components of the codebook vector (Figure 2) [11].
2.2. Mathematical model developed
A simplified mathematical model based on mass balances was developed to estimate nutrient levels in the Têt stream and to describe its water quality.
2.2.1. Mass balance method
A mass balance is an accounting of material entering and leaving a system. The main particularity of mass balances is that they assume the principle of conservation of mass (i.e. matter cannot disappear or be created). Taking this principle into account, the mass that enters a system must either leave the system or accumulate within it, i.e. E + G = S + A, where E denotes what enters the system, G denotes the production term, S denotes what leaves the system and A denotes accumulation within the system. G and A may be negative or positive. Mass balances have been used to design chemical reactors, to analyse alternative processes in the production of chemicals, in pollution dispersion modelling [15], etc. Although they are often developed for the total mass crossing the boundaries of a system, they can also focus on one element (i.e. carbon) or one chemical compound (i.e. ammonium). In the present study, the system is the lowland of the Têt River. The model focuses on different chemical compounds, considered indicators of nutrient pollution of the water.
2.2.2. Model structure
The developed model represents the last 14 km of the Têt River, before it flows into the Mediterranean Sea. In order to represent the system as accurately as possible, it is divided into six subreaches (Figure 3). Each subreach corresponds to one compartment, where the inputs, outputs and retention processes for nutrients are described in a mass balance framework. Because the characteristics of one compartment differ from those of the others, each compartment has its particularities. Compartments 3, 4 and 6 describe what happens in the stream, where nutrient retention processes play an important role (except in compartment 3) and where polluted inputs are not considered. Compartments 2 and 5 describe the subreaches where Perpignan's and Canet-en-Roussillon's WWTPs discharge into the river, respectively. Two inputs to the river derived from both WWTPs are considered: the bypass (on rainy days and/or when the wastewater flow is so high that the WWTP cannot treat all of it) and the effluent (the water resulting from the treatment processes). Finally, compartment 1 describes the subreach where the La Basse tributary flows into the Têt River and where some water is taken for agricultural activities and drinking consumption. In this compartment, as in compartments 2 and 5, nutrient retention processes are not considered, because in these the length of the subreach is not sufficient
for the retention processes. Moreover, the length of the subreaches is another differentiating aspect (i.e. while the lengths of compartments 2 and 5 are considered negligible, the length of compartment 4 is 2.5 km).
Figure 3. Model structure (R: sampling point in the Têt River or the La Basse tributary; S: Têt water intake; B: WWTP bypass; E: WWTP effluent; P: Perpignan's WWTP; C: Canet-en-Roussillon's WWTP).
In all compartments it is considered that there is no accumulation, so the term A is taken as zero. Regarding the production term G, it is zero where nutrient retention processes are not considered. The consideration of nutrient retention in each compartment is the result of the treatment of some Têt experimental data [16], in order to represent the processes within the river as accurately as possible. Where the stream experiences nutrient retention processes, G is composed of self-depuration equations developed within the STREAMES European project [3] for nitrogen compounds. The equations that sum up the model are shown in Figure 4.
Where, for each compartment: n is the number of inputs; m is the number of outputs; Qe is the input flow; Qs is the output flow; Ce refers to the input concentration; Cs refers to the output concentration; i refers to the compound (DOC, nitrites or nitrates); j refers to the compound (phosphates or ammonium); k is the reference number of the compartment; G refers to the nutrient retention, which is 0 when k = 4 and j = phosphates or ammonium, and 0 when k = 6 and j = phosphates.
Figure 4. Model equations for each subreach.

2.3. Linking KSOMs with the mass balance model

In order to cope with the limited amount of reliable data and with missing data (filling the existing gaps in the database), KSOMs were used. The data they provided completed the available database, which was afterwards used by the mass balance model as input data. These data were related to the river and to the bypass and effluent of both WWTPs.
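To illustrate how a trained KSOM can provide the missing values mentioned above, the sketch below finds the best matching unit using only the known components of an observation and reads the missing components from the winning codebook vector (cf. Figure 2). The random codebook, grid size and variable count are purely illustrative assumptions, not the trained maps of Section 3.1.

```python
import numpy as np

def ksom_impute(codebook, sample):
    """codebook: (n_units, n_features) array of trained KSOM codebook vectors.
    sample: 1-D array with np.nan in the positions of the missing components.
    Returns a copy of the sample with missing components taken from the BMU."""
    known = ~np.isnan(sample)
    # Best matching unit: nearest codebook vector on the known components only.
    dists = np.linalg.norm(codebook[:, known] - sample[known], axis=1)
    bmu = codebook[np.argmin(dists)]
    filled = sample.copy()
    filled[~known] = bmu[~known]
    return filled

# Illustrative use with a random stand-in codebook (grid flattened to rows).
rng = np.random.default_rng(0)
codebook = rng.random((10 * 6, 6))          # e.g. a 10x6 map, 6 variables
obs = np.array([0.4, np.nan, 0.7, 0.1, np.nan, 0.3])
print(ksom_impute(codebook, obs))
```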
3. Results
3.1. KSOM results

Ten KSOMs were trained using the available data characterizing the Perpignan WWTP, the Canet-en-Roussillon WWTP and Têt River compartment 1 for two periods of the year (winter and summer). Table 1 summarizes, for each network, the optimal size of its output layer and the optimal number of epochs carried out during its training phase, selected by means of the quantization error. The computing time needed for an efficient training phase of a KSOM was a few hours.

Table 1. KSOM grid sizes and training epochs.

Location                                       Class    Grid size   Quantization error   Epochs
Perpignan's WWTP (bypass/effluent)             Summer   11x5        2.419                500
Perpignan's WWTP (bypass/effluent)             Winter   10x5        2.673                450
Canet-en-Roussillon's WWTP (bypass/effluent)   Summer   9x5         1.987                450
Canet-en-Roussillon's WWTP (bypass/effluent)   Winter   8x5         2.038                450
Têt River compartment 1                        Summer   10x6        1.602                300
Têt River compartment 1                        Winter   9x6         1.793                300
Table 2 presents some results (Ep) of the prediction of missing components using the ten trained KSOM neural networks. Predicted components are displayed in grey and components available in the database are displayed in white. The analysis of the data provided by the KSOMs shows a strong fit between the nutrient data for the Têt and for the Perpignan WWTP (both effluent and bypass) and their historical pattern, according to the season. However, the situation is different for the results of the effluent of the Canet-en-Roussillon WWTP: ammonium and nitrate data fit only moderately with the winter pattern, and the same occurs for total phosphorus in summer. Nevertheless, the error is small enough for the data to be considered suitable for use in the mathematical model.

Table 2. Example of results for the prediction of missing components (in grey: predicted components, in white: available components). Perpignan's WWTP effluent (Ep).

Day        Flow (m3/d)   NH4 (mg/l)   NO2 (mg/l)   NO3 (mg/l)   PT (mg/l)   DBO (mg/l)
02.06.01   36837         39.2         0.85         2            1.4         28
02.13.01   38824         18.8         0.5          4            0.6         27
02.20.01   37291         35.4         1.13         3.2          1.8         25
02.27.01   36730         27           0.2          2.2          0.4         21
06.06.01   36530         44.5         1.2          1.1          2.1         19
06.12.01   39505         28.5         0.63         2.8          1.5         22
06.19.01   41044         28           0.5          2.8          1.5         20
06.26.01   35437         32.3         1.3          3.5          1.8         20
3.2. Estimation of the nitrogen levels and related river water quality

Some relevant results, mainly related to the stream flow and to the in-stream concentrations of dissolved organic carbon (DOC), ammonium (NH4), nitrites (NO2) and nitrates (NO3), were obtained from running the developed mass-balance-based model. These data allowed the estimation of the nitrogen and flow levels within the study reach (the lowland of the Têt River) for February (winter) and June (summer) of
2001. Both months were considered the most suitable to observe the impact of the WWTPs on the river water quality due to their different meteorological features. The strong dependence of the river flow on meteorology (the average flow at the end of winter is 7217±159 l/s and at the beginning of summer 1788±236 l/s) corroborates this. This situation is typical of the Mediterranean area. Average nitrogen levels of the river water along the study stream are represented in Figure 5, where the distance numbers refer to the compartment outputs of the model (except point 1, which refers to R00). The darker line represents winter and the lighter line summer.
Figure 5. Nitrogen compound concentrations along the study site.
The WWTP of Perpignan (Pp) seems to have a greater negative impact on ammonium, ammonia and nitrite concentrations than the WWTP of Canet-en-Roussillon (Pc), because it treats a larger amount of water. However, the situation for nitrates is different, as the impact of La Basse is higher than that of both WWTPs. Indeed, it can be seen in the figure that the outflows of Pp and Pc contribute to the dilution of nitrate in the river. All of these effects can be observed especially under summer conditions, when the water discharge is lower and a larger part of the flow downstream of the WWTPs comes from the effluent and bypass of both WWTPs; the effect of their outflows is magnified by the low stream flow. To sum up, in concentration terms, Pp contributes to increasing the ammonium and nitrite loads within the river, while its impact on nitrates is very small or even positive. These conclusions suggest that the nitrification processes in Pp are only partial and should be improved to reduce its impact on the river. On the other hand, the major impact of Pc is in terms of ammonium. However, in water quality terms the conclusions change, since they refer to qualitative rather than quantitative values. While the river water quality [17] falls in the very good category for nitrates in both seasons and in the very good (winter) / good (summer) category for nitrites along the study site, the situation is very different for ammonium and ammonia. In both cases the quality upstream of the Pp outflows is very good. Downstream of them, it becomes very bad for ammonia in both seasons and for ammonium in summer; in winter the water quality for ammonium downstream of Pp is moderate. These values therefore corroborate the greater impact of Pp on river water quality with respect to ammonium and ammonia.
4. Conclusion
The results obtained for both parts of the work can be considered satisfactory. Kohonen self-organizing maps proved to be very useful for estimating missing values in a database from the available parameters, independently of the nature of the system. They were used in two different systems (river and WWTP), describing the chemical state of the river and the operation of the WWTPs. On the other hand, the simplified model based on mass balances, developed to estimate nutrient levels in the stream and to describe the water quality of the Têt River, also proved to be an efficient supervision tool.
References [1] Sterba O., Mekotova J., Krskova M., Samsonova P. and Harper D., Floodplain forests and river restoration, Global Ecology and Biogeography Letters 6, pp. 331-337, 1997. [2] Comas J., Llorens E., Martí E., Puig M. A., Riera J. L., Sabater F. and Poch M., Knowledge acquisition in the STREAMES project: the key process in the environmental decision support system development, AI Communications, The European Journal on Artificial Intelligence, 16, pp. 253-265, 2003. [3] Llorens E., Desenvolupament d’un sistema expert com a eina per a una millor gestió de la qualitat de les aigües fluvials, Thesis dissertation. Universitat de Girona, DL 2004. ISBN: 84-688-8628-9, 2004. [4] Martí E., Aumatell J., Godé Ll., Poch M. and Sabater F., Nutrient retention efficiency in streams receiving inputs from wastewater treatment plants, Journal of Environmental Quality, 33, pp. 285-293, 2004. [5] Howarth R. W., Human acceleration of the nitrogen cycle: drivers, consequences, and steps toward solutions, Water Science and Technology, 49 (5–6), pp. 7-13, 2004. [6] Llorens E., Comas J., Poch M., Martí E., Puig M. A., Riera J. L., Sabater F., Pargament D., Pusch M., Venohr M. and Vervier P., Integrating point and diffuse pollution in an EDSS for stream management at reach scale, Proceedings of the EWA Conference Nutrient Management. European Experiences and Perspectives (Aquatech 2004). EWA European Water Association. ISBN: 3-937758-30-5, pp.159-182, 2004. [7] Directive 2000/60/EC of the European Parliament and of the Council of 23 October 2000 establishing a framework for community action in the field of water policy, Official Journal L327, 2000-10-23, P0001. [8] Grieu S., Traoré A., Polit M. and Colprim J., Prediction of parameters characterizing the state of a pollution removal biologic process, Engineering Applications of Artificial Intelligence, Vol. 18, Issue 5, pp. 559-573, 2005. [9] Thiery F., Grieu S., Traoré A., Estaben M. and Polit M., Neural networks for estimating the efficiency of a WWTP biologic treatment, Frontiers in Artificial Intelligence and Application: Artificial Intelligence Research and Development, Vol. 131, pp. 25-32, 2005. [10] Deksissa T., Meirlaen J., Ashton P. J. and Vanrolleghem P. A., Simplifying dynamic river water quality modelling: a case study of inorganic nitrogen dynamics in the Crocodile River (South Africa), Proceedings of IWA Conference on water and wastewater management for developing countries. Vol. 2, pp. 332-339, 2001. [11] Alhoniemi E., Simula O. and Vesanto J., Analysis of complex systems using the self organising map, Laboratory of computer and information science, Helsinki University of Technology, Finland, 1997. [12] Simula O., Vesanto J., Alhoniemi E. and Hollmen J., Analysis and modelling of complex systems using the self-organizing map, Laboratory of computer and information science, Helsinki University of Technology, Finland, 1999. [13] Grieu S., Thiery F., Traoré A., NGuyen T. P., Barreau M. and Polit M., KSOM and MLP neural networks for on-line estimating the efficiency of an activated sludge process, Chemical Engineering Journal, Vol. 116, Issue 1, pp. 1-11, 2006. [14] Hong Y-S. T., Rosen M. R. and Bhamidimarri R., Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis, Water Research, Vol. 37, pp. 1608-1618, 2003. [15] Cassell E. A., Meals D. W., Aschmann S. G., Anderson D. P., Rosen B. H., Kort R. L. and Dorioz J. 
M., Use of simulation mass balance modeling to estimate phosphorus and bacteria dynamics in watersheds, Water Science and Technology, 45, pp.157-168, 2002. [16] Ludwig W., Serrat P., Cesmat L. and Garcia-Esteves J., Evaluating the impact of the recent temperature increase on the hydrology of the Têt river (southern France), Journal of Hydrology, 289, pp. 204-221, 2004. [17] SEQ-EAU Version 2. Système d’Evaluation de la Qualité de l’Eau des Cours d’Eau, 2003. http://www.eaurmc.fr/
4. Computer Vision
Two-Step Tracking by Parts Using Multiple Kernels

Brais Martínez, Luis Ferraz and Xavier Binefa
Universitat Autónoma de Barcelona, Computer Science Department, 08193 Bellaterra, Barcelona, Spain
e-mail: {brais, luis.ferraz, xavier.binefa}@upiia.uab.es

Abstract. This paper addresses the problem of tracking in IR image sequences using kernel-weighted histograms. The work builds on the multiple kernel tracking algorithm presented in [3]. We present a novel two-step tracking method that allows tracking independent parts of the same object by giving higher flexibility to the multiple kernel model. This is achieved by a progressive approximation of the movement: first the global displacement is estimated with a multi-kernel estimator, in order to have enough robustness, and then, in a second step, the residual displacement of each part is estimated. The outcome is a method robust to partial occlusions, articulated motions and projectivities over the image, with an application to partial occlusion detection and model update.

Keywords. Computer Vision, Tracking, 3D movements, Multikernel tracking
1. Introduction

Tracking is a widely explored topic with a great number of applications, such as security, military tasks or traffic control. There are several approaches to this problem, treating a large number of the difficulties involved, such as occlusions, changes in luminance, changes in model aspect, projectivities, etc. Some of the best-known algorithms of recent years are the kernel tracking methods, such as mean shift [1], a quite simple method yet robust to many of the usual problems, and particle filters [8], an effective method for dealing with clutter and occlusions. There have been other approaches from a perspective similar to that of mean shift, trying to overcome some of its limitations. For instance, [2] is an adaptation of mean shift for scale-space tracking and [3] tries to approximate the parameters of the projective transformations between images. In spite of the popularity and good performance of kernel tracking methods, several problems lead to tracking failure; convergence to background clutter or to a small region of the target (variable bandwidth) limits their performance. Our objective is to partially solve these performance deficiencies by improving two different aspects: the consistency of the model, and the flexibility of the model to allow projective transformations and non-rigid movements. The particularities of IR sequences make the model definition a highly demanding problem, specifically because of the lack of contrast, the low signal-to-noise ratio and the limited information given by a single colour channel. The model improvement is based on the use of models that pick information
from several parts of the image. It has been shown in recent literature that such models can define the objects of interest better than global descriptors. This tactic is used in [4,5], where an object is modelled by what is called a constellation [6] of several parts. That work had the purpose of representing objects in a very descriptive way, so that small differences between similar classes could be detected. Representing objects by their parts has several precedents in detection tasks, but it is a new issue for tracking problems. The work in [3] opened the possibility of performing kernel tracking by parts, although this is not the main focus given by the authors. On the other hand, the improvement in performance given by a flexible model is shown, in the particle filter context, by [9], which models dancing bees using several parts, allowing more flexible movements.
This paper is organized as follows. First we present the model representation and the tracking method, based on [3,4]. This is not an original representation, although it is used from a new point of view. We then present a new two-step tracking method. Section 4 analyzes the utility of this new method and Section 5 is devoted to the presentation of several experimental results showing the performance of the method against partial occlusions, 3D rotations and movements out of the image plane, together with some results on its model update properties.

2. Tracking with Multiple Kernels

We use the method proposed in [3] for multiple kernel tracking. The representation is based on weighted histograms, where the weights depend on the values of some predefined kernel functions. Given a model histogram, tracking consists of finding the most similar region in the sense of the SSD distance (Sum of Squared Differences, also known as the Matusita metric). Each point has an associated numeric value representing the similarity between the model histogram and the histogram given by the kernels centred on this point. Multiple kernel tracking has several advantages over single kernel tracking. We have chosen this approach to reach two main goals: in the first place, to perform a tracking of the different parts composing a target and, in the second place, to improve the robustness of single kernel trackers. This second property is due to the fact that the description provided by a combination of kernels is much more descriptive than the representation based on a single kernel, as shown in Figure 1. Since the representation distinguishes the target from clutter more robustly, the chances of losing the track because of a background distraction are much lower. This tactic also makes more information available about the behaviour of the target, such as the matching of each part with the model. The representation of the target is based on weighted histograms. Each pixel xj increases the value of the corresponding histogram bin qi by the weight assigned to the pixel by the kernel K. If we call c the central position of the kernel:
qi(c) = C Σj K(xj − c) δ(b(xj), i)                                   (1)
where b assigns the point xj to its bin, δ is the Kronecker function (its value is 1 if its two arguments are equal and 0 otherwise) and C is a normalization factor. In order to define a histogram, it is necessary to specify the kernel and its centre position. The expression in (1) can be represented more simply by the use of a matrix U, with Ui,j = δ(b(xj), i), so that q(c) = Ut K(c). The goal of the algorithm is to find the point c which minimizes the difference between the model representation, the histogram q, and the histogram centred on c, p(c). The Matusita metric is used to compare these histograms:
O(c) = || √q − √p(c) ||² = Σi ( √qi − √pi(c) )²                       (2)
A kernel is a differentiable function, so the histogram is also differentiable, as it only involves sums of kernel values. As a notation, if we call ∂K(c)/∂v = JK(c), we have an expression of the derivative of the histogram in a direction v:
∂p(c)/∂v = Ut JK(c)                                                   (3)
Now it is possible to differentiate the histogram and, consequently, to use the first-order Taylor approximation:
√p(c + Δc) ≈ √p(c) + ½ d(p(c))^(−1/2) Ut JK(c) Δc
where d(p) is a matrix with p on its diagonal. This leads to an estimation of the displacement of the target, Δc. Using the notation W = d(p)^(−1/2) Ut JK, it is possible to express the minimizing Δc as:
Δc = 2 (Wt W)^(−1) Wt ( √q − √p(c) )                                  (6)
Note that this expression is a closed-form minimization, involving only some matrix multiplications and the inverse of a 2 x 2 matrix, resulting in a very fast algorithm. To construct a model given by n kernels, where each kernel generates a b-bin histogram, it is only necessary to store the histograms one beside the other, in order to construct an n·b bin histogram. The model is defined by the kernel functions, Ki = Ki(θ),
and by their relative positions. If the center of the region is c, the position of the kernel i will be c + Dci:
The histograms extracted with each kernel Ki centred on c + Dci define the combined histogram at the position c. With these histograms and the iterative process given by (6), an estimation of the displacement is obtained, cnew = c + Δc. This estimation is inexact because of the use of a first-order Taylor approximation, resulting in an iterative procedure. It converges, if the initial conditions are good enough, to the part of the image with maximum similarity to the model. In order to clarify why this method is a good choice, a few considerations should be taken into account. In the first place, performing several trackings at once is not the same as performing a combined tracking; the structure needed for this compound tracker arises naturally in this algorithm, so it is especially suitable. In the second place, there is a reason related to the computational cost. In order to estimate the displacement of the target on each new frame, the only things needed are the histograms and their derivatives. It is possible to organize the histogram calculation so that the values of the kernels are computed offline. On each new frame it is necessary to calculate the matrix U. Taking into account the equations of the histogram and its derivative, (3) and (6), it can be seen that the most expensive calculation, the matrix U, is performed once per frame, independently of the number of kernels. This simplifies the calculations, since the growth in computational requirements with each new kernel is very acceptable.
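The sketch below illustrates the representation used in this section: kernel-weighted grey-level histograms stacked over several kernels at fixed offsets from a centre and compared with the Matusita distance. For brevity, the displacement is estimated by evaluating candidate shifts on a small grid instead of the closed-form Newton-like step of [3]; the Gaussian kernel, bin count and synthetic images are illustrative assumptions.

```python
import numpy as np

def kernel_histogram(img, centre, bandwidth=8.0, bins=16):
    """Histogram of grey levels weighted by a Gaussian kernel centred at `centre`."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    weights = np.exp(-((ys - centre[0]) ** 2 + (xs - centre[1]) ** 2) / (2 * bandwidth ** 2))
    idx = (img.astype(float) / 256.0 * bins).astype(int).clip(0, bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), weights.ravel())
    return hist / hist.sum()

def stacked_histogram(img, centre, offsets):
    """Concatenate the histograms of all kernels placed at centre + offset."""
    return np.concatenate([kernel_histogram(img, (centre[0] + dy, centre[1] + dx))
                           for dy, dx in offsets])

def matusita(p, q):
    return np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

def estimate_shift(img, model_hist, centre, offsets, radius=4):
    """Grid search over small displacements (a stand-in for the closed-form update)."""
    best, best_d = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            d = matusita(model_hist,
                         stacked_histogram(img, (centre[0] + dy, centre[1] + dx), offsets))
            if d < best_d:
                best, best_d = (dy, dx), d
    return best

# Illustrative use on a synthetic image with a bright blob that moves 3 pixels.
rng = np.random.default_rng(1)
img0 = rng.integers(0, 60, (64, 64)); img0[20:30, 20:30] = 220
img1 = rng.integers(0, 60, (64, 64)); img1[20:30, 23:33] = 220
offsets = [(0, 0), (0, 5), (0, -5)]                  # three kernels, as in Figure 1
model = stacked_histogram(img0, (25, 25), offsets)
print(estimate_shift(img1, model, (25, 25), offsets))   # expected close to (0, 3)
```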
3. Two-Step Tracking by Parts

For a metric such as the Matusita, it is easy to define a probability by the expression:
where d is a normalization factor, usually very expensive to compute, so in practice it is avoided. Each kernel generates a different probability function, depending on the internal parameters of the kernel. More specifically, they have different mode positions, which is crucial for this method. As we translate the problem onto this kind of surface, it is possible to speak of modes (maximization) instead of minimization. As said before, using the tool of the probability surface, it is possible to notice how the compound estimator, taken from the whole kernel set, is much more robust than the simple estimators generated by one kernel (Figure 1). Each kernel set can be thought of as an estimator of the displacement obtained by running the maximization algorithm. The estimator of the complete set is called the compound estimator and each single kernel generates a single estimator. Using these definitions, the first task is to estimate the movement of the whole target with the compound estimator, seeking
for robustness. Once this is done, the remaining task is to estimate the residual movements, that is, the movement of the parts with respect to the body (for example, an arm of a walking man). This second estimation is not as hard as the first one. If the compound estimator is set properly, the initial position is close to the solution (the global movement is potentially much larger than the relative ones), so only a small adjustment is needed on each new frame. Owing to this fact, this second maximization can be performed with the individual estimators, which are less robust, with guarantees of convergence.
Figure 1. The top left image shows a car, which the model histogram represents. The other two graphics show a surface where each point has the value of the similarity between the model histogram and the histogram centred on this point. The graphic in the centre is generated by a single-kernel model and the right one is generated by 3 kernels. The model generated by 3 kernels is obviously more descriptive against clutter: it generates a clearer mode and the rest of the surface has much lower values.
If each part i is considered to be centred on the point ci^new, the two-step tracking is defined as:
When the convergence of each individual part is reached, it is possible to calculate its similarity with respect to the model. This similarity can be interpreted as a measure of the quality of the estimator. With a fixed structure, the global maximization does not imply the maximization of the similarity of each single part (Figure 5). Because of this deficiency it is not possible to know whether the mode has disappeared or is simply in another part of the image. Our method enables the convergence of each single estimator to its mode, so if a part does not match the model it is because of a mode disappearance (due, for example, to an occlusion). This control enables a detector of partial occlusions, and its results are shown below.
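A minimal sketch of one frame of the two-step scheme is given below, with the compound and individual estimators, and the per-part similarity used for occlusion detection, passed in as functions; the dummy estimators, the occlusion threshold and the toy geometry are assumptions for illustration only.

```python
import numpy as np

def two_step_track(estimate_global, estimate_part, similarity, centre, rel_pos, occ_thresh=0.6):
    """One frame of the two-step scheme (a sketch, not the authors' code).
    estimate_global(c): displacement of the whole object from the compound estimator.
    estimate_part(i, c_i): residual displacement of part i from its single-kernel estimator.
    similarity(i, c_i): similarity of part i to its model histogram, in [0, 1].
    centre: current object centre; rel_pos: list of relative part positions Dc_i."""
    # Step 1: robust global displacement with the compound (multi-kernel) estimator.
    centre = centre + estimate_global(centre)
    parts, occluded = [], []
    for i, dc in enumerate(rel_pos):
        # Step 2: small residual adjustment of each part with its individual estimator.
        c_i = centre + dc + estimate_part(i, centre + dc)
        parts.append(c_i)
        # Per-part similarity check: a vanished mode suggests a partial occlusion.
        occluded.append(similarity(i, c_i) < occ_thresh)
    return centre, parts, occluded

# Illustrative run with dummy estimators (the real ones come from the kernel histograms).
rng = np.random.default_rng(2)
centre0 = np.array([25.0, 25.0])
rel = [np.array([0.0, 0.0]), np.array([0.0, 5.0]), np.array([0.0, -5.0])]
c, parts, occ = two_step_track(lambda c: np.array([0.0, 3.0]),
                               lambda i, c: rng.normal(0, 0.2, 2),
                               lambda i, c: 0.9 if i != 2 else 0.3,
                               centre0, rel)
print(c, occ)   # part 2 is flagged as occluded in this toy example
```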
4. Model Update and Vanishing Modes

In colour sequences but, especially, in IR imagery, the problem of model update is of great importance [7]. Along a sequence it is very likely to find changes in the grey level of the target region. There are also other problems, such as zooms or 3D rotations
of the targets, that produce big changes in the model. The usual update tactic, where the model is a combination of the last convergence regions, may produce the common effect of template drift. This problem is characterized by a progressive impoverishment of the model, resulting in a slow tracking failure. On sequences with a high noise level (as in IR) and in the presence of projective transformations of the target aspect this tactic becomes useless. The algorithm presented here, viewed from the model update point of view, is not as ambitious as general template updates but, on the other hand, it does not take the same risks. This tactic makes it possible to have a model similar enough to the real state of the target all along a sequence. Figure 7 shows how the model can keep close to the real image with the relative position update of the two-step tracking.
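The paper states that the relative positions are updated rather than the model histograms; the exact update rule is not given, so the sketch below uses an assumed simple blending of the old relative positions towards the converged part centres.

```python
import numpy as np

def update_relative_positions(centre, part_centres, rel_pos, rate=0.5):
    """Move the relative positions Dc_i towards the converged part centres.
    The blending rate is an illustrative choice; the histograms themselves are
    left untouched, which is what keeps the scheme safe from template drift."""
    return [(1 - rate) * dc + rate * (np.asarray(ci) - np.asarray(centre))
            for dc, ci in zip(rel_pos, part_centres)]

# Illustrative use: after a zoom, the parts have drifted slightly apart.
rel = [np.array([0.0, 0.0]), np.array([0.0, 5.0])]
new_rel = update_relative_positions([25.0, 25.0], [[25.0, 25.2], [25.0, 31.0]], rel)
print(new_rel)   # relative positions move halfway towards the observed layout
```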
Figure 2. The surfaces on the bottom show the similarity surface (as in figure 1). The two car images on the top are spaced 25 frames. The right image has just one mode, while the first image has two different modes corresponding to the top left and top right part of the car. This is due to the fact that the model changes on the central part of the car. The white spot is the almost continuous
As outlined previously, the method presented in this paper is especially suited to dealing with vanishing modes. It is very common, due to partial occlusions or to changes in the model, to find that some modes of the similarity function have disappeared. Since the method follows the modes corresponding to each single kernel, it is possible to recognise when these modes have disappeared. A method with a rigid structure can give a bad response in other situations as well, as in Figure 5. Our method makes it possible to recognise these vanishing modes and to perform tracking over a partial
occlusion without deficiencies in the model. Some experimental results on partial occlusions are given later in Figure 3. Figures 1 and 2 show situations with vanishing modes due to a change in the target region and to partial occlusions, respectively.

5. Experimental Results

The experiments showed the usefulness of the two-step tracking method applied to sequences with partial occlusions, as in the example in Figure 3. Figure 4 shows the similarity between the model and the region of convergence on each new frame, which is considered as a measure of the quality of the method: when this value is too low, convergence to clutter is more likely. There are also examples of experiments made on sequences presenting occlusions and projectivities deforming the aspect of the model.

Acknowledgements

This work was partially funded by Cicyt grant no. TIC2003-06075. Thanks are also due to Tecnobit SL for providing the IR images.
Figure 3. The different points show the position of the three parts of the body. Green points correspond to normal parts and red to the occluded ones. The relative positions of the occluded parts remain constant. At the end of the occlusion, the legs are recovered.
Figure 4. The graphics correspond to the sequence in Figure 3 and show the similarity between the model and the convergence region over 100 frames. The left graphic corresponds to the usual multikernel algorithm: the model is no longer representative in the frames presenting occlusions. The right graphic shows how our method detects the occlusion and keeps the model representative all along the sequence.
Figure 5. Over a sequence including a zoom, the flexible relative positions make it possible to adapt to the change in the target (upper row). The rigidity of the model forces the tracker to lose the head, as shown in the lower row; the probability falls dramatically for this reason.
Figure 6. The tracking is performed despite the difficulties of this sequence, with long displacements between frames, heavy clutter and a great deformation of the model. In the last image, the parts are perfectly positioned, even though the motor (the white part) is occluded. The method also detects this occlusion and maintains the tracking perfectly.
Figure 7. Graphics with the similarity values over a tracking sequence of 180 frames affected by a zoom (the ideal value is 1). The left graphic shows the values with the two-step tracking. The right one shows the values without the update of the relative positions.
References
[1] D. Comaniciu, V. Ramesh and P. Meer, “Kernel-based object tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25(5), pp. 564–577, 2003.
[2] R. Collins, “Mean-shift blob tracking through scale space,” in Computer Vision and Pattern Recognition (CVPR’03), June 2003, pp. 234–240, IEEE.
[3] G.D. Hager, M. Dewan, and C.V. Stewart, “Multiple kernel tracking with SSD,” in Computer Vision and Pattern Recognition (CVPR’04), 2004, pp. 790–797.
[4] A. Holub and P. Perona, “A discriminative framework for modeling object class,” Computer Vision and Pattern Recognition (CVPR’05), 2005.
[5] D. Nair and J.K. Aggarwal, “Bayesian recognition of targets by parts in second generation forward looking infrared images,” Image and Vision Computing, vol. 18, no. 10, pp. 849–864,
July 2000.
[6] R. Fergus, P. Perona, and A. Zisserman, “Object class recognition by unsupervised scale-invariant learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Madison, Wisconsin, June 2003, vol. 2, pp. 264–271.
[7] I. Matthews, T. Ishikawa and S. Baker, “The Template Update Problem,” in Proceedings of the British Machine Vision Conference, September 2003.
[8] M. Isard and A. Blake, “Condensation – conditional density propagation for visual tracking,” International Journal of Computer Vision, vol. 29, no. 1, pp. 5–28, 1998.
[9] G. Schindler and F. Dellaert, “A Rao-Blackwellized Parts-Constellation Tracker,” in ICCV Workshop on Dynamical Vision, International Conference on Computer Vision, 2005.
n-Dimensional Distribution Reduction Preserving its Structure
Eduard VAZQUEZ 1, Francesc TOUS, Ramon BALDRICH and Maria VANRELL
Computer Vision Center, Dept. Ciències de la Computació, Edifici O, Universitat Autònoma de Barcelona, 08193 Bellaterra (Barcelona), Spain
Abstract. The work proposed in this paper deals with n-dimensional distribution reduction. Instead of working only with local maxima, we argue that it is also important to preserve how these maxima are joined, in order to better understand them. The main idea is to maintain the topological structure of the distribution so as to facilitate the interpretation of the data. To achieve this reduction we work with the creaseness definition and with ridge extraction, as a structure descriptor of an n-dimensional surface. In this way we have obtained promising results that could be applied to a wide range of problems.
Keywords. n-dimensional reduction, creaseness, ridge extraction
1. Introduction

The interpretation of n-dimensional distributions, when n is greater than 2, is a problem which has neither an intuitive nor a unique solution. Depending on the needs of the problem, more or less accuracy can be tolerated. Although it is often enough to find a collection of isolated representative points, at other times the most important information is given by the relationship between these expressive points. N-dimensional distributions are n-dimensional data point sets with weighted values, such as, for instance, the colours in an image, as Figure 1 shows. In this paper we suggest a method which reduces the amount of information of n-dimensional distributions, achieving a useful simplification of the data. In this interpretation the connectivity of the different representative points is guaranteed, in order to keep the structure of the surface distribution. To achieve a useful spatial reduction of an n-dimensional distribution, we must keep just the most important and representative information. One simple approach is to use local maxima as significant points of an n-dimensional distribution.
1 Correspondence to: Eduard Vazquez, Computer Vision Center, Edifici O, Universitat Autònoma de Barcelona, 08193 Bellaterra (Barcelona), Spain. E-mail: [email protected]
This is because
when a local maximum occurs, it is assumed to represent an important information point. But if we keep just these local maxima, information about the structure of the distribution is lost. The main problem of this approach is that a local maximum is single-point information, which is not enough to achieve a valid description of distributions with complex or unclear structures.
Figure 1. An example of a 3-dimensional distribution: (b) RGB-histogram of (a).
Figure 2. Different possible shape interpretations for m1 and m2. Without p1, p2 and p3, it is not possible to distinguish between (a) and (b).
Figure 2(a) shows a surface with two local maxima, m1 and m2, and a collection of points pi = p1, p2, p3. With m1 and m2 alone, the shape of the surface is not well enough described. Without the points pi, it is not possible to know whether m1 and m2 are isolated points or are connected. These two different interpretations mean the difference between the surfaces shown in Figures 2(a) and 2(b). The problem of finding the points pi is directly related to the problem of ridge extraction [1]. In this framework, m1 and m2 are the peaks of the landscape, and pi are
ridge points. Hence, the ridge fulfils both requirements: first, the local maxima appear, and second, the shape of the structure is represented by the curve which joins the peaks along the path of maximum height. What this work proposes is to use a method that measures how important each point is, in order to discard the non-representative ridges. It then defines an algorithm for ridge extraction to achieve a compact representation of the data which allows a given distribution to be easily studied and interpreted.
2. Creaseness

Regarding the problem of finding a method that correctly measures how important each point is, we focus the solution on creaseness analysis. The work in [2], based on the structure tensor, provides a useful tool in our framework and is, in fact, the method used in the present work. The creaseness analysis associates to every point its likelihood of being a ridge point.
Figure 3. (a) Boundary C on a 3D regular grid according to the six nearest neighbours. (b) Geometry involved in the definition of k̃d at pi.
Let w(x) = ( ∂f/∂x1, ..., ∂f/∂xd ) be the gradient vector at x, and G(x; σ) a d-dimensional Gaussian centred at x, with standard deviation σ. Then, we define the structural tensor as:
S(x; σ1) = G(x; σ1) ∗ ( w(x) wt(x) )
where (w(x) wt(x)) is the Hessian matrix and σ1 the integration scale. This means that we find the eigenvalues and eigenvectors of a neighbourhood centred at x, with size proportional to σ1. Then, let w'(x; σ1) be the eigenvector corresponding to the largest eigenvalue of S(x; σ1). Because in the analysis of S(x; σ1) opposite directions are treated equally, we recover such directions as follows:
where sign(x) takes the value +1 if x > 0, −1 if x < 0 and 0 if x = 0. The structural tensor analysis assumes that every point has a preferred orientation. When combined with the normal vector it gives us a measure of how much the surface orientation and the structure tensor direction agree. Hence, if B = x1, ..., xr is the set of points that form a discrete boundary C of a neighbourhood centred on x, and N = n1, ..., nr is the set of unit normal vectors of B, as Figure 3 shows, then the multilocal creaseness measure kd at xi in the discrete case is:
(3)
The creaseness value is the average of the dot product between the dominant gradient vector of a neighbourhood and the normal vector of each point belonging to that neighbourhood. In other words, the creaseness measures the degree of similarity between the direction of a point and the dominant direction of this point's neighbourhood. The σ parameter determines the relationship of a point with its neighbours required for it to be considered a potential ridge point, i.e. the greater σ is, the more relevant a point has to be to become a ridge point. Figure 3(b) shows an example.

3. Method

Because the dominant gradient vector of a neighbourhood is the direction where the most meaningful change in orientation happens, the structural tensor is a good descriptor of the hypersurface shape, since the shape of a surface can be defined by the changes on it; for instance, a cube's shape can be defined by its twelve edges. Hence, in order to find the most representative information of a surface, we must keep just the points with high creaseness values. These points belong to the curves that follow the landscape's gradient direction from one maximum to another through a saddle point, the so-called watershed curves [3].

3.1. Ridge Extraction

The problem of ridge extraction has been widely treated in the existing literature. Watershed algorithms [4], based on an immersion process, are a common approach. Imagine we pierce each minimum of the landscape and plunge the landscape into a lake at a constant vertical speed. The water entering through the holes floods the landscape's
surface. During the flooding, two or more floods coming from different minima may merge; we build a dam at these points. At the end of the process only the dams emerge, and they constitute the watershed of the landscape. In practice, in order to achieve good results, it is not enough to pierce the minima. Due to the difficulty of finding the local minima and to the irregularities of the landscape, we must put marks on the points/areas that will be pierced. But independently of the criteria selected to find the potential marks, there exist some problems that produce undesired results with this flooding process. A common way to find the marks is to work with the negative values, the valleys, which divide the different existing mountains. But isolated ridges with negative values on just one side, or even with only positive values, become a problem for this criterion. Nonetheless, in the existing literature some other approaches, commonly based on singular value decomposition, deal with the ridge extraction problem [5,6]. Our approach is based on these algorithms. Ridges are commonly defined as a local maximum in one direction [7]. Intuitively, a ridge can be defined as the path you follow on a mountain where there is always a drop both to your left and to your right, e.g. a connected sequence of pixels having intensity values which are higher than those neighbouring the sequence. Formally, to find a ridge we must search for the points which reach a local maximum in the gradient direction. The algorithm applied to a digital image is as follows. Let p1 be a point of a ridge. The height of p1 is greater than that of the other nd − 2 neighbours, where nd is the number of neighbours in a d-dimensional space; for instance, for d = 2, nd = 8, and for d = 3, nd = 26. Because a ridge is a line, p1 has two neighbours, p2 and p3, which belong to the ridge. Then, with neighbours p2 and p3, the following cases can occur: a) p1 is a local maximum (Figure 4(a)). b) p1 is not a local maximum; then just one of p2 and p3 can be strictly higher than p1 (Figure 4(b)). c) p1 is not a local maximum and both p2 and p3 are higher. This case will be treated when we discuss singularities (Figure 4(c)).
Figure 4. Different cases of a ridge point: (a) local maximum, e.g., a peak; (b) p1 is a ridge point but not a peak, then, just p2 can be higher, and (c) p1 is a local minimum of the ridge because both p2 and p3 are higher.
If we approach the problem through the question of how many higher neighbours a ridge point can have, the problem is reduced to a simple classification of points: points with no more than one higher neighbour will be ridge points. As a result, each point is labelled with its number of higher neighbours. Then, we keep the points labelled with a value lower than two, which gives the points of cases (a) and (b). Finally, each different sequence of connected points is labelled as a different ridge, in order to distinguish one ridge from another (a sketch of this classification is given after the algorithm outline below). However, as introduced before, there exist singularities that this classification method does not take into account. The problem arises when two or more ridges converge. In this case the convergence point, which effectively is a ridge point, has more than one higher neighbour. Thus we must define some criterion in order to include these singularities in the final result. This criterion has to distinguish between a convergence point and a point that is just a neighbour of a ridge. For both kinds of point, two or more possible ways to climb up exist but, for a convergence point, different ways lead to different ridges; this is easy to detect because each ridge has a different label. Figure 5 shows a graphical example of the singularities. Finally, considering all these issues, we present a scheme of the distribution reduction algorithm: 1. Apply the creaseness operator. 2. Ridge extraction: (a) find and extract ridges; (b) detect singularities to join ridges; (c) label every ridge and its points.
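The following sketch implements the point classification just described for the 2-dimensional case: each point is labelled with its number of strictly higher 8-neighbours, points with fewer than two higher neighbours are kept, and connected sequences are labelled as separate ridges. The treatment of singularities (joining converging ridges) is omitted for brevity, and the toy landscape is an illustrative assumption.

```python
import numpy as np

def ridge_points(height):
    """Keep points with fewer than two strictly higher 8-neighbours (cases (a), (b))."""
    h = np.pad(height.astype(float), 1, mode="constant", constant_values=-np.inf)
    higher = np.zeros(height.shape, dtype=int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neigh = h[1 + dy:1 + dy + height.shape[0], 1 + dx:1 + dx + height.shape[1]]
            higher += (neigh > height).astype(int)   # count strictly higher neighbours
    return higher < 2

def label_ridges(mask):
    """Label each 8-connected sequence of ridge points as a separate ridge."""
    labels, current = np.zeros(mask.shape, dtype=int), 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        current += 1
        stack = [(y, x)]
        while stack:
            cy, cx = stack.pop()
            if labels[cy, cx]:
                continue
            labels[cy, cx] = current
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = cy + dy, cx + dx
                    if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                            and mask[ny, nx] and not labels[ny, nx]:
                        stack.append((ny, nx))
    return labels

# Toy landscape: a ridge running along the middle row of a small grid.
ys, xs = np.mgrid[0:9, 0:9]
landscape = np.exp(-((ys - 4.0) ** 2) / 4.0)    # high along row 4, independent of xs
print(label_ridges(ridge_points(landscape)))
```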
Figure 5. p1 is a convergence point because every possible step to climb leads to a different ridge r1, r2 or r3, so p1 has to be added as a ridge point; p2 is not a ridge point because there are different steps which lead to the same ridge.
Figure 6. (a) original image; (b) rgb histogram; (c) dominant colours (ridges) extracted from the histogram; (d) labeled image from results in (c).
4. Experimental Results and Conclusions

Our operator can be useful in any problem which needs a structural reduction of a cloud of n-dimensional weighted points in order to achieve a simpler representation of the data. As an example, we apply our method to colour image histograms. An rgb histogram is a 3-dimensional representation with weighted points which cannot be easily interpreted. First, we use the creaseness operator on the colour histogram; the dominant direction of the structural tensor is directly related to the dominant colour. The second step is the ridge extraction, which obtains the most representative colours in the image. The result is a 3-dimensional curve that preserves the main structure of the colour histogram. To evaluate how the operator performs we have grouped the rgb points which belong to the influence area of each ridge ri and assigned them the average of the colours in ri. Figure 6(a),(b) shows an image and its rgb histogram, Figure 6(c) shows the ridges extracted, and Figure 6(d) the grouping achieved.
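The grouping step of Figure 6(d) can be sketched as follows: assuming the ridge colours have already been extracted with the creaseness and ridge extraction steps above, every pixel is assigned to its nearest ridge and recoloured with the average colour of its group. The synthetic image and the hand-picked ridge colours are assumptions for illustration.

```python
import numpy as np

def label_by_ridges(image, ridge_colours):
    """Assign every pixel to its nearest ridge colour and replace it by the mean
    colour of the pixels in that group. `ridge_colours` stands in for the colours
    of the extracted ridge points."""
    pixels = image.reshape(-1, 3).astype(float)
    ridges = np.asarray(ridge_colours, dtype=float)
    # Distance of every pixel to every ridge colour; pick the closest ridge.
    d = np.linalg.norm(pixels[:, None, :] - ridges[None, :, :], axis=2)
    assignment = np.argmin(d, axis=1)
    out = np.empty_like(pixels)
    for r in range(len(ridges)):
        members = assignment == r
        if members.any():
            out[members] = pixels[members].mean(axis=0)
    return out.reshape(image.shape).astype(np.uint8), assignment.reshape(image.shape[:2])

# Illustrative use on a synthetic two-colour image with noise.
rng = np.random.default_rng(3)
img = np.zeros((32, 32, 3), dtype=float)
img[:16] = [200, 40, 160]
img[16:] = [40, 160, 60]
img = (img + rng.normal(0, 10, img.shape)).clip(0, 255).astype(np.uint8)
quantised, labels = label_by_ridges(img, [[200, 40, 160], [40, 160, 60]])
print(np.unique(labels))   # two groups, one per dominant colour
```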
Figure 7. (a) original image; (b) rgb histogram; (c) dominant colours (ridges) extracted from the histogram using σ =0.1; (d) labeled image from results in (c); (e)(f) same as (c) and (d) with σ =0.7
The operator can be tuned to fit the specifications of each problem. As said before, the value of σ changes the creaseness results and thus the ridges finally extracted, making it possible to adjust the sensitivity of the operator to the importance of the existing ridges. This effect is shown in Figure 7, where (a),(b) show the original image and its histogram, (c),(d) the ridges extracted and the resulting image with σ = 0.1, and (e),(f) the same as (c) and (d) with σ = 0.7. In the first case, three representative colours are extracted, as opposed to the two colours extracted with σ = 0.7. By increasing σ, purple and shadowed purple are represented with the same colour, because they are similar to each other but not similar enough to green. Our proposed method achieves promising results. Nonetheless, these are preliminary results and have to be improved. Further work is to find a criterion to classify ridges in order to effectively adapt the results to the requirements of any problem. For instance, in Figure 6(c) a ridge appears near white; this ridge comes from the specular reflectance and has just three points. Another approach is to find ridges which have a longer nearby ridge, e.g. a more representative ridge with practically the same average colour. But for other problems the short isolated ridges may be important information. The method will be tested on a texture description problem and on a colour constancy method to evaluate its results on different problems.
Acknowledgements This work has been partially supported by project TIN 2004-02970 (Ministerio de Educación y Ciencia).
References [1] L. Wang and T. Pavlidis, “Direct gray-scale extraction of features for character recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 10, pp. 1053–1067, 1993. [2] A.M. López, D. Lloret, J. Serrat, and J. J. Villanueva, “Multilocal creaseness based on the level-set extrinsic curvature,” Computer Vision and Image Understanding: CVIU, vol. 77, no. 2, pp. 111–144, Feb. 2000. [Online]. Available: http://www.idealibrary.com/links/doi/10.1006/cviu.1999.0812; http://www.idealibrary.com/links/doi/10.1006/cviu.1999.0812/pdf; http://www.idealibrary.com/links/doi/10.1006/cviu.1999.0812/ref [3] A. M. López, F. Lumbreras, J. Serrat, and J. J. Villanueva, “Evaluation of methods for ridge and valley detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, no. 4, pp. 327–335, 1999. [4] J. M. Gauch and S. M. Pizer, “Multiresolution analysis of ridges and valleys in grey-scale images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 6, pp. 635–646, 1993. [5] S. Aylward and E. Bullitt, “Initialization, noise, singularities, and scale in height ridge traversal for tubular object centerline extraction,” 2002. [Online]. Available: citeseer.ist.psu.edu/aylward02initialization.html [6] S. R. Aylward, J. Jomier, S. Weeks, and E. Bullitt, “Registration and analysis of vascular images,” Int. J. Comput. Vision, vol. 55, no. 2-3, pp. 123–138, 2003. [7] A. Bishnu, P. Bhowmick, S. Dey, B. B. Bhattacharya, M. K. Kundu, C. A. Murthy, and T. Acharya, “Combinatorial classification of pixels for ridge extraction in a gray-scale fingerprint image.” in ICVGIP, 2002.
5. Planning and Robotics
Seat allocation for massive events based on region growing techniques

Víctor Muñoz, Miquel Montaner, Josep Lluís de la Rosa
{vmunozs,montaner,peplluis}@eia.udg.es – Universitat de Girona

Abstract. This paper provides a technique for solving the seat allocation problem for massive events, which consists of distributing people across the stadium while respecting the spectators' preferences and satisfying the rules of the administration. The method is based on a region growing technique inherited from the Computer Vision field, with which a first candidate solution is obtained, followed by an improvement process.
1. Introduction

Massive events, such as sporting events, involve a huge number of spectators. This enormous flow of people requires good organization and control so that no problems arise during the event. In this kind of event, such as the Olympic Games, Grand Prix races or soccer world cups, citizens who wish to attend usually buy a ticket that allows them to enjoy the competition in a certain kind of seat of the stadium with several features, but they do not buy the physical seat at the sports ground. When all the tickets have been sold, one of the duties of the organizing committee is to distribute the available seats among the tickets sold. Distributing persons is a complex process when myriads of tickets have to be assigned to multiple stadium zones. An additional difficulty is the fact that tickets are often not sold to a single person but in groups, so that a group of people come together. Until now, the distribution process has been performed manually. This task takes a lot of time, is repetitive and tedious, and is complex due to the constraints imposed by the organization committee. Therefore, a system that automatically assigns the tickets to the seats, taking into account all the constraints, would be helpful for the organizers. Our work is concerned with the development of a tool that supports such an allocation task. In particular, we have applied search techniques combined with region growing techniques from the Computer Vision field, obtaining significant results. The developed techniques have been applied to the data provided by the FIA (Federation Internationale de l'Automobile) regarding the 2003 F1 championship, one of the most important sporting events in terms of attendance. This paper is organized as follows. Section 2 describes the problem, section 3 provides the cost function used in the optimization method, and section 4 describes the method itself. In section 5 we analyse the results obtained and in section 6 we relate our research to previous works. We finish with some conclusions and discussion in section 7.

2. Seat allocation for massive events

Seat allocation in massive events is characterized by three main components: ticket groups (TG), seats, and the distribution rules established by the organization. First we provide a description of all these features, and then we formulate the allocation problem.
Ticket groups (TG)

In the sporting events scenario, customers are provided with a set of tickets that are split into different ticket groups (TG). A ticket group is composed of the following attributes: request, TOG, customer id, amount of tickets, category, type, price type, dispersion and rank (see Table 1).
Table 1. Examples of ticket groups.
Table 2. Examples of seat configuration F1.
Request and TOG are both identifiers of the ticket. Category, type and price type relate to the kind of seat to which the ticket should be allocated. The dispersion attribute is a flag that is activated when the group is subject to a dispersion criterion (see distribution rules). Finally, the rank indicates the priority of the group in the assignment process.

Seats

The number of seats available for each event depends on the stadium, which is often divided into different categories and zones due to its huge dimensions. Each seat is characterized by the following attributes: zone, row, column, category, sector, type, status, price type, rank and assign. Table 2 shows an example of seats for the circuit of Figure 1. The zone, row and column are related to the real place in which the seat is physically located. The category, type, price type and rank mean the same as for the TG. The sector is related to the services provided in the zone. The status attribute is related to the visibility of the seat, usually standard, but it can also be obstructed (Obstr) or no visibility (Killed). Finally, the reserved attribute indicates whether the seat is already booked; this attribute makes it possible to allocate a semi-occupied stadium, or to allocate the entire stadium in different phases.
Figure 1. F1 racing scenario. Zones corresponding to seats marked by letters, from A to N.
Regarding the zones, their seats are laid out in several different forms. They can be represented as a matrix, with each cell representing a seat. The matrix can contain blocked cells when some seats do not exist due to the structure of the zone (i.e. non-square zones).
Distribution rules

Each organizing committee can establish a particular set of rules for the assignment of tickets to seats. However, a common rule is that each TG has to be assigned to seats of the same category and status. This rule implies that a preliminary filtering should detect possible overbooking situations. The remaining distribution rules are particular to each competition, they can be disabled, and some of them give the hints needed to produce optimal solutions. For example, the FIA rules are the following (a simple check of some of the row-level rules is sketched after Figure 2):
RO1: TG and seats should agree regarding the sector, type and price type.
RO2: Big TGs are divided into subgroups (TS) according to a given subgroup size, allowing some margin of deviation and a remainder. These parameters (group size, deviation and remainder) are provided for each event. Each subgroup inherits the attributes of the group (category, rank, etc.).
RO3: The ticket ranks should agree as much as possible with the assigned seat ranks.
RO4: A maximum and a minimum number of tickets of one group (having more tickets than the minimum) are allowed in the same row.
RO5: Never leave one ticket alone (of a group having more than one ticket) (see Figure 2 a).
RO6: If some tickets of a TS do not fit in any single zone, the TS can also be split while maintaining a minimum number of tickets in each part.
RO7: Two TSs of the same TG having the dispersion flag activated either cannot be assigned to the same zone, or can be assigned to the same zone if there is some distance between them (measured in number of seats).
RO8: Avoid leaving empty seats at the edge of rows (see Figure 2 b).
RO9: When not all the tickets have been sold, there should be a uniform distribution (sparsity) of the assigned seats in a given zone and in the overall scenario, in order to give the appearance that the zone and the entire stadium are fuller than they really are (see Figure 2 c).
Figure 2. Examples of good (X) and bad (Y) seat allocations. In (a) some tickets are left alone in a row; in (b) there are some empty seats at the edge of rows; in (c-right) the distribution of allocated seats is not uniform.
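As an illustration of how some of the row-level rules could be checked on a candidate allocation, the sketch below tests a single row against RO4, RO5 and RO8. The thresholds, the row encoding and the interpretation of RO8 (no empty seats at either end of a partially occupied row) are assumptions, since the exact parameters are provided per event.

```python
def row_violations(row, group_id, min_in_row=2, max_in_row=20):
    """Check a single row of seats against rules RO4, RO5 and RO8 for one group.
    `row` is a list of group ids (None for an empty seat); the thresholds are
    illustrative stand-ins for the per-event parameters."""
    issues = []
    count = sum(1 for s in row if s == group_id)
    if count == 1:
        issues.append("RO5: a single ticket of the group is left alone in the row")
    elif 0 < count < min_in_row or count > max_in_row:
        issues.append("RO4: tickets of the group in this row outside [min, max]")
    occupied = [s is not None for s in row]
    if any(occupied) and (not occupied[0] or not occupied[-1]):
        issues.append("RO8: empty seat(s) left at the edge of the row")
    return issues

# Illustrative check: group 7 has one isolated ticket and the row edge is empty.
example_row = [None, 3, 3, 7, 3, 3, 3, 3]
print(row_violations(example_row, group_id=7))
```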
The problem

Once the different components of our problem have been defined, namely the ticket groups, the seats (and zones), and the rules, the seat allocation problem can be defined as finding seats for each ticket of a group so that the rules are satisfied and optimized.

3. The fitness function

In order to operationalize the optimization process based on the optimization rules, we have defined the fitness function of a candidate solution, GF. This function tries to measure the distribution degree and fitness of the different groups in the
allocation, penalising the fact of leaving some tickets unassigned. It has been defined as follows:
where GTOS is the fitness of the groups (regarding their rank and joint fitness), GD is the sparsity of the groups (including both the in-zone and the overall sparsity degree), nu is the number of unassigned tickets, and pGTOS, pGD and pu are their weights. We cannot give the rest of the formulas due to lack of space; please see [7] for further details.

4. Methodology

The goal of our methodology is to provide an allocation of physical seats to the tickets of the TGs, optimizing the fitness function previously defined. Since we are dealing with a large-scale problem, our goal is to develop a method that is guaranteed to find a solution as soon as possible, and then to improve the solution as time passes; hence it is an anytime method. Regarding our problem, the first rule tells us that seat and ticket categories must agree. This constraint helps us to divide the problem into as many sub-problems as there are categories. Several allocation processes, one per category, can be run concurrently (see Figure 3). Each process deals exclusively with the data of its category, and therefore with a lower complexity than the global problem. The final solution is the union of the results obtained in each category.
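Since the exact formula for GF is not reproduced in the paper (see [7]), the sketch below shows one plausible weighted combination of the terms named above, in which unassigned tickets are heavily penalized; the weights and the linear form are assumptions.

```python
def global_fitness(gtos, gd, n_unassigned, p_gtos=1.0, p_gd=1.0, p_u=10.0):
    """Assumed linear combination of the terms named in the text.
    gtos: fitness of the groups (rank agreement and joint fitness), in [0, 1].
    gd:   sparsity of the groups (in-zone and overall), in [0, 1].
    n_unassigned: number of unassigned tickets, heavily penalized."""
    return p_gtos * gtos + p_gd * gd - p_u * n_unassigned

# Illustrative comparison of two candidate allocations.
print(global_fitness(0.8, 0.7, 0))   # everything assigned
print(global_fitness(0.9, 0.9, 3))   # better placement but 3 tickets unassigned
```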
Figure 3. Division of the search space in n parallel processes according to ticket categories.
The allocation of TGs to seats in a given category is based on a search algorithm that tries to obtain a first candidate solution as soon as possible. The inputs of the algorithm are the set of TSs, ts1, …, tsm, sorted according to their priority. Since the category is the same for all the members of the same assignment process, the key attribute is the rank. At each level of the search tree, a TS is assigned to seats corresponding to one zone or more zones (depending on the optional rules). If some TSs cannot be assigned anywhere, they are temporarily forgotten, and the algorithm continues with the rest of the TSs. The forgotten groups are treated at the end, when some optional rules can be relaxed in order to allocate them. Once a first solution is achieved, a local search method tries to improve it.

Region growing for seat assignment

In order to allocate seats to a TS, a method has been defined based on a common technique used in Computer Vision for image region segmentation: the region growing algorithm [8]. This algorithm roughly consists of sowing a seed in an image,
so that as the seed grows, it occupies all the pixels of a given region. We thought that such a method could be applied to our allocation problem if a TS is mapped to a region and a seat zone to the image. From this point of view, it is necessary to select a seed (that is, a seat in a given zone) for each TS, and then to grow the seed until all the tickets of the TS occupy seats. Consequently, the method that we propose is based on three steps: (1) select the zone whose rank best matches the TS rank and that has enough seats to allocate it; (2) select a seed; (3) grow the seed until all the tickets have been allocated. These steps are iterated until the TS has been entirely assigned. In the remainder of this section, all the steps are detailed (a simple sketch of the zone-selection step is given after figure 4).
Seed selection
Seed selection depends on the dispersion value of the TG and on the sparsity rule (RO9). The easiest case is when the dispersion attribute is off and the sparsity rule is not activated. Then the process consists of selecting as seed the empty seat of the zone with the highest rank. When the sparsity rule is on, the TS should be distributed widely in the zone. One way to achieve such a distribution is to compute a distance from the already assigned TS in the zone. However, this strategy has a deterministic behaviour that produces similar distributions over all the zones (see figure 4a). Conversely, a random seed selection provides a more uniform and clearly better distribution in each zone (see figure 4b). Hence, we use a random method.
Figure 4. (a) Distance-based sparsity (b) Random sparsity
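As a simple illustration of the zone-selection step, the following self-contained Python snippet picks, among the zones with enough free seats, the one whose rank is closest to the TS rank; the dictionary-based representation is hypothetical and not the authors' data model.

# Hypothetical zone data: each zone has a rank and a number of free seats.
zones = [
    {"id": "Z1", "rank": 1, "free": 120},
    {"id": "Z2", "rank": 2, "free": 40},
    {"id": "Z3", "rank": 3, "free": 300},
]

def select_zone(ts_rank, ts_size, zones):
    """Step (1): choose the zone whose rank is closest to the TS rank
    among those with enough free seats; return None if no zone fits."""
    candidates = [z for z in zones if z["free"] >= ts_size]
    if not candidates:
        return None
    return min(candidates, key=lambda z: abs(z["rank"] - ts_rank))

print(select_zone(ts_rank=2, ts_size=50, zones=zones))  # the zone dict with id "Z1"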
Seed growing up
Once a seed has been selected, that is, a ticket of the TS has been assigned to a seat of a zone, the remaining tickets of the TS should be allocated around it. This process is iterative: in each iteration one ticket is assigned to a seat. The seat is selected according to a neighbourhood policy. At the beginning, the seat is selected among the neighbours of the seed; in the second iteration, the seat is selected among all the neighbours of the previously allocated seats (the seed and the second seat), and so on until all tickets have been assigned. Thus, at each iteration, the seed growing up algorithm keeps a list of candidate seats (neighbours) among which the best seat is selected for a ticket (see figure 5; a minimal sketch of this growing loop is given after the figure). The selection method is based on keeping all the seats of the group together (grouping factor) and on respecting the seat attributes indicated by the distribution rules.
Figure 5. Seed growing up example. Cross circles are allocated seats, while grey cells are candidates (neighbours) for new ones.
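The growing step can be sketched as follows; this is a minimal, self-contained example under simplifying assumptions (seats on a rectangular grid, a 4-neighbourhood, and compactness as the only selection criterion), whereas the real system also weighs the distribution rules when choosing among candidates.

def grow(seed, needed, free_seats):
    """Grow a region of seats from a seed until `needed` seats are allocated.
    free_seats is a set of (row, col) tuples; returns None if growth fails."""
    neighbourhood = ((0, 1), (0, -1), (1, 0), (-1, 0))
    allocated = {seed}
    free = set(free_seats) - {seed}
    while len(allocated) < needed:
        frontier = {(r + dr, c + dc)
                    for (r, c) in allocated
                    for dr, dc in neighbourhood} & free
        if not frontier:
            return None          # the seed cannot grow enough: another seed is needed
        # keep the group compact: prefer the candidate with most allocated neighbours
        best = max(frontier, key=lambda s: sum(
            (s[0] + dr, s[1] + dc) in allocated for dr, dc in neighbourhood))
        allocated.add(best)
        free.discard(best)
    return allocated

# toy usage: a 3x4 block of free seats and a subgroup of 6 tickets
seats = {(r, c) for r in range(3) for c in range(4)}
print(sorted(grow((1, 1), 6, seats)))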
An important problem arises when the selected seed cannot grow enough to allocate all the tickets of the TS appropriately because, for example, some of the rules are not satisfied (see figure 6). In that case another seed should be selected for the group. This seed, however, can still be valid for another TS. Since the growing process is
costly, it is useful to know whether the seed is a bad choice for all the TS or not. The seed is therefore checked against all the TS of the same category; if it does not work for any of them, it is labelled as a no-good and no other TS will test it again.
Figure 6. Different results changing the seed of a TS.
Unassigned group treatment
Leaving some TS unassigned is acceptable when looking for a first approximation to the solution of the allocation problem. However, if there are unassigned groups, the fitness of the solution drops abruptly, since unassigned groups are heavily penalised. An additional treatment is therefore required to assign as many TS as possible. If a TS has been skipped, it means that there is no zone with enough seats to allocate it. The only way to obtain enough room in a zone is to undo the allocation of some of the TS and to check a new combination that reduces the resulting number of unassigned groups. The strategy we propose is to undo the assignments of TS close to free seats (undo-TS). The resulting free zone is then bigger and, eventually, unassigned groups can be placed there. The undone TS can in turn be tested in other zones. In each iteration of this step we therefore expect to decrease the number of unassigned TS while filling more seat zones.
Local search
The region growing method described above provides a first candidate solution to the problem, one for each category. Then, for each candidate, a local search algorithm is started in order to iteratively move to a better neighbouring solution. This local search is based on changing the assignments of the allocated TS in a zone in order to improve the overall fitness; a sketch of such a zone-level improvement loop is given after figure 7. Among the different trials in a zone, the best allocation is finally selected.
Figure 7. Example of finding space for unassigned TS.
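A minimal sketch of the zone-level local search, assuming a fitness(assignment) function is available and using the swap of the seat blocks of two equally sized TS as the neighbourhood move (both are simplifications of the authors' system):

import itertools

def improve_zone(zone_assignment, fitness):
    """Hill climbing: repeatedly swap the seat blocks of two TS in the same
    zone and keep any swap that improves the zone fitness."""
    best = dict(zone_assignment)               # ts_id -> list of seats
    best_fit = fitness(best)
    improved = True
    while improved:
        improved = False
        for a, b in itertools.combinations(list(best), 2):
            trial = dict(best)
            trial[a], trial[b] = trial[b], trial[a]   # swap the two seat blocks
            f = fitness(trial)
            if f > best_fit:
                best, best_fit, improved = trial, f, True
                break                           # restart from the improved solution
    return best, best_fit

# toy usage: a dummy fitness that prefers ts1 in lower-numbered rows
assignment = {"ts1": [(2, 0), (2, 1)], "ts2": [(0, 0), (0, 1)]}
fit = lambda a: -sum(row for row, _ in a["ts1"])
print(improve_zone(assignment, fit))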
Note that this process can be applied in parallel for any zone of each category.
5. Experimental Results
In order to validate our methodology experimentally, we have chosen the following configuration: stadiums from 5,000 to 50,000 seats; 5 categories; 5,000 ticket groups, with from 1 to 40 tickets each; distribution rule parameters: 40 tickets maximum in a subgroup, 10 tickets maximum in the same seat row, and the dispersion
flag is not activated. All the experiments have been carried out on a Pentium IV 3 GHz with 1 GB of RAM. Figure 8 shows an example of two zones with the seats assigned according to the first candidate solution step, and the same zones after improving the results with the unassigned-group treatment. In the first solution, 4,960 tickets have been assigned, while 40 remain unassigned; the fitness value of this first solution is 12.28. In the improved solution, all the tickets have been assigned and the fitness achieved is 87.08. More generally, figure 9 shows the fitness behaviour related to the number of tickets to be distributed (from 5,000 to 50,000) for both the first candidate solution and the improved solution. It is possible to see how the improvement step influences the results. However, such results are not achieved for free: they consume much more time than the first solution.
Figure 8. Results obtained in two blocks after improving the allocation of the first solution.
Figure 9. Fitness (left) and time (right) for the first candidate solution. The X axis is the number of tickets to assign and the Y axis is the fitness (left) or time (right) value.
Regarding execution time, there is a significant difference between the time required for the first solution and the time required for the improvements. The former is much lower: 6 minutes for 50,000 tickets. The local search, in contrast, is expensive, taking some hours. Although the time required for the local search is high, it has the advantage that it can be stopped at any moment, providing the best solution found so far; in this sense, the algorithm exhibits an anytime behaviour.
6. Related work
There is some related work on seat allocation, mainly in the airline domain [4, 5, 6]. Its goal is to optimize sales, while in our problem the key issue is to distribute the seats. Another important difference is the number of seats to allocate: while the capacity of a flight is at most around 500, in our seat allocation problem we are dealing with up to 100,000 seats.
Regarding the region growing technique, it has been applied to site allocation problems in which the allocation of multiple sites of different land uses to an area is optimized. For example, recent work by Aerts and colleagues [1, 2, 3] investigates integer programming techniques for integrating spatial decisions and resource allocation. We think our work is in line with this kind of research.
7. Conclusions
In this paper we have presented a methodology to deal with the seat allocation problem for massive events. In this kind of problem, groups of tickets with very different attributes (categories, ranks, status) have to be assigned to stadium seat zones, which are also characterized by categories, ranks, size, etc. In addition, the organizing committee imposes some distribution rules that should be satisfied in some cases and optimized in others. The methodology we propose takes advantage of the well-known region growing technique and provides a good first candidate solution for the allocation process. Then, in a subsequent step, a local search method is applied in order to improve the solution. The experimental results show that our methodology works well: even when dealing with a huge number of tickets (about 50,000), we obtain a realistic response time.
8. References
[1] J.C.J.H. Aerts, E. Eisinger, G.B.M. Heuvelink, T.J. Stewart: "Using Linear Integer Programming for Multi-Site Land-Use Allocation". Geographical Analysis, Vol. 35, No. 2, April 2003.
[2] J.C.J.H. Aerts, G.B.M. Heuvelink: "Using simulated annealing for resource allocation". International Journal of Geographical Information Science, vol. 16, no. 6, pp. 571-587, 2002.
[3] J.C.J.H. Aerts: "Using Linear Integer Programming for Multi-Site Land-Use Allocation". Geographical Analysis, Vol. 35, No. 2, April 2003, pp. 148-169.
[4] D. Bertsimas, I. Popescu: "Revenue Management in a Dynamic Network Environment". Transportation Science, 37(3):257-277, 2003.
[5] J-P. Côté, P. Marcotte, G. Savard: "A bilevel modeling approach to pricing and fare optimization in the airline industry". Journal of Revenue and Pricing Management 2, 2003, pp. 23-36.
[6] B. Freisleben, G. Gleichmann: "Controlling airline seat allocations with neural networks". Proceedings of the Twenty-Sixth Hawaii International Conference on System Sciences, 1993, vol. 4, pp. 635-642.
[7] V. Muñoz, M. Montaner, J.L. de la Rosa: "Spectator Distribution Algorithm". Career Final Project, 2005. URL: http://eia.udg.es/~vmunozs/pfc_einf/pfc_einf.html
[8] S.W. Zucker: "Region Growing: Childhood And Adolescence". Journal CGIP, pp. 382-399, 1976.
Solving the Response Time Variability Problem by means of metaheuristics† Alberto GARCÍA, Rafael PASTOR and Albert COROMINAS Institut d’Organització i Control de Sistemes Industrials (IOC), Universitat Politècnica de Catalunya, Barcelona, Spain
Abstract. The Response Time Variability Problem (RTVP) is a combinatorial NP-hard problem which has a wide range of real-life applications. It has recently appeared in the literature and has therefore not been widely discussed to date. The RTVP has been solved in other works by mixed integer linear programming (for small instances) and heuristics, but metaheuristic procedures have not previously been used. In this paper, a solution to the RTVP by means of multi-start, GRASP and PSO procedures is proposed. We report on our computational experiments and draw conclusions. Keywords: response time variability, fair sequences, scheduling, metaheuristics.
Introduction The Response Time Variability Problem (RTVP) consists in sequencing a list of products, events, clients and jobs in such a way that the variability in the time they spend waiting for their next turn to obtain the resources they need is minimized. This problem has recently been defined in the literature and to date very few papers have been published on the subject [1], [2], [3]. Corominas et al. [2] have proved that the RTVP is a combinatorial NP-hard problem and, with the exception of a few special cases, they have in fact found an optimum solution to the problem only for small instances. Therefore, solving the problem by means of heuristic and metaheuristic procedures is entirely justified. In this paper, a solution to the RTVP is put forward by applying the following three procedures: multi-start, GRASP (Greedy Randomized Adaptive Search Procedure) and PSO (Particle Swarm Optimization). The multi-start method is based on generating initial random solutions and on improving each of them to find a local optimum, which is usually done through a local search procedure. GRASP, designed by Feo and Resende [5] in 1989, can be considered to be a variant of the multi-start method in which the initial solutions are obtained using directed randomness. They are generated by means of a greedy strategy in which random steps are added and the choice of the elements to be included in the solution is adaptive.
† Sponsored by the Spanish Ministry of Education and Science's project DPI2004-03472; co-funded by the FEDER.
PSO is a metaheuristic procedure designed by Kennedy and Eberhart [6] in 1995. The original algorithm was designed for working with continuous functions of real variables and has obtained good results. Furthermore, it has recently been adapted for the purposes of working with combinatorial problems such as the travelling salesperson problem [7] or the flowshop problem [8]. In spite of these good results, there are not many PSO methods for solving combinatorial optimization problems. The remainder of this paper is set out as follows: Section 1 presents a formal definition of the RTVP; Section 2 briefly describes the methods used and how they were adapted to solve the RTVP; Section 3 explains how the values for the metaheuristic parameters were established; the computational results are shown in Section 4; and finally, the conclusions are put forward in Section 5.
1. Response Time Variability Problem (RTVP)
The Response Time Variability Problem occurs whenever products, clients or jobs need to be sequenced so as to minimize variability in the time between the instants at which they receive the necessary resources. The RTVP occurs in a wide range of real-life applications. For example, it is a common occurrence in the automobile industry in the sequencing of models [9] and in the Asynchronous Transfer Mode (ATM) when multimedia systems need to broadcast video or sound at a specific time [10]. These kinds of situations are often considered to be distance-constrained scheduling problems, in which the distance between any two given consecutive units of the same product is bounded. However, in the RTVP the aim is to minimize variability in the distances between any two consecutive units of the same product and to find a feasible solution that optimizes this objective.
The RTVP is formulated as follows. Let $n$ be the number of products, $d_i$ the number of units of product $i$ and $D$ the total number of units ($D = \sum_{i=1}^{n} d_i$). Let $s$ be a solution of an instance of the RTVP that consists of a circular sequence of units ($s = s_1 s_2 \dots s_D$), where $s_j$ is the unit sequenced in position $j$ of sequence $s$. For all products $i$ with $d_i \geq 2$, let $t_k^i$ be the distance between the positions in which units $k+1$ and $k$ of product $i$ are found (i.e., the number of positions between them). As the sequence is circular, position 1 comes immediately after position $D$; therefore, $t_{d_i}^i$ is the distance between the first unit of product $i$ in a cycle and the last unit of the same product in the preceding cycle. Let $\bar{t}_i$ be the average distance between two consecutive units of product $i$ ($\bar{t}_i = D / d_i$). For all products $i$ with $d_i = 1$, $t_1^i$ is equal to $\bar{t}_i$. The objective is to minimize
$$RTV = \sum_{i=1}^{n} \sum_{k=1}^{d_i} \left( t_k^i - \bar{t}_i \right)^2 .$$
For example, let $n = 3$, $d_A = 2$, $d_B = 2$ and $d_C = 4$; thus, $D = 8$, $\bar{t}_A = 4$, $\bar{t}_B = 4$ and $\bar{t}_C = 2$. A feasible solution is the sequence (C, A, C, B, C, B, A, C), where
$$RTV = \left[ (5-4)^2 + (3-4)^2 \right] + \left[ (2-4)^2 + (6-4)^2 \right] + \left[ (2-2)^2 + (2-2)^2 + (3-2)^2 + (1-2)^2 \right] = 2 + 8 + 2 = 12 .$$
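As a check on the definition above, the following short Python function (not part of the original paper) computes the RTV of a circular sequence; applied to the example it returns 12.

def rtv(sequence):
    """RTV of a circular sequence of product labels."""
    D = len(sequence)
    positions = {}
    for j, p in enumerate(sequence, start=1):   # positions are 1..D
        positions.setdefault(p, []).append(j)
    total = 0.0
    for p, pos in positions.items():
        d = len(pos)
        t_bar = D / d
        if d == 1:
            continue                            # t_1 = t_bar, so it contributes 0
        for k in range(d):
            if k + 1 < d:
                t = pos[k + 1] - pos[k]
            else:                               # wrap around the circular sequence
                t = D - pos[-1] + pos[0]
            total += (t - t_bar) ** 2
    return total

print(rtv(["C", "A", "C", "B", "C", "B", "A", "C"]))   # 12.0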
Corominas et al. [2] proved that the RTVP is NP-hard. The RTVP has been solved optimally by means of mathematical programming for instances of up to 40 units [3], and by means of heuristic procedures plus local optimization [2].
2. Multi-start, GRASP and PSO metaheuristic methods
2.1. Multi-start method
The multi-start method consists in generating random solutions, applying local optimization methods and preserving the best results. The pseudocode of the adaptation of the multi-start method is:
1. Let the value of the best solution found be Z = ∞.
2. While (elapsed time < execution time), do:
3.   Get a random initial solution X.
4.   Apply the local optimization to X and get X'.
5.   If value(X') < Z, then Z = value(X').
Random solutions are generated as follows. For each position from 1 to D in the sequence, we randomly choose which product will be sequenced, with a probability equal to the number of units of that type of product that remain to be sequenced divided by the total number of units that remain to be sequenced. The local optimization is applied as follows. A local search is performed iteratively in a neighbourhood that is generated by interchanging two consecutive units; the best solution in the neighbourhood is chosen; the optimization ends when no neighbouring solution remains that is better than the current solution.
2.2. Greedy Randomized Adaptive Search Procedure (GRASP) method
Like the multi-start method, GRASP consists in generating solutions, applying local optimizations and preserving the best results. However, the generation of solutions is performed by applying a heuristic with directed randomness, which is usually a random variation of a simple greedy heuristic. At each stage in the heuristic, the next product to be added to the solution is randomly selected from a list of candidates with a probability proportional to the value of an associated index. The pseudocode of the GRASP adaptation is almost the same as that of the multi-start method: the only difference is the way in which the initial solutions are obtained, which is as follows. For each position from 1 to D in the sequence, the product to be sequenced is randomly selected from the candidate list with a probability proportional to the value of its Webster index. This index, defined in [2], is as follows: let $\delta = 1/2$ and let $x_i^k$ be the number of units of product $i$ that have already been sequenced in the sequence of length $k$, $k = 0, 1, \dots$; the value of the Webster index of product $i$ to be sequenced in position $k+1$ is $d_i / (x_i^k + \delta)$.
The local optimization used is the same as the optimization used in the multi-start method. The size of the candidate list was set to 5 candidates. Both construction schemes are illustrated in the sketch below.
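The following self-contained Python sketch illustrates the two construction schemes described above (random proportional generation for multi-start and Webster-index-biased selection for GRASP). Forming the candidate list from the products with the highest Webster index is an assumption made here for illustration, not necessarily the authors' exact procedure.

import random

def random_sequence(demands):
    """Multi-start construction: pick each position proportionally to the
    number of units still to be sequenced. demands: {product: units}."""
    remaining = dict(demands)
    seq = []
    while any(remaining.values()):
        products = [p for p in remaining if remaining[p] > 0]
        weights = [remaining[p] for p in products]
        p = random.choices(products, weights=weights)[0]
        seq.append(p)
        remaining[p] -= 1
    return seq

def grasp_sequence(demands, list_size=5):
    """GRASP construction: choose among the best candidates with probability
    proportional to the Webster index d_i / (x_i + 0.5)."""
    remaining = dict(demands)
    placed = {p: 0 for p in demands}
    seq = []
    for _ in range(sum(demands.values())):
        candidates = [p for p in remaining if remaining[p] > 0]
        candidates.sort(key=lambda p: demands[p] / (placed[p] + 0.5), reverse=True)
        candidates = candidates[:list_size]
        weights = [demands[p] / (placed[p] + 0.5) for p in candidates]
        p = random.choices(candidates, weights=weights)[0]
        seq.append(p)
        placed[p] += 1
        remaining[p] -= 1
    return seq

print(grasp_sequence({"A": 2, "B": 2, "C": 4}))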
2.3. Particle Swarm Optimization (PSO) method
Kennedy and Eberhart designed the PSO metaheuristic by establishing an analogy to the social behaviour of flocks of birds when they search for food. Originally, this metaheuristic was designed to optimize continuous functions of real variables [6]. Due to its good performance, it has been adapted for the purposes of working with combinatorial problems [7], [8], [11]. In this kind of algorithm, the particles, which correspond to the birds, have a position (a feasible solution) and a velocity (the change in their position), and the set of particles forms the swarm, which corresponds to the flock. At each step in the PSO algorithm, the behaviour of a particle is the result of the combination of the following three factors: 1) to continue on the path that it is following, 2) to follow the best solution found by the particle itself and 3) to go to the best position found by the swarm. The formalization of this behaviour is expressed in the following two equations:
$$v_{t+1} = c_1 \cdot v_t + c_2 \cdot (BP_t - X_t) + c_3 \cdot (BSP_t - X_t) \quad (1)$$
$$X_{t+1} = X_t + v_{t+1} \quad (2)$$
where vt is the velocity of the particle at time step t; Xt is the position of the particle at time step t; BPt is the best position of the particle up to time step t; BSPt is the best position of the swarm up to time step t; and c1, c2 and c3 are the coefficients that weight the importance of the three types of decision. The values of coefficients c1, c2 and c3 are usually fixed in advance. To apply the PSO algorithm to the RTVP, the elements and the operations of equations (1) and (2) have to be defined.
2.3.1. Position of the particle
As mentioned above, a position represents a feasible solution. The position is represented by a D-length array that contains the sequence of D units.
2.3.2. Velocity of the particle
The expression (X2 – X1) represents the difference between two positions, and it is the velocity needed to go from position X1 to X2. This velocity is an ordered list of transformations (called movements) that must be applied to the particle so that it changes from its current position to the other one. Two types of movements, each of which has two variations, were considered. The first type of movement, called M1, is a pair of values (α / j). For each position s in the sequence X1, a check is conducted to determine whether the unit in this position s is equal to the unit in position s of sequence X2. If they are different, α is the unit in position s of X2 and j is position s. Thus, this movement denotes that the unit in position j must be exchanged for the first unit that is equal to α and that is to the right of position s. This concept is used to solve the CONWIP problem [11].
The second type of movement, called M2, is a pair of positions (i, j). These values indicate that the units sequenced in positions i and j have been exchanged. Two examples of the movements needed to move to position X2 (A-B-C-A-B-C-A-B-C) from position X1 (A-A-A-B-B-B-C-C-C) are shown below.
M1: movements (B/2), (C/3) and (C/6) are needed.
A-A-A-B-B-B-C-C-C → (B/2) → A-B-A-A-B-B-C-C-C → (C/3) → A-B-C-A-B-B-A-C-C → (C/6) → A-B-C-A-B-C-A-B-C
M2: movements (2,4), (3,7) and (6,8) are needed.
A-A-A-B-B-B-C-C-C → (2,4) → A-B-A-A-B-B-C-C-C → (3,7) → A-B-C-A-B-B-A-C-C → (6,8) → A-B-C-A-B-C-A-B-C
There would seem to be no difference between M1 and M2, but when two velocities are added (see Section 2.3.4) lists of movements that refute this may appear. The two variations for each movement are: 1) only the type of product is used to compare two units (this variation is called T and it is used in the examples above); and 2) the unit number is used to compare two units, and therefore a unit is only equal to itself (this variation is called F). For example, in the case of variation F, position A1-A2-A3-B1-B2-B3-C1-C2-C3 (in which the number next to each letter is a unit identifier for each product) is different to position A2-A1-A3-B1-B3-B2-C1-C2-C3, but in variation T the two positions are equal (both appear as A-A-A-B-B-B-C-C-C). The difference between two positions using variation F will always be greater than or equal to the difference when variation T is applied.
2.3.3. External multiplication of a coefficient by a velocity
The coefficients c1, c2 and c3 take values between 0 and 1. When a coefficient is multiplied by a velocity, it indicates the probability of each movement being applied. For example, if we multiply velocity [(B/2), (C/3), (C/6)] by coefficient 0.6, three random numbers between 0 and 1 are generated for comparison with coefficient 0.6; if the values are 0.3, 0.8 and 0.4, then movements (B/2) and (C/6) are applied, whereas movement (C/3) is not. The resulting velocity of the multiplication is therefore [(B/2), (C/6)].
2.3.4. Sum of velocities
The sum of two velocities is simply the concatenation of their lists of movements.
2.3.5. Sum of a velocity plus a position
The sum of a velocity plus a position gives the same result as applying each movement of the velocity to the position.
2.3.6. Pseudocode of the algorithm
1. Initiate the particles with random positions and empty velocities.
2. While (elapsed time < execution time), do:
3.   Update the best swarm position.
4.   For each particle:
5.     update its best position and apply the two PSO equations.
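A minimal Python sketch of these velocity operations for the M2 movement type with variation T; the representation and function names are illustrative, not the paper's code.

import random

def velocity(from_pos, to_pos):
    """Difference of two positions as an ordered list of M2 swap movements."""
    cur = list(from_pos)
    moves = []
    for s in range(len(cur)):
        if cur[s] != to_pos[s]:
            j = cur.index(to_pos[s], s + 1)   # first matching unit to the right
            cur[s], cur[j] = cur[j], cur[s]
            moves.append((s, j))
    return moves

def scale(coef, moves):
    """External multiplication: keep each movement with probability coef."""
    return [m for m in moves if random.random() < coef]

def apply_velocity(pos, moves):
    """Sum of a position plus a velocity: apply each swap in order."""
    cur = list(pos)
    for i, j in moves:
        cur[i], cur[j] = cur[j], cur[i]
    return cur

x1 = list("AAABBBCCC")
x2 = list("ABCABCABC")
v = velocity(x1, x2)                 # [(1, 3), (2, 6), (5, 7)] with 0-based indices
print(apply_velocity(x1, v) == x2)   # True
print(scale(0.6, v))                 # a random subset of the movements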
The random positions are generated in the same way as the random solutions in the multi-start method.
3. Fine-tuning the PSO parameters
Adapting metaheuristics to a specific problem does not end with the definition of the space of solutions or the local search; it is also necessary to set the parameters. The values of the parameters are vital because the results of the metaheuristic for each problem are very sensitive to them. Fine-tuning is very expensive and it is usually done by intuitively testing several values. For the purposes of this paper, we fine-tuned the parameters using a recent technique called CALIBRA [12]. CALIBRA is an automatic configuration procedure based on statistical analysis techniques (Taguchi's fractional factorial experimental designs) coupled with a local search procedure. A set of 60 representative instances was used to fine-tune the algorithms and a set of 740 instances was used to test them. The four parameters to be fine-tuned were the number of particles in the swarm and the coefficients c1, c2 and c3. The range of the values used to fine-tune the algorithms was [5,30] for the number of particles and [0,1] for the coefficients. CALIBRA needed 35 hours to fine-tune each algorithm.
4. Computational results
As described in Section 2.3.2, depending on the type of movement (M1 or M2) and the variation (T or F), we have four PSO algorithms (called M1-F, M1-T, M2-F and M2-T), as well as the multi-start algorithm and the GRASP algorithm. The algorithms ran 740 instances, which were grouped into four classes (185 instances in each class) depending on their size. The instances in the first class (called CAT1) were generated using a random value of D (number of units) between 25 and 50, and a random value of n (number of products) between 3 and 15; for the second class (called CAT2), D was between 50 and 100, and n between 3 and 30; for the third class (called CAT3), D was between 100 and 200 and n between 3 and 65; and for the fourth class (called CAT4), D was between 200 and 500 and n between 3 and 150. The algorithms were coded in Java and the computational experiments were carried out using a 3.4 GHz Pentium IV with 512 Mb of RAM. Firstly, the six algorithms were run for 50 seconds for each instance. Table 1 shows the averages of the RTV values to be minimized for each class of instances.

Table 1. Averages of the RTV values to be minimized
            PSO                                              Multi-start    GRASP
            M1F         M1T         M2F         M2T
CAT1        68.79       66.83       83.14       80.93        11.33          13.90
CAT2        445.55      509.89      604.27      517.05       48.10          91.64
CAT3        3050.38     4335.87     4488.44     3888.79      320.63         541.52
CAT4        28955.82    48917.80    37937.76    30029.34     79823.89       57041.74
In Table 1 it can be seen that the best results for the first three classes are given by the multi-start method, followed by the GRASP method, whereas the PSO algorithms yield the worst results. However, in the case of class CAT4, in which the instances are largest, the order is reversed: the four PSO algorithms yield better results than the GRASP method, and the multi-start method gives the worst results. The reason for this is that the multi-start method does not have time to locally optimize a single solution for 87.57% of the instances in the CAT4 class; this happens in the GRASP method for 84.32% of the instances. The second computational experiment consisted in locally optimizing the solutions that were obtained with the PSO algorithms in the first computational experiment. The optimization used was the same as the multi-start optimization; it stops after 50 seconds if the optimization has not been completed. Table 2 shows the averages of the RTV values obtained for each class of instances.

Table 2. Averages of the RTV values of the PSO locally optimized solutions
            M1F         M1T         M2F         M2T
CAT1        21.61       24.43       23.61       25.65
CAT2        67.42       89.75       77.56       95.14
CAT3        229.32      406.63      302.06      427.09
CAT4        15842.12    29604.35    20560.10    15537.62
The results obtained using M1F for the instances in class CAT3 after local optimization are better than the results obtained using the multi-start method. Moreover, the optimization times for the first two classes are negligible and the average time for the third class is between 4.26 and 5.84 seconds (using M1F and M1T, respectively). The instances in the first three classes were all locally optimized. However, there was not enough time to optimize all the instances in class CAT4: only 60 instances (32.43%) were locally optimized based on the solutions that were obtained using M1F. Finally, the six procedures were re-run for 200 seconds using the instances in class CAT4, which are the most difficult to solve. In the case of the PSO algorithms, 100 seconds were spent on obtaining a solution and a further 100 seconds, at the most, were spent on locally optimizing the previous solution. Table 3 shows the average of the RTV values obtained for class CAT4 (the values in parentheses were obtained using the PSO algorithms before local optimization was applied).

Table 3. Average of the RTV values of the CAT4 instances
            M1F            M1T            M2F            M2T            Multi-start    GRASP
            (24022.52)     (44697.30)     (36445.60)     (29838.01)
            8782.07        21432.13       14892.35       11984.25       39719.71       30020.35
The results show that all the PSO algorithms give better results than the multi-start and GRASP algorithms. In this last experiment, 97 instances (52.43%) were locally optimized after applying the M1F algorithm.
5. Conclusions and future lines of research
In this paper we have presented our solution to the RTVP (a problem that has not been widely researched to date), to which six algorithms were applied: one multi-start, one GRASP and four PSO. The results show that the best procedure is the multi-start method for small instances (between 25 and 100 units and between 3 and 30 products). However, for bigger instances (between 100 and 500 units and between 3 and 150 products) the four PSO algorithms are much better than the multi-start and GRASP methods, and the GRASP method is in turn better than the multi-start method. Moreover, as was to be expected, there is a significant improvement in the solutions obtained with the PSO algorithms once local optimization has been applied. Future research will consist in adapting new metaheuristic procedures, such as simulated annealing and tabu search.
References
[1] D. León, A. Corominas, A. Lusa, Resolución del problema PRV min-var, Working paper IOC-DT-I2003-03, UPC, Barcelona, Spain, 2003.
[2] A. Corominas, W. Kubiak, N. Moreno, Response time variability, Working paper IOC-DT-P-2004-08, UPC, Barcelona, Spain, 2004.
[3] A. Corominas, W. Kubiak, R. Pastor, Solving the Response Time Variability Problem (RTVP) by means of mathematical programming, Working paper IOC-DT, UPC, Barcelona, Spain, 2006.
[4] R. Martí, Multi-start methods, Handbook of Metaheuristics, Glover and Kochenberger (eds.), Kluwer Academic Publishers, pp. 355-368, 2003.
[5] T.A. Feo, M.G.C. Resende, A probabilistic heuristic for a computationally difficult set covering problem, Operations Research Letters, vol. 8, pp. 67-81, 1989.
[6] J. Kennedy, R.C. Eberhart, Particle swarm optimization, IEEE International Conference on Neural Networks, Australia, 1995.
[7] B. Secrest, Travelling salesman problem for surveillance mission using PSO, PhD thesis, Air Force Institute of Technology, Ohio, USA, 2001.
[8] C.J. Liao, C.T. Tseng, P. Luarn, A discrete version of PSO for flowshop scheduling problems, Computers & Operations Research, in press, corrected proof available online, 5 December 2005.
[9] Y. Monden, Toyota Production Systems, Industrial Engineering and Management Press, Norcross, GA, 1983.
[10] L. Dong, R. Melhem, D. Mossel, Time slot allocation for real-time messages with negotiable distance constraint requirements, Real-time Technology and Application Symposium, RTAS, Denver, 1998.
[11] C. Andrés, R. Pastor, J.M. Framiñán, Optimización mediante cúmulos de partículas del problema de secuenciación CONWIP, Eighteenth Conference on Statistics and Operations Research SEIO'04, Cádiz, Spain, 2004.
[12] B. Adenso-Díaz, M. Laguna, Fine-tuning of algorithms using fractional experimental designs and local search, Operations Research, vol. 54, no. 1, pp. 99-114, 2006.
Planning under temporal uncertainty in durative actions1
J. Antonio Alvarez, Laura Sebastia, and Eva Onaindia
Dpto. Sistemas Informáticos y Computación, Universidad Politécnica de Valencia, Camino de Vera s/n, 46022 Valencia, Spain
Abstract. Recently, planning under temporal uncertainty has become a very important and active field within the AI planning community. In this paper we present a new approach to handle planning problems with uncertain action durations. The novelty of this approach is that the management of the temporal uncertainty in durative actions is done independently of the planning process itself, which allows us to reduce the problem complexity. The experiments show that we are able to obtain a plan that satisfies the user requirements for all the tested cases. Consequently, this work can be considered a promising approach to handling this type of problem.
Keywords. Planning and optimization, Constraint satisfaction problems
1. Introduction
Over recent years, the AI planning community has devoted great effort to adapting and enriching planning techniques to solve real-world problems. Currently, the existing methods are able to solve problems that involve temporal issues or resource consumption. The standard PDDL language has been extended to consider these new functionalities in the form of durative actions and the introduction of numeric fluent expressions (PDDL 2.1 [5]). The information about action durations or resource consumption can be represented in two different ways: (1) by means of constant values such as (= ?duration 5) or (decrease (energy ?x) 10), and (2) through the values returned by the evaluation of an expression such as (= ?duration (/ (- 80 (energy ?x)) (recharge-rate ?x))) or (increase (energy ?x) (* ?duration (recharge-rate ?x))). However, we cannot always determine exactly the duration of an action or the amount of resources an action will need. In some cases there is an uncertainty associated with both the action duration and the resource consumption. For instance, we may know that loading a pallet onto a truck takes between 10 and 15 minutes and that the exact duration depends on external causes that are not modelled in the problem.
This problem of handling uncertainty in time and resource consumption was first introduced by Bresina et al. [6]. In that paper, the authors outline a class of problems, typical of Mars rover operations, that are problematic for current methods of planning. In particular, when planning the activities of a Mars rover, (1) actions can be concurrent and have differing durations, (2) there is uncertainty concerning action durations and the consumption of continuous resources like power, and (3) typical daily
1 This work has been partially funded by the Spanish government CICYT project TIN2005-08945-C06-06 (FEDER) and by the Valencian government project GVA06/096.
plans involve on the order of a hundred actions. This class of problems represents a new challenge for the planning community. Following the same research direction, Biundo et al. [2] have developed an application based on HTN planning that handles uncertain time consumption of actions in planning. This uncertainty is represented by means of continuous probabilities, which allows the construction of plans that are guaranteed to meet certain probability thresholds with respect to given time limits.
In this paper, we present a new approach to solve planning problems under temporal uncertainty. Similarly to Biundo et al.'s work, we handle problems where actions have an uncertain duration, and our application is aimed at building plans which satisfy the user requirements with respect to plan duration (makespan): a maximum makespan with a minimum probability; i.e. we can guarantee with a minimum probability that the makespan of the obtained plan will not exceed a makespan threshold. However, our approach has one main difference with respect to Biundo et al.'s work: our mechanism to handle uncertainty is independent of the planner; thus, this routine can make use of any temporal planner.
This paper is organized as follows. The next section details the functional behaviour of our approach. Section 3 summarizes the experiments we have performed and, finally, section 4 concludes and outlines some further work.
2. Description of our approach
We will introduce some basic definitions before presenting the features of our system. A planning problem with durative actions is a planning problem in the usual form P = <I, G, A>, where I is the initial state, G is a conjunctive goal and A is the set of durative actions that can be applied in the domain. A durative action a is a tuple a = (pre(a), add(a), del(a), dur(a)), where pre(a) is the set of the action's preconditions, add(a) is its add list and del(a) is its delete list, each of which constitutes a set of facts, and dur(a) is a fixed value2 that indicates the duration of the action a.
However, the planning problem we are dealing with is a problem with durative actions whose durations are uncertain. In order to represent uncertainty in the duration of the actions, we make use of continuous random variables, which are used in stochastics to model continuous events. We have selected normally distributed random variables to describe action durations. This is because most of the stochastic processes that deal with continuous random variables can be approximated using a normal probability distribution, as shown in the literature on this field. The definition of an action is then slightly modified as follows. Each action a in the domain is a durative action with a stochastic duration. More specifically, an action a is a tuple a = (pre(a), add(a), del(a), dur(a)), where pre(a), add(a) and del(a) have the same meaning as above and dur(a) = (μ, σ2) is the random variable that describes the uncertain duration of the action. The mean value μ of such a random variable represents the average amount of time consumed by the action, while the variance σ2 describes the uncertainty of its duration. We denote this type of action as a probabilistic durative action. We define a planning problem with probabilistic durative actions as a tuple P = <I, G, A, R>, where I is the initial state, G is a conjunctive goal and A is the set of
2 We consider a fixed value for the sake of simplicity. It could be easily substituted by an expression as shown in section 1.
probabilistic durative actions that can be applied in the domain. R is a pair of values (U, PU) which represents the user requirements. These requirements state that a solution plan for the problem P is now considered valid only if the probability of not exceeding the given duration limit U is greater than the user-defined threshold PU.
In the remainder of this section, we explain how our application works. A schema of the system is shown in figure 1. One of our goals is to handle uncertainty in the action durations independently of the planner. For this reason we have built a system that works in two stages:
1. Initialization phase: this phase transforms the probabilistic problem into a deterministic one so that it can be solved by any temporal planner.
2. Iterative phase: this stage first computes the solution plan for the deterministic problem. Afterwards, the problem is transformed back into the probabilistic context in order to compute the probability that the makespan of the current plan does not exceed U. If this probability satisfies the user expectations, the process finishes. Otherwise, a portion of the plan is removed and replaced by the result of a replanning mechanism.
These phases are detailed in the next subsections; a small sketch of the problem data structures is given below.
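A minimal sketch of how a problem with probabilistic durative actions could be represented; the field names and example numbers are illustrative, not the authors' data model.

from dataclasses import dataclass

@dataclass
class ProbDurativeAction:
    name: str
    pre: frozenset        # precondition facts
    add: frozenset        # add list
    dele: frozenset       # delete list ("del" is a Python keyword)
    mu: float             # mean duration
    var: float            # duration variance (sigma^2)

    def deterministic_duration(self):
        """mu + 3*sigma, as used in the initialization phase."""
        return self.mu + 3 * self.var ** 0.5

@dataclass
class ProbPlanningProblem:
    initial_state: frozenset
    goal: frozenset
    actions: list                          # list of ProbDurativeAction
    requirements: tuple = (100.0, 0.75)    # R = (U, P_U), example values

# example: an action taking roughly 10-15 minutes, modelled as N(12.5, 1.0)
load = ProbDurativeAction("load-pallet", frozenset(), frozenset(), frozenset(), 12.5, 1.0)
print(load.deterministic_duration())   # 15.5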
Figure 1. Schema of our system
2.1. Initialization phase
The goal of this first stage of our system is to transform the problem with probabilistic durative actions PP into a problem with deterministic durative actions PD. This is achieved by assigning a fixed duration di to each action ai so that we ensure that the action will take di time units with a probability close to 100%. Given a normally distributed random variable X with mean μ and variance σ2, stochastic literature shows that:
$$P(\mu - 3\sigma \le X \le \mu + 3\sigma) \approx 0.997$$
Therefore, it follows that $P(X \le \mu + 3\sigma) \ge 0.997$. This allows us to conclude that, given an action duration dur(a) = (μa, σ2a), which is normally distributed, the probability that this action a takes less than μa + 3σa is greater than 99.7%. We consider this value good enough to affirm that dur(a) = μa + 3σa is a deterministic duration. Our problem with probabilistic durative actions is converted into a problem with (deterministic) durative actions by assigning to each action the duration di = μa + 3σa. That is, given PP = <I, G, A, R>, where each a ∈ A is defined as a = (pre(a), add(a), del(a), dur(a)) with dur(a) = (μ, σ2), PP is converted into PD = <I, G, A, R> where dur(a) = μ + 3σ. The resulting PD problem can be solved by any temporal planner.
2.2. Iterative phase
This phase, as figure 1 shows, basically consists of the following steps:
1. First, the solution plan for the corresponding problem with (deterministic) durative actions is computed by the LPG [1] planner. We have selected this planner because it is one of the fastest planners nowadays and, moreover, it is capable of obtaining different solutions for the same problem.
2. Then, given the user requirements R = (U, PU), we compute the probability P that the makespan does not exceed U.
3. If P > PU then the process finishes.
4. Otherwise, a portion of the plan is removed and replaced by a different subplan.
Steps 2 and 3 are detailed below.
2.2.1. Step 2: Compute the probability P
This step is performed in the probabilistic environment, that is, we consider again the duration of each action as a normally distributed random variable. The random variable that describes the duration of a plan is the sum of the random variables of the single actions. Given n normally distributed random variables Xi = (μi, σ2i), stochastic literature shows that3
$$\sum_{i=1}^{n} X_i \sim N\!\left(\sum_{i=1}^{n} \mu_i,\ \sum_{i=1}^{n} \sigma_i^2\right).$$
The overall duration of a plan Pl = (a1, ..., an) is represented by a new random variable Π. Given the action durations dur(ai) = (μi, σ2i), provided that all these random variables are independent and using the formula above, we get that:
$$\Pi = \left(\mu_\Pi, \sigma_\Pi^2\right) = \left(\sum_{i=1}^{n} \mu_i,\ \sum_{i=1}^{n} \sigma_i^2\right) \quad (1)$$
However, in our case we work with a parallel plan and this formula cannot be applied directly to the plan as it is. In order to compute Π for a parallel plan Pl, it is necessary to find a sequence of actions from a "start" action to an "end" action that has a maximum duration. This sequence is called the critical path. At this point, we compute the critical path of the plan Pl returned by the LPG planner by using the algorithm described in [3]. Given that this plan was obtained by considering that the probability of the duration of each action is greater than 99.7%, we can
3 In case all Xi are independent variables.
affirm that the sequence of actions included in the critical path will take longer than any other sequence in any case. Let CP = (a1, ..., an) be the critical path of the plan Pl. Then the distribution of the random variable Π = (μΠ, σ2Π) that represents the makespan of the plan is computed by applying formula (1). Now it is easy to check whether this plan satisfies the user requirements R = (U, PU) or not by evaluating the following expression:
$$P(\Pi \le U) > P_U$$
In case the evaluation of this expression returns true, the process finishes. Otherwise, Step 3 is executed (a small numerical sketch of this check is given at the end of this section).
2.2.2. Step 3: Replanning
In case the probability that Π does not exceed U is not great enough, the plan needs to be rebuilt to increase this probability. We have developed a rewriting method which, basically, consists in selecting a part of the plan that can be (hopefully) replaced by a better subplan, i.e. a subplan that increases the probability that the global plan does not exceed U. We select two points in the critical path of the plan, the start point and the end point, that indicate which portion of the plan will be substituted. The start point and the end point can be selected by using different methods. In this work we have tested two preliminary methods. The first method is a random selection of the start and end points (random algorithm). The second method (fixed algorithm) consists in computing the cumulative probability of the actions in the CP, considering that the duration of a single action is calculated as U divided by the number of actions. This way, we obtain the probability for each action. If there is a decrease in the probability between two consecutive actions ai-1 and ai, the node ai is registered. When the probability between two consecutive actions aj-1 and aj increases, the node aj is also registered. We repeat this process for all the actions in the critical path and finally the pair of nodes (ai, aj) with the greatest difference is selected as start point and end point, respectively.
The start and end points also define the new planning problem that needs to be solved. The initial state of this problem is the state reached after executing the plan from the first action until the start point. The goal is obtained by applying the following process. The literals in the original goal G are added to a queue Q. The last action in the plan is visited, all the literals in the add list of this action are removed from Q and all its preconditions are added to Q. The remaining actions in the plan until the end point are visited backwards and again the literals in the add list are removed from Q and the preconditions are added to Q. When this process finishes, the goal of the new planning problem corresponds to the set of literals remaining in Q. Once this new planning problem has been solved, the resulting plan is inserted in the corresponding part of the original plan. Then the probability of this new plan is computed and evaluated, that is, we go back to Step 2.
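Step 2 reduces to evaluating a normal CDF over the critical path. A minimal sketch, assuming the (μ, σ2) pairs of the critical-path actions and the user requirements are already available (the numbers below are made up):

from math import erf, sqrt

def makespan_probability(critical_path, U):
    """P(makespan <= U), where critical_path is a list of (mu, var) pairs
    for the actions on the critical path (assumed independent and normal)."""
    mu = sum(m for m, _ in critical_path)
    var = sum(v for _, v in critical_path)
    z = (U - mu) / sqrt(var)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF

# hypothetical critical path of three actions and requirements R = (U, P_U)
cp = [(10.0, 4.0), (12.5, 1.0), (7.0, 2.25)]
U, P_U = 35.0, 0.75
P = makespan_probability(cp, U)
print(P, P > P_U)   # if False, Step 3 (replanning) would be triggered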
3. Results
The experiments we summarize in this section were performed with two main purposes: (1) checking whether our two-level approach (that is, considering the uncertainty in the duration of the actions separately from the planning itself) would be interesting, and (2) establishing a comparison between the two recommendation algorithms described in section 2. We executed our system to solve all the problems defined in the depots, driverlog and rovers domains of the IPC-03 (International Planning Competition 2002) [4]. In all cases a probability of 75% was requested, with different values of maximum duration. We ran a maximum of 20 iterations of the iterative phase of the process. We define:
- The percentage of successes as the number of plans obtained after the iterative phase that satisfy the user requirements, with respect to the number of problems executed for each domain.
- The percentage of improvements as the number of plans obtained after the iterative phase whose P is higher than the P of the original plan, with respect to the number of problems executed for each domain.
- The average of deleted actions as the ratio between the overall number of deleted actions and the number of solved plans.
Figures 2, 3 and 4 show the results of these tests with both recommendation algorithms. These figures show the probability obtained with each algorithm together with the number of actions deleted by re-planning.
Figure 2. Results for the Depots domain
The first point worth remarking is that our approach obtained at least one plan that satisfies the user requirements for all the problems executed (using either of the recommendation algorithms). Moreover, it obtained this solution in less than 1 second, which allows us to affirm that it is an interesting approach.
Figure 3. Results for the Driverlog domain
Figure 4. Results for the Rovers domain
Tables 1 and 2 show the percentage of successes and improvements and the average of deleted actions for the random and fixed recommendation algorithms, respectively. As can be observed in table 1, the random algorithm obtains a high number of successes and improvements in all domains. However, in the case of the fixed algorithm (table 2), the percentage of successes is lower, although it exceeds 50% in all cases. Also, a very high percentage of the modified plans improve the original plan. On the other hand, we must also consider that the number of deleted actions when the random recommendation algorithm is used is higher than when the fixed algorithm is used. Moreover, if we compare only the actions deleted in those problems where both algorithms found a valid solution, we find the same tendency. This means that the fixed algorithm is capable of obtaining a valid solution with much less effort. The question is: why does it not solve at least the same problems as the random algorithm? The answer is that the way of computing the recommendation with the fixed algorithm prevents the system from doing many iterations, that is, the recommendation usually points to the same part of the plan, whereas the random algorithm can modify any section of the plan.
4. Conclusions
This work represents a step forward in solving planning problems under temporal uncertainty. Our approach consists of a two-stage process: the first stage transforms the probabilistic problem into a deterministic one so that it can be solved by any temporal
planner, and the second stage rewrites the obtained plan in order to meet the user requirements. By considering the uncertainty in the duration of actions separately from the planning stage, the complexity of the problem is highly reduced and the solving process becomes much more efficient. The experiments show an important point: our approach always obtains a successful plan with either of the two algorithms. For all these reasons, we can conclude that this is a good starting point for dealing with this type of problem. With respect to further work, our first goal is to improve the recommendation algorithm, bringing together the benefits of the random and fixed algorithms introduced in this paper.
References
[1] A. Gerevini, A. Saetti, I. Serina. Planning with numerical expressions in LPG. In Proceedings of the 16th European Conference on Artificial Intelligence (ECAI-04), IOS Press, Valencia, Spain, 2004.
[2] S. Biundo, R. Holzer, B. Schattenberg. Project planning under temporal uncertainty. In Procs. Workshop on Planning and Scheduling: Bridging Theory to Practice at the 16th European Conference on Artificial Intelligence (ECAI'04), Valencia, Spain, 2004.
[3] V. Dubruskin. Lecture 6, elementary graph algorithms, course CS507 Data Structures and Analysis of Algorithms, 2001.
[4] M. Fox and D. Long. Domains and results of the third international planning competition, 2002. http://www.dur.ac.uk/d.p.long/competition.html.
[5] M. Fox and D. Long. PDDL 2.1: an extension to PDDL for expressing temporal planning domains. Technical report, University of Durham, UK, 2002.
[6] J. Bresina, R. Dearden, N. Meuleau, S. Ramkrishnan, D. Smith, R. Washington. Planning under continuous time and resource uncertainty: a challenge for AI. In Proceedings of the 18th Annual Conference on Uncertainty in Artificial Intelligence (UAI-02), pages 77-84, San Francisco, CA, 2002. Morgan Kaufmann Publishers.
Building a Local Hybrid Map from Sensor Data Fusion Zoe FALOMIRa, Juan Carlos PERISb, M. Teresa ESCRIGa Universitat Jaume I a Engineering and Computer Science Department b Languages and Computer Science Systems Department Campus Riu Sec, Castellón, E-12071 (Spain) [email protected], [email protected], [email protected]
Abstract. A local hybrid (qualitative + quantitative) map is developed in this paper. The concave and convex corners of the environment and a distance perception pattern, provided by a fuzzy-qualitative sensor fusion process, are included in the map. This map is the basis for solving the robot localization, path planning and navigation problem by using qualitative reasoning models. According to the model described in this paper, the size of the representation of the world is given only by the complexity of the environment. Keywords. Fuzzy set theory, sensor fusion, cognitive map, reference systems, autonomous robot navigation.
Introduction
To carry out successful autonomous navigation in complex environments, mobile robots need to build and maintain reliable maps of the world. In order to capture the main features of the environment, robots use different kinds of sensors (ultrasonic or laser sensors, cameras, bumpers, etc.), each one sensitive to a different property of the environment. However, these sensors provide only numerical information to the robot, which can also be ambiguous, imprecise and misleading. Therefore, sensor data fusion and integration methods are required to obtain an accurate interpretation of the robot's surroundings.
Sensor fusion models in robotics have been characterized in the literature by the use of probabilistic techniques: Bayesian methods [1], the Dempster-Shafer theory [2][3], Kalman filters [4], Kalman filters and fuzzy logic [5][6], etc. However, these methods have a high computational cost and usually provide a description of the world which is more accurate than is actually needed for the task to be performed. Moreover, methods based on known statistics are usually difficult to develop when the navigation depends on landmarks [7].
Qualitative methods are not commonly applied to multi-sensor data fusion. Only Reece and Durrant-Whyte's studies, which obtain a qualitative representation of the robot environment, can be found in the literature. In [8], walls, edges and corners of a room are obtained by detecting regions of constant depth, analysing sonar sensor reading cues qualitatively by using the Qsim simulator [9]. In [10], a qualitative representation of the surface curvature (convex, plane, close-concave, far-concave) is
204
Z. Falomir et al. / Building a Local Hybrid Map from Sensor Data Fusion
obtained by analysing data coming from the sensor cues and fusing them by combining a Kalman filter with the Dempster-Shafer evidential reasoning and a finite state machine. Depending on the sensor interpretation of the world features (qualitative or quantitative), different solutions have been developed in the literature for map building, which can be divided in to geometrical, topological and hybrid approaches. In accordance with [7], geometrical maps represent objects according to their absolute geometric relationship by using databases such as grid maps [11], line maps [5] or polygonal maps; whereas topological maps record the geometric relationship between the observed features rather than the absolute position with respect to a reference system [12]. However these approaches present some drawbacks: sometimes topological representations can be ambiguous; and, metric structures are easily influenced by sensor errors and they usually produce too high computational costs. To surmount these limitations, hybrid approaches appeared, which obtain a geometrical representation of the environment and then extract its main landmarks used to build a topological map, as it is done in [13]. In this paper, we first apply the general qualitative sensor fusion approach explained in [14] to a Pioneer 2TM1 with eight sonar sensors and a laser range scanner. Then, we obtain a distance perception pattern of the robot environment which is included in the hybrid representation for robot indoor navigation described in [13]. The remainder of this paper is organized as follows. Section 1 presents our fuzzyqualitative sensor fusion approach. Section 2 describes the inclusion of the distance perception pattern into our hybrid representation for indoor environments. Section 3 presents the simulation terms in the Webots simulator and the results obtained. Finally, in section 4, our conclusions and future work are explained.
1. Fuzzy-Qualitative Sensor Fusion Approach

Sonar and laser sensors provide complementary and compatible data for sensor data fusion. Furthermore, this fusion can overcome the reading problems of sonar sensors (specularity, crosstalk, wide beam width, etc.) and laser sensors (surface reflectivity – mirrors and glass go undetected –, light noise, etc.) and obtain a more accurate and robust interpretation of the robot environment. The fuzzy-qualitative approach presented in this section changes laser and sonar numerical readings into qualitative distances (real close, near and far) and later fuses them at feature level. Finally, it obtains a pattern that defines the distance perception of the robot by using qualitative distance labels, their certainty and their location range on the robot. In section 1.1, the fuzzy sets used to convert numerical readings into qualitative distances are described. In section 1.2, the qualitative distance arrays obtained are presented and the sonar array conversion and our fusion approach are explained.

1.1. Obtaining Qualitative Distances

By using the fuzzy sets described in Figure 1, we convert the sonar and laser numerical readings into qualitative distances. For each sensor reading, we calculate its certainty of
belonging to the fuzzy sets real close, near and far, and we select the qualitative distance that obtains the highest certainty. Two fuzzy sets (real close and near) have been defined to describe the proximity of the obstacle to the robot more accurately. Our arguments for choosing this representation are that (1) human beings can distinguish the distance between objects in a close region much better than in a far region; and (2) qualitative distances which involve more nearness imply faster reactions from the robot.

Figure 1. Fuzzy sets real close, near and far definition (certainty against distance, with limits at r, 3r, 5r, 6r, 9r and 12r, up to the sonar sensor maximum reach).
Each limit of the fuzzy sets depends on the radius of the robot (r), since, for an obstacle placed at a given relative distance, the smaller the robot is, the larger this distance seems to the robot and the further away the obstacle is perceived. Moreover, we also consider the maximum reach of the sonar sensors to be the highest limit of these fuzzy sets, so that the readings obtained from both sonar and laser sensors can be classified into one of the fuzzy sets. However, an alternative distribution of the fuzzy set limits could be chosen depending on the robot's physical appearance and the kind and reach of its sensors.

1.2. Fusing Qualitative Sonar and Laser Readings for Obtaining a Perception Pattern

Once the numerical sensor readings have been transformed into qualitative distances, we obtain a 180-reading array which includes all the laser readings and an 8-reading array corresponding to all the sonar sensors. Each element of these arrays contains information about the angular position from which the reading was taken, the qualitative distance related to the numerical reading obtained ('c' for real close, 'n' for near and 'f' for far) and the certainty corresponding to this qualitative distance. The laser reading array provides a description of the robot surroundings at a high granularity (180 readings), whereas the sonar reading array describes the same environment at a low granularity (only 8 readings in total, taken at intervals forty or twenty degrees wide). By grouping the readings in the sonar and laser arrays, we can distinguish zones which provide a description of the robot environment at the highest granularity level. Each zone can include more than one angular position. Therefore, for each kind of sensor, a pattern of zones with the same qualitative distance is obtained, as shown in Figure 2(a) and (b).
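As an illustration of the conversion step of section 1.1, the following C++ sketch assigns a qualitative distance label and a certainty to a single numerical reading. It is only a hedged sketch: the trapezoidal membership shapes, the breakpoints proportional to the robot radius r (loosely following the ticks of Figure 1) and the example values are assumptions, not the authors' implementation.

    #include <iostream>

    // Trapezoidal membership: 1 between b and c, linear ramps on [a,b] and [c,d].
    static double trapezoid(double x, double a, double b, double c, double d) {
        if (x <= a || x >= d) return 0.0;
        if (x < b)  return (x - a) / (b - a);
        if (x <= c) return 1.0;
        return (d - x) / (d - c);
    }

    struct QualitativeDistance {
        char label;        // 'c' = real close, 'n' = near, 'f' = far
        double certainty;  // membership degree in [0,1]
    };

    // Convert one numerical reading into a qualitative distance; r is the robot radius.
    // The breakpoints are assumptions made for illustration.
    QualitativeDistance toQualitative(double reading, double r) {
        double muClose = trapezoid(reading, -r,      0.0,     1.0 * r,  3.0 * r);
        double muNear  = trapezoid(reading, 1.0 * r, 3.0 * r, 5.0 * r,  6.0 * r);
        double muFar   = trapezoid(reading, 5.0 * r, 6.0 * r, 12.0 * r, 13.0 * r);
        QualitativeDistance q{'c', muClose};
        if (muNear > q.certainty) q = {'n', muNear};
        if (muFar  > q.certainty) q = {'f', muFar};
        return q;
    }

    int main() {
        double r = 0.25;                       // robot radius in metres (example value)
        double samples[] = {0.1, 0.8, 2.0};
        for (double d : samples) {
            QualitativeDistance q = toQualitative(d, r);
            std::cout << d << " m -> " << q.label << " (certainty " << q.certainty << ")\n";
        }
    }

With these assumed breakpoints the three sample readings are labelled real close, near and far respectively, each with certainty 1.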
Figure 2. Patterns of (a) sonar and (b) laser qualitative readings. In (c) the sonar pattern has been transformed into a 180-reading array to allow the fusion with the laser pattern.
In order to fuse the sonar and laser readings, we transform the sonar 8-reading array according to the following rules:

- Supposing that the beam of each sonar sensor covers a circular sector with an angle of 10 degrees, we transform each reading into a 10-degree interval with the same qualitative distance and certainty.
- In the case where two consecutive equal qualitative readings are obtained, a new interval is defined in the new array. The lower bound of that interval corresponds to the lowest angular location, whereas the upper bound corresponds to the highest angular position. The certainty of the lower and upper bounds of that interval is the original one obtained from the fuzzy sets. The rest of the certainties in that interval are calculated as the mean of the certainties corresponding to the lower and upper bounds.

The final sonar pattern, which can include void zones, is shown in Figure 2(c). However, these zones are filled in the final perception pattern as a result of their fusion with the laser readings.

Once we have both qualitative reading arrays at the same level of granularity, we proceed to the sensor fusion. The main parameters taken into account in this fusion process are the angular location of the reading and its corresponding qualitative distance label:

- If both sensor readings agree on the qualitative distance for the same angular location, our diagnosis is reinforced. In this case, the certainty corresponding to the final qualitative distance is calculated as the mean of both original certainties.
- Different qualitative distances for the same angular location indicate a disagreement between the sensor readings. This disagreement is solved by applying a conservative policy, which gives priority to the qualitative distance that involves more proximity to the robot (real close > near > far > void). In this case, the certainty corresponding to the qualitative distance selected by our policy is the original one obtained from the fuzzy sets.

The final distance perception pattern corresponding to the fusion of the patterns in Figure 2 is shown in Figure 3.
Figure 3. Final distance perception pattern: f over [0º, 21º], n over [22º, 52º], c over [53º, 144º], n over [145º, 165º] and f over [166º, 179º].
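The fusion rules above can be illustrated with the following C++ sketch, which assumes both qualitative arrays have already been expanded to one entry per degree (0º–179º). The type and function names (Reading, priority, fuse) are illustrative, not taken from the authors' controller.

    #include <array>
    #include <iostream>

    struct Reading {
        char label;        // 'c', 'n', 'f', or 'v' for a void (missing) sonar entry
        double certainty;  // certainty attached to the qualitative distance
    };

    // Priority of the conservative policy: real close > near > far > void.
    static int priority(char label) {
        switch (label) {
            case 'c': return 3;
            case 'n': return 2;
            case 'f': return 1;
            default:  return 0;   // 'v' (void)
        }
    }

    // Fuse two 180-entry qualitative arrays (sonar and laser) element by element.
    std::array<Reading, 180> fuse(const std::array<Reading, 180>& sonar,
                                  const std::array<Reading, 180>& laser) {
        std::array<Reading, 180> fused{};
        for (int angle = 0; angle < 180; ++angle) {
            const Reading& s = sonar[angle];
            const Reading& l = laser[angle];
            if (s.label == l.label) {
                // Agreement: keep the label, average the certainties.
                fused[angle] = {s.label, 0.5 * (s.certainty + l.certainty)};
            } else {
                // Disagreement: keep the reading implying more proximity,
                // with its original certainty from the fuzzy sets.
                fused[angle] = (priority(s.label) > priority(l.label)) ? s : l;
            }
        }
        return fused;
    }

    int main() {
        std::array<Reading, 180> sonar, laser;
        sonar.fill({'v', 0.0});
        laser.fill({'f', 0.9});
        sonar[90] = {'c', 0.8};                  // sonar sees something real close at 90 degrees
        std::array<Reading, 180> fused = fuse(sonar, laser);
        std::cout << fused[90].label << " " << fused[90].certainty << "\n";   // prints: c 0.8
    }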
2. A Hybrid Representation of the Robot Environment for Indoor Navigation

The hybrid representation of the world defined in [13] relates geometrical information with the relevant aspects of the environment in order to extract reference systems. We do not use a global reference system to merge that metric information into a map; instead, we extract local reference systems and join them by means of their relative orientations, constructing a geometrical representation. Then, on top of this geometrical information, a topological map is obtained. This description can be used by a qualitative reasoning process to carry out localization, local path planning and navigation. In this paper we add to this hybrid representation the perception pattern explained in the previous section. In section 2.1 the hybrid representation is presented. In section 2.2 the inclusion of the qualitative distances is explained.

2.1. The Hybrid Representation using Local Reference Systems

From the data provided by the laser sensor, we define a qualitative representation which identifies concave and convex corners as the distinctive points of the environment. The extraction of these points proceeds as follows. We start from the vector of distances given by the laser sensor and calculate the differences between adjacent distances, creating a new vector of differences. This vector of differences is filtered in order to eliminate erroneous measures. The filtering process calculates the similarity of each difference with respect to its neighbours, eliminating the discordant values. The number of neighbours examined is given by a window whose size is determined by the total number of measures. Afterwards, we extract, for each position in the vector of differences, the distinctive points, which will be used to define the reference systems. The set of distinctive points forms the current view, which represents the hybrid information extracted from a particular location of the robot in the environment (Figure 4). Each cell of this vector represents a sensor reading translated into a hybrid representation. The description of each distinctive point is given by a set of three elements <DPi, Di, Ai>, where DPi is the kind of distinctive point and the pair (Di, Ai) represents the polar coordinates where it has been located with respect to the robot.
Figure 4. An example of the vector which represents the current view of the robot (distinctive points DP1 at 15º, DP2 at 80º, DP3 at 120º and DP4 at 165º). The DPi symbols represent distinctive points.
The distinctive points extracted from the environment are used directly to define reference systems. A reference system is composed of two distinctive points which are neighbours in the current view. In the example shown in Figure 4, three reference systems are obtained: SR12 = (DP1, DP2), SR23 = (DP2, DP3) and SR34 = (DP3, DP4).
2.2. Local Reference Systems with Qualitative Distances

Once we have extracted the set of reference systems from the environment, we include the distance perception pattern obtained by the sensor data fusion. The result is a hybrid description of the surroundings of the robot given by a number of reference systems defined as SRij = <(DPi, qdi), (DPj, qdj)>, where

- DPi and DPj are the distinctive points of the reference system. Each distinctive point is composed of the pair (Di, Ai) representing the polar coordinates where it has been located with respect to the robot.
- qdi and qdj are the qualitative distances of the two distinctive points and are composed of the pair ({c, n, f}, certainty).
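One possible encoding of this hybrid description is sketched below in C++. The type names (DistinctivePoint, QualitativeDistance, ReferenceSystem, CurrentView) are illustrative assumptions, not identifiers from the authors' implementation; the sketch only declares the data structures.

    #include <vector>

    enum class CornerKind { Concave, Convex };

    // A distinctive point DP = (kind, distance D, angle A), located in polar
    // coordinates with respect to the robot.
    struct DistinctivePoint {
        CornerKind kind;
        double distance;   // D_i
        double angle;      // A_i, in degrees
    };

    // Qualitative distance qd = (label, certainty), with label in {c, n, f}.
    struct QualitativeDistance {
        char label;
        double certainty;
    };

    // Reference system SR_ij = <(DP_i, qd_i), (DP_j, qd_j)> built from two
    // neighbouring distinctive points of the current view.
    struct ReferenceSystem {
        DistinctivePoint dpI, dpJ;
        QualitativeDistance qdI, qdJ;
        bool open;   // true if there is free space between the two points
    };

    // The current view is the ordered set of reference systems extracted
    // from one location of the robot.
    using CurrentView = std::vector<ReferenceSystem>;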
In the model described in this paper, the number of reference systems extracted (and the corresponding size of the representation) only depends on the complexity of the environment. In addition, we can use the distance perception pattern to know whether a reference system is open or closed. An open reference system contains a free space between the two distinctive points, whereas a closed reference system is formed by an obstacle or a wall. Depending on the kind of distinctive points and the distance patterns obtained, we can distinguish three possibilities (Figure 5):

- Both distinctive points of the reference system are concave corners (Figure 5a). They can only represent a closed reference system, so the distance pattern between the two corners changes without jumps (e.g. it cannot go from close to far).
- Both distinctive points of the reference system are convex corners (Figure 5b). If the qualitative distances obtained from one corner to the other are equal, an object is present (corners 1-2 and 3-4). Otherwise, there is a free space between the two corners (corners 2-3).
- The two distinctive points are of different kinds (Figure 5c). If the qualitative distances between the two corners change from one label to a non-consecutive one (e.g. from close to far), there is a possible free space (corners 1-2). Otherwise, there is a closed reference system (corners 3-4).
Figure 5. Using the distance pattern and the distinctive points to decide whether a reference system is open or closed.
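A hedged reading of these three cases in C++ could look as follows; the enum, the helper consecutive and the ranking of the labels c, n, f are stand-ins introduced only to make the rule concrete, not the authors' code.

    #include <cstdlib>

    enum class CornerKind { Concave, Convex };

    // Rank the qualitative distance labels: real close < near < far.
    static int rank(char label) { return label == 'c' ? 0 : (label == 'n' ? 1 : 2); }

    // Two qualitative distances are consecutive if they differ by at most one step.
    static bool consecutive(char a, char b) { return std::abs(rank(a) - rank(b)) <= 1; }

    // Decide whether the reference system formed by two distinctive points is open
    // (free space between them) or closed (wall or obstacle), following the three
    // cases of Figure 5.
    bool isOpenReferenceSystem(CornerKind kindI, char qdI,
                               CornerKind kindJ, char qdJ) {
        if (kindI == CornerKind::Concave && kindJ == CornerKind::Concave)
            return false;                          // (a) two concave corners: always closed
        if (kindI == CornerKind::Convex && kindJ == CornerKind::Convex)
            return qdI != qdJ;                     // (b) equal distances -> an object is present
        return !consecutive(qdI, qdJ);             // (c) a jump (e.g. close -> far) -> possible free space
    }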
3. Simulation and Results of our Sensor Fusion Approach

We have implemented our approach in a C++ controller for a Pioneer 2 robot simulated in Webots v. 5.1.0, developed by Cyberbotics Ltd. (www.cyberbotics.com). We have modified the features of the simulated sonar sensors by introducing a 10% error in their readings, and light noise has also been introduced in the laser readings to make them fluctuate by ±1 cm along the scanning line, as happens in real situations. In Figure 6, the results obtained in a simulated situation are presented. In this case, four distinctive points have been found by the robot and three reference systems have been created: SR12 and SR34, which are considered open reference systems, and SR23, which is a closed reference system that detects an obstacle.
Figure 6. Example of extraction of a hybrid representation of the robot environment.
4. Conclusions and Future Work

In this paper we have presented a hybrid description of indoor environments given by a number of reference systems. The distinctive points of these reference systems consist of concave and convex corners combined with a distance perception pattern provided by a fuzzy-qualitative sensor fusion approach. Our aim is to obtain a representation useful for carrying out autonomous robot navigation using qualitative spatial reasoning. According to the model described in this paper, the size of the world representation depends on the complexity of the environment.

Further work on qualitative sensor data fusion will be carried out by our group in the near future. Firstly, we intend to find a method that allows the robot to build its own distance fuzzy sets dynamically from a close and a far reference given a priori. In this way, the robot could obtain its own qualitative distance perception as human beings do. The fuzzy sets built from the given references would depend on all the variables that affect human perception. They would also adapt to different levels of granularity depending on the distance reference system and the position of the target. Moreover, the complete application of this approach to the real Pioneer 2 is currently being accomplished. We will then test our expectations with respect to qualitative robot navigation and collision avoidance.
Finally, we are working on the development of a complete hybrid representation, in which all the information presented in this paper will allow the integration of several kinds of qualitative representation and qualitative reasoning processes.
Acknowledgments

This work has been partially supported by CICYT and Generalitat Valenciana under grant numbers TIC2003-07182 and BFI06/219, respectively. We would also like to thank all the members of the Cognition for Robotics Research Group at Universitat Jaume I for their support.
References

[1] Takamasa Koshizen. Improved Sensor Selection Technique by Integrating Sensor Fusion in Robot Position Estimation. Journal of Intelligent and Robotic Systems, 29: 79–92, 2000.
[2] Huadong Wu, Mel Siegel, Rainer Stiefelhagen, Jie Yang. Sensor Fusion Using Dempster-Shafer Theory. In Proceedings of the IEEE Instrumentation and Measurement Technology Conference, Anchorage, AK, USA, May 2002.
[3] Kyung-Hoon Kim and Hyung Suck Cho. Range and Contour Fused Environment Recognition for Mobile Robot. In Proceedings of the International Conference on Multisensor Fusion and Integration for Intelligent Systems, 183–188, Dusseldorf, Germany, August 2001.
[4] Albert Diosi and Lindsay Kleeman. Advanced Sonar and Laser Range Finder Fusion for Simultaneous Localization and Mapping. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1854–1859, Sendai, Japan, September-October 2004.
[5] Kazuyuki Kobayashi, Ka C. Cheok, Kajiro Watanabe, Fumio Munekata. Accurate Differential Global Positioning System via Fuzzy Logic Kalman Filter Sensor Fusion Technique. IEEE Transactions on Industrial Electronics, 45(3): 510–518, June 1998.
[6] J. Z. Sasiadek and Q. Wang. Sensor Fusion Based on Fuzzy Kalman Filtering for Autonomous Robot Vehicle. In Proceedings of the 1999 IEEE International Conference on Robotics & Automation, Detroit, Michigan, USA, May 1999.
[7] Moshe Kam, Xiaoxun Zhu, Paul Kalata. Sensor Fusion for Mobile Robot Navigation. Proceedings of the IEEE, 85(1): 110–119, January 1997.
[8] S. Reece, H. Durrant-Whyte. A Qualitative Approach to Sensor Data Fusion for Mobile Robot Navigation. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, 36–41, Quebec, 1995.
[9] Benjamin Kuipers. Qualitative Reasoning: Modelling and Simulation with Incomplete Knowledge. MIT Press, 1994.
[10] Steven Reece. Data Fusion and Parameter Estimation Using Qualitative Models: The Qualitative Kalman Filter. In QR-97, Proceedings of the Eleventh International Qualitative Reasoning Workshop, Cortona, Italy, 143–153, June 1997.
[11] Klaus-Werner Jörg. World modeling for an autonomous mobile robot using heterogenous sensor information. Robotics and Autonomous Systems, 14: 159–170, 1995.
[12] Emilio Remolina and Benjamin Kuipers. Towards a general theory of topological maps. Artificial Intelligence, 152: 47–104, 2004.
[13] Juan Carlos Peris and M. Teresa Escrig. Cognitive Maps for Mobile Robot Navigation: A Hybrid Representation Using Reference Systems. In Proceedings of the 19th International Workshop on Qualitative Reasoning QR-05, Graz, Austria.
[14] Zoe Falomir and M. Teresa Escrig. Qualitative multi-sensor data fusion. In Jordi Vitrià, Petia Radeva, Isabel Aguiló (Eds.), Recent Advances in Artificial Intelligence Research and Development, Frontiers in Artificial Intelligence and Applications, Vol. 113, pp. 259–266, IOS Press, ISBN 1-58603-466-9, October 2004.
Map Building including Qualitative Reasoning for Aibo Robots

David A. GRAULLERA (1), Salvador MORENO (1), M. Teresa ESCRIG (2)
(1) Dpto. Informática, Universitat de València, Burjassot, Valencia (Spain)
(2) Dpto. Ingeniería y Ciencia de los Computadores, Universitat Jaume I, Castellón (Spain)
Abstract. The problem of a robot navigating autonomously through its environment, building its own map and localizing itself in that map is still open. It is known as the SLAM (Simultaneous Localization and Map Building) problem. Most of the approaches to solve the SLAM problem divide the space into regions and compute the probability that the robot is in each region. We call them quantitative methods. The drawbacks of quantitative methods are their high computational cost and their low level of abstraction. In order to overcome these drawbacks, qualitative models have recently been used. However, qualitative models are non-deterministic. Therefore, the solution recently adopted has been to mix both qualitative and quantitative models to represent the environment and build maps. However, no reasoning process has been used so far to deal with the information stored in maps, so maps have been only static storage of landmarks. In this paper we propose a novel method for map building based on a hybrid (qualitative + quantitative) representation which also includes a reasoning process. Distinctive landmarks for map representation are provided by the cognitive vision and infrared modules, which compute differences between the data expected according to the current map and the information actually perceived. We store in the map the relative orientation information of the landmarks which appear in the environment, after a qualitative reasoning process, so the map is independent of the point of view of the robot. The simultaneous localization of the robot is solved by considering the robot as another landmark inside the map each time the robot moves. This map building method has been tested on the Sony AIBO four-legged robots in an unknown labyrinth made of rectangular walls. The path planning strategy has been to explore the environment by minimizing movements and maximizing the explored areas of the map.
Keywords. Qualitative reasoning, Simultaneous Localization and Map Building Problem.
1. Introduction

A current goal in research on autonomous systems is the simultaneous self-localization and mapping of unknown environments. This is known as the SLAM (Simultaneous Localization and Mapping) problem [1]. An autonomous mobile robot that solves the SLAM problem in all possible environments would be a truly autonomous system. To solve the SLAM problem, the navigation process of the robot must first be able to perform several related tasks, which can be illustrated by the answers to the following questions [2]:
- What should I remember? (mapping)
- Where am I? (localization)
- Where should I go? (path planning)
- How can I go? (motion control or navigation)
Acquiring and maintaining internal maps of the world is a necessary task to carry out successful navigation in complex environments. In this paper we solve the problem of map building for a mobile robot in an unknown labyrinth made of rectangular walls. The walls are distributed randomly, forming a labyrinth, and the robot is left inside with no knowledge of the environment. The robot has to explore the environment and build a map of it. Once the map is built, the robot is kidnapped and placed in another part of the labyrinth. The robot must then be able to localize itself in its map.
The literature contains many approaches for building maps of static, structured and relatively small environments. They can be divided into three main strategies: qualitative, quantitative and hybrid approaches.
Qualitative models focus on the boundaries of the objects, making more or less detailed divisions of the space. These approaches deal with imprecise information in a manner inspired by the cognitive processes used by humans. The qualitative concept of a topological map, which represents the world using nodes (places) and arcs (relations), has been used in several approaches, such as the one introduced by Kuipers [3]. Another model is defined by Freksa in [4], where schematic maps are used to reason about relative positions and orientations. Other qualitative models have been developed by several authors. Most of these qualitative models have been implemented mainly in simulation and have not proposed a complete solution to the SLAM problem.
Quantitative methods represent the environment by metrical information obtained by the sensors. The major exponent of this strategy is the grid-based model, introduced by Moravec and Elfes [5]. Quantitative models are affected by odometric and sensory errors. In recent years, many quantitative approaches have been developed using probabilistic techniques to cope with partial and inaccurate information. All of these approaches are based on implementations of the Bayes filter, such as the Kalman filter, hidden Markov models, partially observable Markov decision processes or Monte Carlo localization. A survey of these techniques can be found in Thrun [6][7].
Hybrid approaches handle qualitative and quantitative information, combining the best of each model. One of the first models for map building was
proposed by Thrun [8], which combines occupancy grids with topological maps. Other hybrid models can be found in numerous papers, such as Escrig [9]. Most hybrid models use probabilistic techniques to cope with partial information.
The work presented in this paper represents hybrid information in a map: quantitative data provided by the robot sensors (most of the time these data contain imprecision) and qualitative data. Moreover, our approach uses a qualitative reasoning process which allows us to address the four problems mentioned above: mapping, localization, planning and navigation.
2. Map Building Process

We suppose the robot is able to explore an area in front of it, in order to detect whether this area is free or occupied by one or more walls. Taking into account that the walls are straight, the robot must detect the distance and orientation of each wall which enters its exploration area. In the case of the AIBO robot, we have implemented a multisensory approach to explore this area, by using the TV camera of the robot and the infrared range sensor. Both the camera and the infrared sensor continuously explore the area in front of the robot, and return a free state if no wall is detected, or a distance and orientation for each of the walls detected in the area. The map building process takes this information as input, and produces as output the generated map and the orders given to the robot for exploring the environment, in terms of "walk XX centimetres" and "turn YY degrees".

2.1. Initialization

At the beginning, the robot map is empty and the robot detects nothing in the area in front of it, so it assumes the initial hypothesis that the scenario is composed of an infinite floor surface, without walls, with only one explored point: the current position of the robot. The robot starts walking ahead until the explored area in front of it (about one square metre in our tests) is occupied by a wall. This situation can be seen in snapshot 1 of Figure 1. Each step the robot takes is memorized in the database (only to make an approximate evaluation of the distance the robot walks while walking freely). When the robot detects a wall, it enters the wall following mode.

2.2. Wall following

When the robot detects an unknown wall, it starts following it by turning right. While it is walking along a straight wall, it stores the distance it walks as an approximate measure of the length of the wall. When it reaches a corner, it labels the corner and stores the approximate turning angle of the corner before starting to follow a new wall. This process can be seen in snapshots 2, 3 and 4 of Figure 1. Nevertheless, the process is not as simple as this. The imprecise information about robot position and orientation given by odometry makes it impossible to generate a map of the scenario like the one shown in Figure 1. In fact, the real map looks more or less like the one shown in Figure 2.
Figure 1. An example of a scenario.
Figure 2. The scenario map as built by the wall following process alone.

The map information stored in the database when the robot discovers point d is the following:

point(nonreal,p01). % this is the point where the robot first detects the wall
point(real,a,95).   % this is the first corner detected, with a measure of 95 degrees (instead of 90)
line(p01,a,3).      % this is the wall between p01 and a, with a measure of 3 robot steps (around 60 cm)
point(real,b,100).
line(a,b,3).
point(real,c,105).
line(b,c,1).
point(real,d,-85).  % external turn
line(c,d,1).
point(nonreal,p02). % this point will be erased when the robot discovers real point e
line(d,p02,1).

Taking into account the uncertainty in direction and length, we can easily see that it will be impossible to directly recognize corner i as corner a; it will therefore be seen as a different corner and the robot would loop forever around the room. This process ends when the following module, the hybrid shape pattern matching, proposes that corner a is corner i, so that the shape is totally explored.

2.3. Hybrid shape pattern matching

The hybrid shape pattern matching continuously monitors the output of the previous module, which can be seen in Figure 3. In this figure, we can see from left to right the movements of the robot, as a straight line for each straight step, an angle down for inner corners, and an angle up for external corners. The pattern matching process tries to detect cycles in the trace shown in Figure 3. The first hypothesis we can make is to suppose that corner a is the same as corner i, but the pattern matching process is only sure when corner d is revisited, as it is one of only two external corners of the scenario and it is very difficult to misrecognize it. Note that this hypothesis can be wrong; therefore it will be stated and maintained only if subsequent measures are compatible with it. If not, the hypothesis must be revised through a truth maintenance process, which we implement thanks to the backtracking mechanism of Prolog, the language in which we have implemented this algorithm.
Fig. 3. The hybrid pattern matching process (the trace of straight steps and corners a, b, c, ..., i, with the cycle hypothesis i = a).

Once the hypothesis has been stated, the scenario map is corrected under the assumption that corner a is corner i, and the position and orientation error between i and a is cancelled by splitting it into several minor corrections to angles and distances, in order to make point a and point i coincide, resulting in a map like the one shown in Figure 4.
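A minimal sketch of that correction step is shown below: it spreads the angular closure error uniformly over all corners of the detected cycle. The uniform split and the Corner data layout are assumptions made for illustration, since the paper only states that the error is cancelled through several minor corrections to angles and distances.

    #include <vector>

    struct Corner {
        double turnAngle;    // measured turning angle at the corner, in degrees
        double wallLength;   // measured length of the wall leaving the corner, in robot steps
    };

    // Correct the corners of a closed cycle so that the turning angles sum to the
    // expected total (+360 degrees for an inner contour, -360 for an outer one).
    // The closure error is split uniformly over all corners; distances could be
    // corrected in an analogous way.
    void distributeClosureError(std::vector<Corner>& cycle, double expectedTotal) {
        if (cycle.empty()) return;
        double measuredTotal = 0.0;
        for (const Corner& c : cycle) measuredTotal += c.turnAngle;
        double correction = (expectedTotal - measuredTotal) / static_cast<double>(cycle.size());
        for (Corner& c : cycle) c.turnAngle += correction;
    }

Weighting the corrections by wall length, instead of splitting the error evenly, would be an equally plausible variant of the same idea.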
Fig. 4. The scenario map corrected by the hypothesis a = i (which also makes b = j).

2.4. Inner/Outer area exploration

Once the shape of the scenario has been resolved, it is necessary to determine whether it is a shape enclosing the robot or a shape surrounded by the robot. This can be checked simply by testing whether the total turning angle is +360 degrees, in which case the robot is inside the shape, or -360 degrees, in which case the robot has surrounded the shape from the outside. Now it is necessary to split the unexplored area by using the two-dimensional qualitative orientation model of Freksa and Zimmermann [10]. The model defines a Reference System (RS) formed by two points, a and b, which establishes the left/right dichotomy. The fine RS adds two perpendicular lines through the points a and b. This RS divides the space into 15 qualitative regions (Figure 5a). An iconical representation of the fine RS and the names of the regions are shown in Figure 5b.
Figure 5. a) The fine RS and b) its iconical representation (regions lf, l, lm, ibl, bl; sf, idf, sm, ib, sb; rf, r, rm, ibr, br).
The information which can be represented by this RS is the qualitative orientation of a point object, c, with respect to (wrt) the RS formed by the point objects a and b, that is, c wrt ab. The relationship c wrt ab can also be expressed in five other ways: c wrt ba, a wrt bc, a wrt cb, b wrt ac and b wrt ca, which are the result of applying the inverse (INV), homing (HM), homing-inverse (HMI), shortcut (SC) and shortcut-inverse (SCI) operations, respectively. The composition of relationships for this model, which we call the Basic Step of the Inference Process (BSIP) on qualitative spatial orientation, is defined as follows: "given the relationships c wrt ab and d wrt bc, we want to obtain the relationship d wrt ab".
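For a quantitative point c, the relation c wrt ab can be computed from the side of the oriented line ab on which c lies and from the projection of c onto that line, as in the C++ sketch below. The textual zone names returned here, and their mapping onto the iconic labels of Figure 5 (lf, l, lm, ...), are assumptions for illustration only.

    #include <string>
    #include <iostream>

    struct Point { double x, y; };

    // Qualitative position of point c with respect to the fine RS formed by a and b
    // (c wrt ab). The side is decided with the sign of the cross product, the
    // longitudinal zone with the projection of c onto the ab direction. Assumes a != b.
    std::string cWrtAb(Point a, Point b, Point c, double eps = 1e-9) {
        double abx = b.x - a.x, aby = b.y - a.y;
        double acx = c.x - a.x, acy = c.y - a.y;

        double cross = abx * acy - aby * acx;          // >0: left, <0: right, ~0: on the line
        std::string side = (cross > eps) ? "left" : (cross < -eps) ? "right" : "straight";

        double len2 = abx * abx + aby * aby;
        double t = (acx * abx + acy * aby) / len2;     // projection of c onto ab (a -> 0, b -> 1)
        std::string zone = (t > 1.0 + eps) ? "front of b"
                         : (t > 1.0 - eps) ? "at b"
                         : (t > eps)       ? "between a and b"
                         : (t > -eps)      ? "at a"
                         :                   "behind a";
        return side + " / " + zone;
    }

    int main() {
        Point a{0, 0}, b{1, 0}, c{0.5, 0.5};
        std::cout << cWrtAb(a, b, c) << "\n";          // prints: left / between a and b
    }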
The idea is to split the unexplored area into several subareas in which the position of the robot does not vary with respect to any pair of corners of the scenario shape. As there are many possible subdivisions, we then select not all of them but only a few, chosen so that all the subareas are convex and the probability of recognizing the corners is maximized (for instance, corners d and e are preferred). In our case, the whole area is split into three convex regions: abcd, dahe and fehg. Then the robot starts walking along the borders of these regions (lines a-d and e-h). If the robot finds a wall while exploring the area, the new shape is explored by using the same procedure of wall following and hybrid pattern matching. If not, the subareas are explored until the map is complete.
3. Self-Localization Process

Once the robot has a map of the scenario, it is kidnapped and positioned at a random point of the same scenario. It first starts walking until it reaches a wall. Then it follows the wall in a way similar to the map building process, but now it applies the hybrid pattern matching procedure to the distances and corners of the walls it is following, until it finds a match with the map stored in its memory. At that point the qualitative position of the robot can be given with respect to every pair of points of the map. A simple way of testing that this has been achieved is to make the robot visit point a once it has localized itself in the map; reaching that point proves that the goal has been accomplished.
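A possible sketch of that matching step is the following: the observed trace of wall lengths and corner angles is searched for inside the stored cyclic map, allowing some tolerance in both quantities. The Segment type, the tolerance values and the name locateInMap are illustrative assumptions and are not part of the Prolog implementation described in the paper.

    #include <vector>
    #include <cmath>
    #include <cstddef>

    struct Segment {
        double wallLength;   // approximate length of the wall (e.g. in robot steps)
        double cornerAngle;  // turning angle at the corner that ends the wall, in degrees
    };

    // Try to locate a short observed trace inside the stored cyclic map of the
    // scenario. Tolerances are needed because lengths and angles are imprecise;
    // the default values are arbitrary examples.
    int locateInMap(const std::vector<Segment>& map,
                    const std::vector<Segment>& observed,
                    double lengthTol = 1.0, double angleTol = 15.0) {
        const std::size_t n = map.size();
        if (observed.empty() || observed.size() > n) return -1;
        for (std::size_t start = 0; start < n; ++start) {
            bool match = true;
            for (std::size_t k = 0; k < observed.size(); ++k) {
                const Segment& m = map[(start + k) % n];   // wrap around: the map is a cycle
                if (std::fabs(m.wallLength - observed[k].wallLength) > lengthTol ||
                    std::fabs(m.cornerAngle - observed[k].cornerAngle) > angleTol) {
                    match = false;
                    break;
                }
            }
            if (match) return static_cast<int>(start);     // index of the matched map segment
        }
        return -1;                                         // no match yet: keep following the wall
    }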
4. Results

This algorithm has been implemented on a four-legged Sony AIBO robot, connected wirelessly to a main host, where the program, implemented in Prolog with a sensor interface built in C++, reads the camera image and the infrared sensor, processes them to detect walls as straight lines with orientations, and orders movements of the head and body. The scenario has been built with boxes in a room, and the robot has managed to build the map correctly, although we found that phantom corners sometimes appear, with angles very close to zero, due to sensor recognition errors, which introduce errors in the map. Solving this problem is future work. Another problem appears when the AIBO is exploring the inner area, walking without detecting any walls, because the AIBO is not able to walk straight ahead without an external reference. It follows an unpredictable arc and can miss the direction by angles as wide as 30 degrees. Although the algorithm seems able to detect the arrival point at the opposite side of the room, it spends a lot of time walking up and down the walls until it discovers where it is, so that the exploration can continue. Perhaps a model that deals with the deficiencies of the robot, trying to detect the errors and predict them in future actions, could reduce this behaviour.
References

[1] F. Lu, E. Milios. "Globally consistent range scan alignment for environment mapping", Autonomous Robots, 4:333-349, 1997.
[2] T. S. Levitt, D. T. Lawton. "Qualitative Navigation for Mobile Robots". Artificial Intelligence, Vol. 44 (1990), 305-360.
[3] B. Kuipers. "Modelling Spatial Knowledge". Cognitive Science, 2 (1978), 129-153.
[4] C. Freksa, R. Moratz and T. Barkowsky. "Schematic maps for robot navigation". In C. Freksa, W. Brauer, C. Habel, & K. F. Wender (Eds.), Spatial Cognition II - Integrating abstract theories, empirical studies, formal models, and practical applications (pp. 100-114), Berlin: Springer, 2000.
[5] H. P. Moravec and A. Elfes. "High Resolution Maps from Wide Angle Sonar". Proc. IEEE Int'l. Conf. on Robotics and Automation, St Louis (1985), 116-121.
[6] S. Thrun, D. Fox, W. Burgard, F. Dellaert. "Robust Monte Carlo localization for mobile robots". Artificial Intelligence 128 (2001), 99-141, Elsevier.
[7] S. Thrun. "Robotic mapping: A survey". In G. Lakemeyer and B. Nebel, editors, Exploring Artificial Intelligence in the New Millennium. Morgan Kaufmann, 2002.
[8] S. Thrun. "Learning Metric-Topological Maps for Indoor Mobile Robot Navigation". Artificial Intelligence, Vol. 99, No. 1 (1998), 21-71.
[9] Escrig, M.T., Peris, J.C. "The use of a reasoning process to solve the almost SLAM problem at the Robocup legged league", IOS Press, Catalan Conference on Artificial Intelligence, CCIA'05, Oct. 2005.
[10] K. Zimmermann and C. Freksa. "Qualitative spatial reasoning using orientation, distance, and path knowledge". Proc. of the 13th International Joint Conference on AI, Workshop on Spatial and Temporal Reasoning, 1993.
Cognitive Vision based on Qualitative Matching of Visual Textures and Envision Predictions for Aibo Robots

David A. GRAULLERA (1), Salvador MORENO (1), M. Teresa ESCRIG (2)
(1) Dpto. de Informática, Universitat de València, Paterna, Valencia (Spain)
(2) Dpto. Ingeniería y Ciencia de los Computadores, Universitat Jaume I, Castellón (Spain)

Abstract. Up to now, the Simultaneous Localization and Map Building problem for autonomous navigation has been solved by using quantitative (probabilistic) approaches, at a high computational cost and a low level of abstraction. Our interest is to use hybrid (qualitative + quantitative) representation and reasoning models to overcome these drawbacks. In this paper we present a novel cognitive vision system to capture information from the environment for map building. The cognitive vision module is based on a qualitative 3D model which generates, through a temporal Gabor transform, the qualitative textures which the robot should be seeing according to the qualitative-quantitative map of the environment, and compares them with the real textures as seen from the robot camera. Different hypotheses are generated to explain these differences, which can be classified into errors in the textures obtained from the camera, which are ignored, and errors in the hybrid map of the environment, which are used to propose modifications of this map. The advantages of our model are: (1) we achieve a high degree of tolerance to visual recognition errors, because the input from the camera is filtered by the qualitative image generated by our model; (2) the continuous matching of the qualitative images generated by the model with the images obtained from the camera allows us to understand the video sequence seen by the robot, offering the qualitative model as the cognitive interpretation of the scene; (3) the information from the cameras is never directly used to control the robot, but only once it has been interpreted as a meaningful modification of the current hybrid map, allowing sensor independence. We have implemented the cognitive vision module to solve the Simultaneous Localization and Map Building problem for autonomous navigation of a Sony AIBO four-legged robot in an unknown labyrinth, made of rectangular walls with a homogeneous but unknown texture on a floor with a different texture.
Keywords. Cognitive vision, qualitative modeling, visual textures
1. Introduction

The problem of an autonomous mobile robot navigating through an unknown environment, building its own map and localizing itself in the map is still open. It is known as the SLAM problem [1]. Up to now, most of the approaches used for solving this problem have been based on quantitative (probabilistic) techniques with a high computational cost and a low level of abstraction. The first approach which solves the SLAM problem by using qualitative spatial reasoning is [2], [3]. In the case of robots whose main external input is video, this implies using traditional visual recognition techniques, which also have a high computational cost and a low level of abstraction, and which are unable to understand the image they see or to dismiss erroneous visual behaviour such as recognition errors, due to the lack of a cognitive model with which to understand the video sequence as a whole.
The discrimination of different kinds of textures is one of the main tasks of a visual system. Most natural images are composed of patches of textures that the human visual system perceives as uniform within a zone. Previous approaches for representing or discriminating texture features can be divided into:
- Global statistical approaches [4]: co-occurrence matrices, second-order statistics, numbers of local extrema, autoregressive moving average (ARMA) models, and Markov random fields.
- Spatial/spatial-frequency approaches [5]: including features obtained by computing Fourier transform domain energy, local orientation and frequency, spatial energy, the Wigner distribution and wavelet transforms.
The global statistical approaches have several disadvantages: changes in brightness along the spatial axis, or contrast changes due to non-uniform illumination, may hinder classification when global features are used. For this reason, our approach is based on one of the spatial/spatial-frequency methods, the Gabor wavelet transform. The analysis of textures by Gabor functions was first proposed by Turner [6], and subsequently by other authors [7].
In this paper we present a cognitive vision module, based on a hybrid qualitative-quantitative 3D model of unknown but structured environments, which generates the qualitative textures which the robot should be seeing, starting from a hybrid model of the environment that the robot builds while exploring it. The main idea is to compare the generated qualitative description of the video input with the real one, as seen from the robot camera after a preprocessing step based on the Gabor transform. The differences are analyzed by a specialized module which uses a qualitative envision graph to discriminate between camera errors and problems due to the lack of knowledge in the hybrid model of the environment. The deficiencies of the hybrid model are used as the main source of information to update the model, and are interpreted as events with a high level of abstraction and cognitive meaning.
2. Architecture

The overall architecture we are developing to solve the SLAM problem, and testing with AIBO robots, is mainly composed of four modules: the Qualitative Sensor Preprocessors (QSPs), the Qualitative Model of the environment (QM), the Temporal Database (TDB) and the Control Knowledge (CK), as can be seen in Figure 1.

Figure 1. General Architecture.

The main idea of our robot architecture is to represent the environment of the robot in the Temporal Database (TDB) by means of a hybrid qualitative-quantitative approach. The QSPs are modules which relate each of the robot sensors to the hybrid model, by means of a qualitative model which tries to predict the output of each sensor starting from the hybrid model and from the knowledge of the actions of the robot (which are generated by the CK and also stored in the hybrid model). The QSPs predict the inputs of the sensors and compare the predictions with the real inputs, analyzing the differences. While the differences are negligible, the QSPs do nothing; but if the difference is large, the QSP tries to explain it as a sensor fault or as an error in the hybrid model. In the latter case, the QSP proposes a change to the hybrid model to make the predictions consistent again with the observed inputs. The main objective of this paper is to introduce the QSP for cognitive vision we propose, hereinafter referred to as the Cognitive Vision Module, although the robot itself has three main QSPs: vision, infrared and accelerometers.

2.1. Cognitive Vision Module

The architecture of the Cognitive Vision Module can be seen in Figure 2. It is composed of four modules: the Video Transform module, the Qualitative Recognizer module, the Difference Hypothesis Generator module, and the Qualitative Model module.
Figure 2. Cognitive Vision Module Architecture.

The Video Transform module (VT) gets the video signal from the robot TV camera and performs a video transform whose coefficients are arranged as a 3D sequence of texture vectors. The Qualitative Recognizer (QR) module performs a pattern recognition process aimed at recognizing qualitative states from the texture vectors. The Qualitative Modeling (QM) module tries to model the qualitative behaviour of the TV input starting from the information in the hybrid model of the environment and the actions performed by the robot. The Difference Hypothesis Generator (DH) analyzes the differences between the outputs of the QR and the QM, and does nothing unless it can prove that the simplest explanation of the differences is an error in the hybrid model of the environment that the QM uses to generate its output. In this case it updates the hybrid model so that the QM and QR outputs agree again. It must be noted that the only information which the QSP gives as output is the set of updates to the hybrid model stored in the TDB, which are composed of elements with a high cognitive meaning.

2.2. Video Transform

The VT we have developed uses 3D Gabor functions as basic functions (Gabor functions defined on both spatial directions and also along time, which can measure both static and dynamic texture features), instead of the usual purely spatial 2D Gabor functions
seen in the literature. The inclusion of time allows us to detect moving textures and moving borders. In this work, we only take advantage of the ability to detect moving borders. The environment chosen for our work is structured but unknown. It is a labyrinth composed of a floor with a homogeneous and unknown texture, with walls of a different and also unknown texture. The borders between each wall and the floor and between each pair of adjacent walls are straight lines. Under these conditions, the output of the Gabor transform is a partition of the image space into areas separated by straight lines.

2.3. Qualitative Recognizer

The Qualitative Recognizer takes the output of the VT and classifies it into regions with homogeneous texture, separated by straight lines. Figure 3 explains this process.
Figure 3. Transitions among qualitative states in the envision graph.
In qualitative reasoning, the envision graph is a graph containing as nodes all possible qualitative states in which the system can be, and as arcs all possible transitions among these states. In our model, the states of the system can be snapshots or video sequences, and the arcs indicate which states can follow each state. A general restriction is that a snapshot can only be followed by a video sequence, and a video sequence can only be followed by a snapshot. This is similar to the Point-Interval-Point-… sequence which can be found in qualitative reasoning paradigms like [8], [9]. The output of the Video Transform, which can be seen as a video whose voxels are filled with vectors composed of Gabor moving-texture elements, is matched against the envision graph, resulting in an output composed only of the sequence of states of the envision graph which best matches the output of the Video Transform. The output is hybrid because this sequence of states is tagged, for each polygonal area, with the quantitative texture vector that best represents all the voxels contained in the area.
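The fragment of the envision graph used in the wall-approach example can be encoded as a small transition table, as in the C++ sketch below; the state names are illustrative descriptions rather than the authors' identifiers, and only the arcs of this sub-graph are included.

    #include <map>
    #include <set>
    #include <string>
    #include <iostream>

    // A fragment of the envision graph: video-sequence states alternate with
    // snapshot states (events), as in the sub-graph of Figure 3.
    using EnvisionGraph = std::map<std::string, std::set<std::string>>;

    EnvisionGraph makeWallApproachGraph() {
        EnvisionGraph g;
        g["floor texture"]                 = {"wall appears at top"};            // sequence -> snapshot
        g["wall appears at top"]           = {"wall/floor border moving down"};  // snapshot -> sequence
        g["wall/floor border moving down"] = {"floor vanishes at bottom"};
        g["floor vanishes at bottom"]      = {"wall texture"};
        g["wall texture"]                  = {};                                 // end of this fragment
        return g;
    }

    // A recognized transition that is not an arc of the envision graph is treated
    // as a visual recognition error and rejected.
    bool transitionAllowed(const EnvisionGraph& g,
                           const std::string& from, const std::string& to) {
        auto it = g.find(from);
        return it != g.end() && it->second.count(to) > 0;
    }

    int main() {
        EnvisionGraph g = makeWallApproachGraph();
        std::cout << transitionAllowed(g, "floor texture", "wall appears at top") << "\n";            // 1
        std::cout << transitionAllowed(g, "wall/floor border moving down", "floor texture") << "\n";  // 0: a wall cannot just disappear
    }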
Figure 3 shows the structure of the envision graph, although limited to only three nodes and two arcs (the minimum relevant to the example considered). The whole graph is too large to be shown here, although its structure is exactly the same as that of the subgraph shown here. The video sequence at the left represents a view of the floor by the TV camera of the robot while it is walking with the head pointing down (about 20 degrees below the horizontal). In this state, the robot can only see a homogeneous texture which should be the floor, so the state is labelled as Floor texture. This state represents, for instance, the image seen at the left, in which the robot sees the floor. This state can lead to a snapshot state in which the wall can be seen appearing at the front of the image, and following this snapshot, a new video sequence will follow in which we can see the wall, the border between the wall and the floor (a border which must be moving down), and the floor at the bottom. This state is the one which corresponds to the image at the centre. Finally, if the robot comes too close to the wall, the image on the camera is again homogeneous and corresponds to the wall, which is the state at the right. Please note that the floor and the wall are states which can be confused by using only visual recognition, but the use of the envision graph allows us to determine that a state from which we exit through the appearance of a new qualitative region at the upper side must be floor, and that a state which we enter, or exit, by the disappearance or appearance, respectively, of an area at the bottom, should be a wall state. So the model manages to give meaning to the video input by means of the envision graph, and not from the video input alone.

2.4. Qualitative Modeling

The QM module uses the hybrid model of the environment to predict the output of the Qualitative Recognizer, without using any information from the TV camera. Let us illustrate how it works with the example shown in Figure 4.
Figure 4. Qualitative representation of the point of view of the robot walking towards a wall (events 1 and 2, with border points e1a-e1b and e2a-e2b).
The behaviour shown in Figure 4 is generated from the knowledge, extracted from the TDB, that the robot is walking directly towards a wall with the head pointing down. (It must be noticed that this knowledge is absent in the former module, whose only input is the Video Transform of the TV video input, so the two models are different.) In that case, the QM predicts that the current state is a floor texture video sequence, which will change, at a snapshot which we name event 1, to the video sequence state with the wall texture up and the floor texture down, split by a line moving down (the line e1a-e1b). During the wall/floor texture video sequence, the position of the border varies continuously from up to down, until the bottom is reached and the floor state disappears. At this point we have a second snapshot, which we call event 2, where the border which appeared at e1a-e1b disappears at e2a-e2b. The following and last video sequence state will be the wall texture state, in which the visual prediction finishes. Additionally, the model predicts the collision of the robot with the wall, but this fact does not affect the visual prediction we are discussing now. It is important to note that the predictions given agree with the output of the QR, but we also have a cognitive explanation of the whole video sequence. We know that the robot is approaching a wall, we know that one area is the floor and another is the wall, and that the border between them must be moving down. We have not only a video prediction to match with the QR, but a video prediction with cognitive meaning, useful for reasoning.

2.5. Difference Hypothesis Generator

The Difference Hypothesis Generator (DH) continuously matches the QR output against the QM output. Three things can happen. First, both outputs can agree, in which case the DH does nothing. Second, the QR can fail, in which case the QR information is rejected, treating the difference as a faulty video recognition. Third, the QM can fail, so the DH must update the hybrid model of the environment with a meaningful event to take the new information into account. An example of an error in the QR arises when the difference between the texture of the floor and the texture of the wall is very small. In this case the system can detect a wall as part of the floor, thus giving an erroneous state. But the envision graph tells us that a wall cannot disappear, as there is no direct arc from a wall/floor state with the border moving down to the floor state, so the TV input must be rejected. A more interesting example is when the robot walks towards an unexplored area, for which the QM predicts a continuous floor texture video sequence state, as it has no knowledge of the presence of a wall. Suddenly, the wall appears in the QR as a correct transition which is not reproduced by the QM. The DH must update the hybrid model of the QM with the information about the newly discovered wall, and the QM will immediately predict the right behaviour, thus making the QR and the QM match again. Note that the only output of the Cognitive Vision module towards the hybrid model of the environment has been an event indicating that a new wall has been discovered and must be added to the model. This is information with a high level of abstraction.
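The decision rule of the DH can be summarized as in the following C++ sketch, where the two boolean arguments abstract the checks described in the text; the enum and function names are illustrative assumptions, not the authors' code.

    #include <iostream>

    enum class DhAction { DoNothing, RejectSensorInput, UpdateHybridModel };

    // Decision rule of the Difference Hypothesis generator: if prediction and
    // recognition agree, nothing happens; if the recognized transition is not an
    // arc of the envision graph, the camera input is treated as a recognition
    // error; otherwise the hybrid model is assumed to be incomplete and is updated.
    DhAction differenceHypothesis(bool outputsAgree, bool recognizedTransitionIsLegal) {
        if (outputsAgree)                  return DhAction::DoNothing;
        if (!recognizedTransitionIsLegal)  return DhAction::RejectSensorInput;
        return DhAction::UpdateHybridModel;   // e.g. add a newly discovered wall to the map
    }

    int main() {
        // A legal but unpredicted transition (a wall appears in an unexplored area):
        DhAction a = differenceHypothesis(false, true);
        std::cout << (a == DhAction::UpdateHybridModel) << "\n";   // prints 1
    }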
3. Conclusions

In this paper we have developed a cognitive vision module, based on a qualitative 3D model which generates a qualitative-quantitative map of the environment by storing qualitative textures obtained through temporal Gabor transforms. The robot compares the map with the real textures as seen from the robot camera. Different hypotheses are generated to explain these differences, which can be classified into errors in the textures obtained from the camera, which are ignored, and errors in the hybrid map of the environment, which are used to propose modifications of this map. The advantages of our model are:
- We achieve a high degree of tolerance to visual recognition errors.
- The continuous matching of the qualitative images generated by the model with the images obtained from the camera allows us to understand the video sequence.
- The information from the cameras is never used to control the robot directly, but only through the updating of the hybrid model of the environment, allowing sensor independence.
- As future work, it is possible to extend the model to moving objects, predicting uniform straight-line movement as the normal case and focusing on alterations of this movement, which are easy to detect as differences between predicted and observed textures.
References

[1] F. Lu, E. Milios. "Globally consistent range scan alignment for environment mapping", Autonomous Robots, 4:333-349, 1997.
[2] Escrig, M.T., Peris, J.C. "The use of a reasoning process to solve the almost SLAM problem at the Robocup legged league", IOS Press, Catalan Conference on Artificial Intelligence, CCIA'05, Oct. 2005.
[3] Escrig, M.T., Peris, J.C. "The use of Qualitative Spatial Reasoning to solve the Simultaneous Localization and Map Building problem in non-structured environments", VII Jornadas de Trabajo ARCA, JARCA'05.
[4] R. M. Haralick. "Statistical and structural approaches to texture", Proc. IEEE, vol. 67, pp. 786-804, 1979.
[5] T. D. Reed, H. Wechsler. "Segmentation of textured images and Gestalt organisation using spatial/spatial-frequency representations". IEEE Trans. on PAMI, vol. 12, pp. 1-12, 1990.
[6] M. Turner. "Texture Discrimination by Gabor Functions", Biol. Cyber., vol. 55, pp. 71-82, 1986.
[7] A. K. Jain, F. Farrokhnia. "Unsupervised texture segmentation using Gabor filters", Pattern Recognition, vol. 24, pp. 1167-1186, 1991.
[8] Kuipers, B. "Qualitative Simulation", Artificial Intelligence 29, pp. 289-338, 1986.
[9] De Kleer, J., Brown, J.S. "A Qualitative Physics based on Confluences", Artificial Intelligence 24, pp. 7-83, 1984.
Assessing the aggregation of parameterized imprecise classification

Isabela DRUMMOND (a), Joaquim MELENDEZ (b) and Sandra SANDRI (c),1
(a) LAC-INPE, Brazil
(b) IIA/UDG, Spain
(c) IIIA/CSIC, Spain
Abstract. This work is based on classifiers that can yield possibilistic valuations as output, which may have been obtained from a labeled data set either directly, by possibilistic classifiers, by transforming the output of probabilistic classifiers, or else by adapting prototype-based classifiers in general. Imprecise classifications are elicited from the possibilistic valuations by varying a parameter that makes the overall classification become either more or less precise. We discuss some accuracy measures to assess the quality of the parameterized imprecise classifications, thus allowing the user to choose the most suitable level of imprecision for a given application. Here we particularly address the issue of aggregating parameterized imprecise classifiers and assessing their performance.

Keywords. classification, imprecision, fuzzy sets theory, possibility theory.
1. Introduction

Usually a classifier assigns a single class from a set of classes C to each element of a set of objects Z that one wishes to classify. Some systems are capable of yielding a valuation in a given uncertainty model, e.g., "there is a probability of .3 that z0 belongs to class c1 and .7 that it belongs to c2". In between these two extremes we can think of classifiers that yield an order of preference on the set of classes, e.g. "c1 is better than c2 as the class of z", or simply a subset of C, e.g. "the class of z0 is either c1 or c2". But, in general, the user receives precise results. Presented with an object, the classifier will assign it a single class, and when there exists equal evidence favoring a set of distinct classes C' ⊆ C, a unique class is usually taken completely at random from set C'. The usual means to assess the classification results is also very precise: the ratio between the number of objects correctly classified and the total number of objects, called the classifier accuracy, or its counterpart, the error rate. The accuracy rate obtained for the set used to train a classifier can be used as an estimation of the real accuracy that will be obtained once the objects of interest are presented to the trained classifier.
1 Correspondence to: Sandra Sandri. IIIA/CSIC, Campus UAB, 08193 Bellaterra, Spain; Tel.: +34 93 580 9570; Fax: +34 93 580 9661; E-mail: [email protected].
However, in many real-world applications, it is more useful to trade uncertainty about the results for imprecision. For example, let us suppose that the accuracy obtained on a given training set is .85. If the training and test data sets are closely related, it is then reasonable to expect an error of 15% also in the classification of the test set. If a wrong classification may cause a great loss, then it might be more useful to obtain as result a reduced set of classes, containing the correct class, as long as the error expectation decreases significantly. In [4] we described a framework for the elicitation and assessment of imprecise classification. In this framework, the elicitation consists in deriving the level cuts from normalized possibility distributions on the potential classification of a given element of interest. Imprecise classifiers are obtained as we vary the level cut value α ∈ ]0,1]. As the value of parameter α increases, the classification tends to be more precise but less accurate. Many classifiers have discriminant functions from which possibility distributions can be obtained (see [9] for a review on fuzzy classifiers). The possibility distributions can also be obtained from any classifier based on a set of prototypes (e.g. obtained by clustering the training set) by creating the possibilistic distributions as a function of the Euclidean distance to the prototypes (e.g. using a k-nn algorithm). Also, the approach can be used for classifiers that yield probability distributions on the set of classes, by applying a transformation that produces a possibility distribution. In this framework, the assessment is made by means of a set of quality measures related to the accuracy of the imprecise classification obtained for the elements in a given data set and the cardinality of the obtained classification. In the present work we focus on the aggregation of imprecise classifications, in particular on extending the accuracy measures to deal with the aggregated results. The paper is organized as follows. In Section 2, we first discuss imprecise classifiers and present indices to assess the quality of their outputs. Then in Section 3 we study a family of parameterized classifiers that all start from a classifier that is capable, somehow, of yielding a possibility distribution on the classes. Section 4 addresses the aggregation of parameterized imprecise classifiers and Section 5 brings the conclusion.
2. Assessing imprecise classification
Let C = {c1, …, cm} be a set of classes. Let Z be a labeled data set: each z ∈ Z is a vector of size n, whose value zi in the i-th position is related to an attribute Ai defined over a domain Ωi. For simplicity, we assume ∀i, Ωi = R. The class label of z is denoted by l(z) ∈ C or, more simply, by c* ∈ C. A classifier is defined as a mapping D: R^n → C [10]. In the canonical model [1], there exist m discriminant functions gi: R^n → R, each of which computes a score relative to class ci. Usually, ∀z ∈ R^n, D(z) = arg maxi gi(z), i.e., the classifier assigns z to the class with the highest score, and ties are broken randomly. Here we shall call a valuation the distribution v(z): C → R that contains the class scores calculated for a given point z, defined as v(z)(ci) = gi(z). When there is no ambiguity, we shall use v(ci) instead of v(z)(ci).
Here we will address classifiers that assign a subset of C to any given z ∈ R^n. An imprecise classifier is a mapping class: R^n → 2^C; when |class(z)| = 1, we say the class is
pure, and when |class(z)| > 1 we say the class is compound.2 In other words, an imprecise classifier is a classifier that yields the class of z as a crisp subset of the total set of classes C.
Table 1. Example of imprecise confusion matrices for precise (D1) and imprecise (D.4) classifiers.
A confusion matrix summarizes the results obtained by a classification: each entry ai,j denotes the number of elements in a data set that belong to class ci and that have been classified as belonging to class cj. For an imprecise classification we need an imprecise confusion matrix; we still have m rows, one for each class in C, but 2^m columns, one for each set in the powerset of C, including the empty set. Table 1 presents two examples of an imprecise confusion matrix; the one from classifier D1 can be put in the form of a “precise” confusion matrix, since it corresponds to a precise classification, whereas the one from classifier D.4 cannot. These classifiers were derived from the training set of an example described in [9], containing 800 points divided in 3 classes, with half of the data set used for training and the other half for test. Let us denote an entry on the row corresponding to class ci and classification class(z) = {cj, ck} as ai,jk, and let cjk denote the set {cj, ck}. Entries ai,i, i = 1, …, m, thus contain the number of elements that are perfectly classified: the classification is both correct and precise. For class c3 in the table for D.4 we see, for instance, that 115 elements in the data set are correctly, and precisely, classified as belonging to c3 and that 1 element was incorrectly classified as belonging to c1. The entries ai,j, i ≠ j, contain the errors that correspond to what is called overconfidence in the expert judgement literature [3,12]: the assessment is as precise as possible but completely wrong. We see also that 12 elements of c3 are correctly, albeit imprecisely, classified as belonging to either class c1 or c3. Here we call useful the set of elements in Z that are correctly classified but whose classification does not encompass the whole class domain. The last column corresponds to class(z) = C and contains the number of elements whose classification is correct but completely imprecise. The first column, Ø, contains the number of elements for which the classifier could not assign any classification. Note
2 In tree classifiers, compound classes are usually known as impure [10].
that the first and last columns are not exactly the same: the first one models total conflict, whereas the last one models equal evidence for all possibilities. However, in a given application, one might choose to compute them as being the same. Let Z*k denote the set of elements of Z whose classification is correct and has cardinality k; the useful elements of Z, denoted Z*, are those whose classification is correct and has cardinality smaller than m.
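In symbols, one plausible reading of these definitions, consistent with the cardinalities used below (the exact set-builder form is an assumption), is:
Z*k = { z ∈ Z : l(z) ∈ class(z) and |class(z)| = k },    Z* = Z*1 ∪ … ∪ Z*m-1.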
Figure 1. Example of precise (D1) and imprecise (D.4) classification (original data set from [9]).
The cardinality of the perfectly classified elements of D.4 is thus |Z*1(D.4)| = 19 + 14 + 115 = 148 and that of D1 is |Z*1(D1)| = |Z*| = 336. The usual means to verify the quality of a precise classifier D in relation to a data set Z is its accuracy acc(D,Z), taken as the fraction of elements correctly classified.
Using Z* and Z*k we define quality indices qual* and qual*k for an imprecise classifier D, together with indices err and imp that measure error and complete imprecision.
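One plausible reading of these definitions, consistent with the numerical values reported in the example that follows (the exact formulas are an assumption), is:
acc(D,Z) = |{ z ∈ Z : D(z) = l(z) }| / |Z|,
qual*k(D,Z) = |Z*k| / |Z|,    qual*(D,Z) = |Z*| / |Z|,
err(D,Z) = |{ z ∈ Z : l(z) ∉ class(z) }| / |Z|,    imp(D,Z) = |{ z ∈ Z : class(z) = C }| / |Z|.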
In our example, for D.4 we have qual*1 (D.4,Z) = 148/400 = .37, qual*2 (D.4,Z) =
238/400 = .59, err(D.4,Z) = 1/400 = .002 and imp(D.4,Z) = 13/400 = .032, and for D1 we have qual*1(D1,Z) = 336/400 = .84, err(D1,Z) = 64/400 = .16 and imp(D1,Z) = 0. We see that with imprecise classification we trade precision for accuracy, in the sense that even though the classifier’s output is less precise, it commits fewer errors. Figure 1 shows the result of the application of classifiers D1 and D.4 on the training set. The dark symbols indicate incorrect classification; points, circles and triangles respectively indicate precise, somewhat imprecise and completely imprecise classification.
3. Parameterized imprecise classifications
Since from a probabilistic valuation one can always derive a possibilistic one (see [8]), our framework can be used straightforwardly with probabilistic classifiers. Yet more generally, the framework can be used on any set of data that has a set of prototypes associated to each class. Using the prototypes one can generate either a probabilistic classifier (e.g. using k-nn classification and a metric on the attribute space such as the Euclidean distance) or a possibilistic classifier, such as those presented in [13] or [6], based on similarity relations. Our work is focused on discriminant functions that are (normalized) possibility distributions: each gi is a mapping gi: R^n → [0,1] such that ∃z ∈ R^n, gi(z) = 1, i.e., each class has at least one element in the attribute space that is completely compatible with it. The possibilistic valuation v(z): C → [0,1] associated to z is not necessarily a normalized possibility distribution, i.e., there may not exist a class with which z is completely compatible. In this work, we denote by π the possibility distribution obtained by normalizing valuation v, calculated as π(z)(ci) = v(z)(ci)/K, where K = maxi v(z)(ci). Whenever possible, we shall denote v(z)(ci) (resp. π(z)(ci)) as v(ci) (resp. π(ci)). When a classifier yields a valuation, it usually yields as output the class that has the highest support in the valuation. Let us call the procedure that chooses the (usually precise) output from a valuation the classifier’s decision-maker. What we propose here is to take one such classifier and create a parameterized family of decision-makers. Given a valuation π(z) yielded by a classifier L to an element z ∈ Z, we create a decision-maker Dα, α ∈ ]0,1], that yields as output an imprecise class for z, given as
Dα(z) = { ci ∈ C : π(z)(ci) ≥ α }, i.e. the class of z is given by the level-cut with degree α [7]. We applied fuzzy c-means clustering [2] on each class and generated the possibilistic valuations using the similarity-relations-based classifier from [6,5]. A classifier Dα means that we have used the level-cut with degree α as the output of classifier D. We will denote by D<t1,…,tm> a classifier with ti prototypes for class ci. The classifications depicted in Figure 1 are from D1 (precise) and D.4 (imprecise), based on a possibilistic classifier D with a <5,5,10> prototype configuration. Figure 2.a) shows the indices qual*, qual*1, qual*2, imp and err obtained from classifier D for the training data set. Parameter α varies in ]0,1], thus between the most imprecise classification and the most precise one that we can obtain with that configuration.
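As an illustration of this construction, the following sketch (our own, with invented function and variable names) normalizes a valuation into a possibility distribution and extracts the level-cut as the imprecise class:

```python
def normalize(valuation):
    """Turn a valuation v(z): C -> R into a normalized possibility
    distribution pi(z) by dividing every score by the highest one."""
    k = max(valuation.values())
    return {c: score / k for c, score in valuation.items()}

def alpha_cut_classifier(valuation, alpha):
    """D_alpha: return every class whose normalized possibility degree
    reaches the level alpha, with 0 < alpha <= 1."""
    pi = normalize(valuation)
    return {c for c, degree in pi.items() if degree >= alpha}

# Example valuation over three classes for one element z:
v_z = {"c1": 0.2, "c2": 0.9, "c3": 0.7}
print(alpha_cut_classifier(v_z, 1.0))   # {'c2'}        -> most precise
print(alpha_cut_classifier(v_z, 0.4))   # {'c2', 'c3'}  -> less precise, safer
```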
Figure 2. Imprecision-based performance indices for Dα, with prototype configuration <5,5,10>, for a) training and b) test data.
Note that a = acc(D,Z) = qual*(D1,Z) = qual*1(D1,Z) and qual*k(D1,Z) = 0, ∀k ≠ 1. Value b is related to the number of points that are perfectly classified no matter the value of α, i.e., those points for which the possibilistic valuation is a precise fuzzy set. Value c is the lowest value of α for which all correctly classified points in the data set are useful. Value d is the value of α that yields the highest value for qual*. These seem to be good parameters on which to base an imprecise classification. In our example, a = .84, b = .1775, c = .76 and d = .52, with qual*(D.76,Z) = .9725 and qual*(D.52,Z) = .993. Of course, we need a leap of faith to use the indices obtained from the training data to guide us in classifying the data of interest. But that happens for any algorithm that uses parameters derived by learning from examples. However, if the training data reflects the overall behavior of all the data, then the results become more trustworthy. In Figure 2.b) we show the results for the test data set. The data is synthetic and the training and test data sets follow the same distributions; thus, as expected, in this case the indices for the test data set behave very much like those of the training data set. However, the curves for the training and test data sets may be very different from each other, depending on how the two sets relate to each other. Nevertheless, they are consistent in the sense that, for αi ≤ αj, the classifier Dαi is more accurate and less precise than Dαj in both data sets.
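The sketch below shows how such values can be located by sweeping α over a grid on a labelled training set; it reuses the hypothetical alpha_cut_classifier above, and all names and the grid resolution are our own choices rather than the authors' implementation.

```python
def quality_indices(valuations, labels, alpha, classes):
    """Return (qual*, err, imp) for one value of alpha, following the
    reading of the indices sketched in Section 2."""
    n = len(labels)
    useful = err = imp = 0
    for v, true_class in zip(valuations, labels):
        cls = alpha_cut_classifier(v, alpha)
        if true_class not in cls:
            err += 1                    # classification misses the true class
        elif cls == set(classes):
            imp += 1                    # correct but completely imprecise
        else:
            useful += 1                 # correct and informative
    return useful / n, err / n, imp / n

def best_alpha(valuations, labels, classes):
    """Value d of the text: the alpha maximizing qual* on the training set,
    searched over a 0.01-spaced grid."""
    grid = [i / 100 for i in range(1, 101)]
    return max(grid, key=lambda a: quality_indices(valuations, labels, a, classes)[0])
```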
4. Aggregating distinct valuation-based classifiers
For a given data set, the aggregation of two classifiers can yield better results than the individual classifiers. The combination of classifiers is usually called an ensemble, and can be done using the classifiers’ “final” results, i.e. the precise classification, or, in the case of valuation-based classifiers, inside the valuation uncertainty model (see [9],[10] for a thorough study of both themes). In this section we discuss the aggregation of parameterized imprecise classifiers.
3 Values c and d depend on the number of digits used for α.
Let D^1_{α1} and D^2_{α2} denote two parameterized classifiers. The aggregation of D^1_{α1} and D^2_{α2} on a data set Z is given as D^{12}_{α1,α2}(z) = f(D^1_{α1}(z), D^2_{α2}(z)),
where f is a set aggregation function. One such function is f = ∩, modified so that, if D^{12}_{α1,α2}(z) = Ø, we assign the compound class C to z, i.e. z obtains a completely imprecise classification. Figure 3 shows the indices for D^1_{α1} and D^2_{α2} on a data set with two input attributes and two classes, concerning the classification of the origin of sags registered in electricity distribution substations (see [11]). We used f = ∩ as described above and obtained the classifiers D^{12}_{α1,α2}, with α1, α2 ∈ ]0,1]. Figure 4.a shows performance quality indices for D^{12}_{0.6,α2}, i.e., we fixed α1 = .6 and varied α2. Figure 4.b shows the indices for D^{12}_{α1,0.58}. Figure 5 shows the results for classifier D^{12}_{0.6,0.58} for both the training and test data sets. The overall results obtained, for both the training and test data sets, are such that the aggregated imprecise classification is always better than the individual ones, in the sense that imprecision decreased without an increase in the error.
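The intersection-based aggregation can be sketched as follows; the class labels HV and MV are used only for illustration, and the function is our own rendering of the rule described above.

```python
def aggregate_by_intersection(class1, class2, all_classes):
    """Aggregate two imprecise classifications with f = intersection;
    an empty intersection falls back to the completely imprecise class C."""
    agg = class1 & class2
    return agg if agg else set(all_classes)

# Toy example with two classes, as in the sag-origin data set:
C = {"HV", "MV"}
print(aggregate_by_intersection({"HV"}, {"HV", "MV"}, C))  # {'HV'}
print(aggregate_by_intersection({"HV"}, {"MV"}, C))        # {'HV', 'MV'}, i.e. C
```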
Figure 3. Imprecision-based performance indices on training data sets for classifiers D^1_{α1} and D^2_{α2}.
Figure 4. Imprecision-based performance indices on training data sets for classifiers D^{12}_{α1,0.58} and D^{12}_{0.6,α2}.
Figure 5. Results of D^{12}_{0.6,0.58} on the a) training and b) test sets.
5. Conclusion
We have described a general classification formalism that aims at extracting the most useful results from a chosen valuation-based classifier. Usually, when valuation-based classifiers are used, only the accuracy of the precise classification is considered, and valuable data is simply disregarded. Using a possibilistic parameter, we can obtain the most adequate trade-off between accuracy and precision according to the decision-maker’s needs. We have presented some indices to help find the most suitable imprecise classifier for one’s needs. Here we have addressed the question of combining imprecise classifiers. We have tested the approach using a data set with only 2 classes and with a single aggregation function, obtaining good results. Note that in this case, the answer is either completely precise or completely imprecise, which is not very useful in many applications. However, for this particular application (the origin of sags registered in electricity
distribution substations), a reduction of the number of errors, even at the cost of complete imprecision, makes the system more acceptable to human operators, since its assessments are trustworthy. In the future we envisage studying the performance of other aggregation functions and using data sets with a larger number of classes. We also intend to include means to assess imprecise classifications taking cost (or gain) into account.
References
[1] R.O. Duda, P.E. Hart and D.G. Stork, Pattern Classification, John Wiley & Sons, Inc., New York, 2000.
[2] J. Bezdek, R. Ehrlich and W. Full, FCM: The fuzzy c-means algorithm, Computers & Geosciences 10(2-3) (1984), 491–263.
[3] R.M. Cooke, Experts in Uncertainty, Oxford University Press, 1991.
[4] I. Drummond and S. Sandri, Imprecise classification: elicitation, assessment and aggregation, accepted for SBIA'06, Ribeirão Preto, Brazil, October 2006; proceedings by LNAI.
[5] I. Drummond and S. Sandri, A clustering-based fuzzy classifier, in: Vuitè Congrés Català d'Intel·ligència Artificial (CCIA 2005), October 2005, Sardegna, Italy; Artificial Intelligence Research and Development, Frontiers in Artificial Intelligence and Applications, vol. 131, IOS Press, 2005, pp. 247–254.
[6] I. Drummond and S. Sandri, A clustering-based possibilistic method for image classification, Advances in Artificial Intelligence, LNAI 3171, Springer Verlag (2004), 454–463.
[7] D. Dubois and H. Prade, Possibility Theory, Plenum Press, 1988.
[8] D. Dubois, H. Prade and S. Sandri, On possibility/probability transformations, in: Fuzzy Logic: State of the Art, R. Lowen and M. Roubens, eds, Kluwer (1993), 103–112.
[9] L.I. Kuncheva, Fuzzy Classifier Design, Springer Verlag, 2000.
[10] L.I. Kuncheva, Combining Pattern Classifiers: Methods and Algorithms, Wiley, 2004.
[11] J. Melendez, S. Herraiz, J. Colomer and J. Sanchez, Decision trees to discriminate between HV and MV origin of sags registered in distribution substations, submitted.
[12] S. Sandri, D. Dubois and H. Kalfsbeek, Elicitation, assessment and pooling of expert judgments using possibility theory, IEEE Transactions on Fuzzy Systems 3(3) (1995), 313–335.
[13] J.M. Keller, M.R. Gray and J.A. Givens, A fuzzy k-nearest neighbor algorithm, IEEE Transactions on Systems, Man and Cybernetics 15(4) (1985), 580–585.
[14] J.C. Bezdek, J. Keller, R. Krisnapuram and N.R. Pal, Fuzzy Models and Algorithms for Pattern Recognition and Image Processing, Springer, 1999.
6. Multiagent System
Dynamic Electronic Institutions for Humanitarian Aid Simulation Eduard MUNTANER-PERICH1, Josep Lluis DE LA ROSA1, Claudia Isabel CARRILLO FLÓREZ 1, Sonia Lizzeth DELFÍN ÁVILA1, Araceli MORENO RUIZ1 Agents Research Lab, University of Girona, Catalonia, Spain Abstract. In this paper we introduce how to use Dynamic Electronic Institutions for Operations Other Than War simulation. OOTW involves different types of humanitarian aid operations, where a large number of different agents have to collaborate in fast-changing environments. In our opinion Dynamic Electronic Institutions arise from the convergence of two research areas: electronic institutions and coalition formation. We believe that this research topic is well suited to OOTW requirements: decentralised control, knowledge management, short-term associations, flexible negotiations, etc. In this paper we comment on different approaches for OOTW simulation, we present a brief summary of our previous work on dynamic institutions, we explain our ideas on how to use dynamic institutions for OOTW simulation, and we show our first experiments and results in this direction. Keywords. Humanitarian Aid Operations, OOTW, Multi-Agent Systems, Coalition Formation, Electronic Institutions, Multi-Agent Simulation
Introduction
The task of planning humanitarian aid operations is a challenging problem, because it means that a large number of different types of organisations have to collaborate in order to solve problems in fast-changing environments. Operations Other Than War (OOTW) are characterized by non-hierarchical decision making and decentralized control [13]. OOTW involves different types of operations: non-combatant evacuations, disaster relief operations, food distribution, etc. The agents involved in these operations can be NGOs, volunteers, official governmental initiatives, etc. At present, this domain requires different tools: simulation tools for training the different actors involved, and decision support systems for coalition planning. Multi-agent systems are a research topic well suited to these requirements: decentralised control, knowledge management, short-term associations, flexible negotiations, etc. Multi-agent systems comprise communities of heterogeneous, autonomous, proactive and collaborative agents, which can be a good solution to OOTW problems. At this moment there are different research groups and different approaches that try to solve the OOTW issues by using multi-agent systems. CPlanT [13] is a multi-agent system for planning OOTW. This approach combines classical negotiation algorithms (such as the Contract Net Protocol, CNP) with acquaintance model techniques [14]. In their approach each agent is a complex, organized entity playing an active role in the operations. They differentiate between coalitions and alliances: coalitions are sets of agents who agree to fulfil a well-specified
1 University of Girona, Campus Montilivi (Edif. PIV), 17071 Girona, Catalunya, Spain. Tel. +34 972418478, E-mail: {emuntane, peplluis, ccarill, slizzeth, amr}@eia.udg.es
goal, and alliances are sets of agents that share general objectives. Alliances are an effective way to reduce the search space in the coalition formation process. In [13], the authors describe their multi-agent architecture. There are three classes of agents: Resource Agents (e.g. roads, airports, etc.), In-Need Agents (e.g. villages, etc.) and Humanitarian Agents (e.g. NGOs, volunteers, etc.). Another interesting approach is [4]. The authors discuss a model of coalitions in the OOTW domain based on the application of holonic principles. This research hypothesises that agent-oriented holonic behaviour could realise the next generation of decentralised knowledge-based coalition systems. In [5] the authors describe the use of Soar agents in OOTW scenarios. Soar is a pure production system and provides an integration of knowledge, planning, reaction, search and learning. Their work aims to provide a simulation environment for running real-time and dynamic computer-aided exercises. A related research field is logistics planning. There are some interesting approaches [15,17] that deal with e-logistics management, decentralised agent planning, and other OOTW-related issues. It is important to mention that there is a project called RoboCup Rescue [6] (quite related to OOTW), which tries to promote research and development in the rescue domain at various levels. Specifically, the RoboCupRescue Simulation Project has the main purpose of providing emergency decision support by integration of disaster information, prediction, planning, and human interface. In our opinion, these multi-agent approaches still have some problems, most of them because the approaches are mainly based on coalition formation. The problems we have identified are the following four:
• Teamwork problem: a serious problem in coordinating distributed agents that have formed a coalition is how to enable them to work together as a team towards a common goal. This is a strategy coordination problem.
• Work distribution problem: once a coordination strategy has been chosen, there is the problem of how the work should be distributed among the agents. Some of the approaches we have commented on consider only one task per coalition; however, that is not a realistic approach.
• Emergent behaviour problem: coalition formation approaches have an agent-centred view, so the emergent behaviour of the global system can become unexpected. In critical applications (such as OOTW) this can be a significant problem, and it is evident that some regulatory measures must be introduced.
• Dissolution problem: at times, it may be necessary to provide means for the agents to dissolve the coalitions they have formed. We have to detect when the association is no longer needed.
We argue that these problems can be studied and possibly solved by turning the coalitions into temporary electronic institutions (Dynamic Electronic Institutions: DEIs), which is the main aim of our research [8,9]. From our point of view, DEIs result from combining two research lines: electronic institutions and coalition formation. In this paper we explain how to use DEIs in OOTW scenarios, and we present our first experiments and results in this domain. This paper is organized as follows. In section 1 we explain our dynamic electronic institutions approach, the concept of DEIs and their lifecycle. In section 2 we describe our OOTW scenario and our framework. Next, section 3 shows our first experiments and results. Finally, section 4 presents conclusions and future research.
1. Dynamic Electronic Institutions
1.1. Electronic Institutions
From a social point of view, it is easy to observe that the interactions between people are often guided by institutions that help and provide us with structures for daily life tasks. Institutions structure incentives in human exchange, and establish laws, norms and rules to respond to emergencies, disasters, et cetera. Somehow we could say that institutions represent the rules of the game in a society or, more formally, are the humanly devised constraints that shape human interaction [12]. The study of electronic institutions is a relatively recent field; the first approach was [11]. In this thesis, Noriega introduced an abstraction of the notion of institution for the first time. Noriega was also the first to use the term agent-mediated electronic institution, which he describes as: computational environments which allow heterogeneous agents to successfully interact among themselves, by imposing appropriate restrictions on their behaviours. As a first impression, it could seem that these restrictions are a negative factor which adds constraints to the system, but in fact they reduce the complexity of the system, making the agents’ behaviour more predictable. From these first approaches to the current lines of research, several European research groups have been working on similar subjects, each with its particular perspective and approach to the problem [2,3,16,18]. These different approaches to electronic institutions have demonstrated how organisational approaches are useful in open agent systems, but in our opinion they still have several problems and limitations. We have summarized these problems in the following list:
• All the approaches to electronic institutions are based on medium- to long-term associations between agents. This characteristic is useful in some application domains, but it is a significant problem in other domains, where changes in tasks, in information and in resources make temporary associations necessary.
• Electronic institutions require a design phase. It is necessary to automate this design phase in order to allow the emergence of electronic institutions.
• Agents can join and leave institutions, but how do these entrances and exits affect the institutions’ norms and objectives?
• When an institution has fulfilled all its objectives, how can it dissolve its components (agents)?
In our opinion, these problems and limitations can be studied and possibly solved with a coalition formation approach to electronic institutions, in order to develop dynamic electronic institutions. We have presented the notion of dynamic electronic institution and our first exploratory work in our recent works [8,9]. There is little previous work on dynamic electronic institutions: the idea has only recently been introduced as a challenge for agent-based computing, when the term dynamic electronic institution appeared in a roadmap for agent technology [7].
1.2. Dynamic Electronic Institutions
We argue that Dynamic Electronic Institutions (DEIs from now on) can be described as follows [9]: emergent associations of intelligent, autonomous and heterogeneous agents, which play different roles, and which are able to adopt a set of norms in order to interact with each other, with the aim of satisfying individual goals and/or common goals. These formations are dynamic in the sense that they can be automatically formed, reformed and dissolved, in order to constitute temporary electronic institutions on the fly. This type of institution should be able to adapt its norms and objectives dynamically in relation to its present members (agents). There are several application domains that require short-term agent organisations or alliances: Digital Business Ecosystems, Mobile Ad-Hoc Networks, and Operations Other Than War (we will study this application domain in the next section). In our opinion DEIs should have a lifecycle made up of three phases: Formation, Foundation and Fulfilment (we call this lifecycle the “3F cycle” [8]). Figure 1 depicts this cycle.
1. Formation phase: this is the coalition formation phase. Associations between agents which have the same (or similar) goals emerge. Other notions, such as trust between agents, should also be considered as important factors.
2. Foundation phase: the process of turning the coalition into a temporary electronic institution. This phase is the real challenge, because this process is not trivial.
3. Fulfilment phase: when the institution has fulfilled all its objectives, the association should be dissolved. This phase occurs because the association is no longer needed, or because the institution is no longer making a profit.
One of these three phases, the foundation phase, has been poorly studied in the past, and it is the phase on which we are currently focusing our work. We define foundation as the process of turning a coalition into a temporary electronic institution. This phase requires the agents to adopt a set of norms that regulate their interactions. This must be an automated process, without any human intervention, so agents must be able to reason and negotiate at a high level.
Figure 1. DEI construction phases (3F cycle).
Our perspective on this problem is that constructing an institution from scratch without human intervention may be too difficult, so we argue that an approach based on using knowledge from previous cases, such as CBR, could be interesting and useful for solving this issue. Presently, we are directing our efforts in this direction. In our approach we are using Case-Based Reasoning. CBR is based on the idea that new problems are often similar to problems that have been encountered previously, and that past solutions may be of use in the current situation. With a CBR approach to the foundation process, when a coalition has been formed and needs to turn itself into an institution, agents should consult their case databases in order to find the stored institution specification that adapts best to the present situation, and should then make the pertinent reforms to the selected specification in order to obtain an institution that works correctly. Our first exploratory work is focused on OOTW, which, as we have said, is a challenging problem. In the next section we describe our OOTW scenario and the tools and technologies which we have used.
2. Our Scenario and Framework
We have used the JADE/Agent-0 framework [10]. Our agents have a BDI architecture with a mental state composed of three mental categories: Beliefs, Commitments and Capabilities. Within our CBR mechanism, when a coalition has been formed and needs to turn itself into an institution, agents consult a centralized case database in order to find the stored institution specification that adapts best to the present situation. Then the agents have to adopt the norms specified in the selected institution in order to turn the coalition into an institution. In our system, norms are adopted by taking on new commitments. This work is exploratory, and we are using a simplified concept of institution: groups of agents that adopt several norms. In these first experiments we are also assuming that the agents agree with the institution selected by the system. The JADE/Agent-0 framework [10] has been developed by our research group, continuing the work introduced in [1]. The framework extends the Agent-0 language, improving it substantially and modernizing it. Some of the extensions fix problems in the language itself, whereas others are based on integrating Agent-0 with the JADE platform. In order to test our system we have started with a very simple scenario which involves three types of agents:
1. Volunteers (V): every agent represents a group of volunteers.
2. Doctors Without Borders (D): every agent is a unit of this international humanitarian aid organisation.
3. Hospitals (H): every agent represents a hospital.
In our scenario there is a succession of disasters, each of which causes a different number of wounded. The agents attempt to form coalitions in order to aid and heal the wounded, and these coalitions can attempt to form institutions with the aim of improving their teamwork. Each agent has been implemented with the Agent-0 language and has a mental state with beliefs, commitments and capabilities. The agent’s behaviour is governed by commitment rules, which define the agent’s character in relation to the environment and other agents. At the moment we assume that agents share an OOTW ontology. Within a coalition, each agent acts in its own way in order to fulfil the coalition’s objectives, whereas within the institution there are norms that regulate the
agents’ actions and interactions in order to improve the teamwork and reduce the complexity of the system. This means the agents’ behaviour is more predictable. To initialize the system, some institution specifications (examples of typical configurations among the different actors) must be introduced into the case base before starting up the system. Therefore, in the first CBR iterations the coalitions can reuse previous institution cases.
3. Experiments and Results
In this section we explain two experiments that have been carried out with the system described in the previous sections. The first experiment uses the JADE/Agent-0 framework in the OOTW scenario. The system starts with a population of 12 agents: 5 volunteer groups, 5 DWB units, and 2 hospitals. There is a succession of disasters and agents can attempt to form coalitions (but not institutions). The coalition formation mechanism is very simple and based on grouping together the agents that share the objective “heal wounded”. The coalition formation process has time limits, and every disaster causes a different and random number of victims. Each execution iteration implies that a disaster has happened. We study the first 20 iterations, and we have introduced certain incidents during the execution:
• between iterations 10 and 11 a hospital stops being operative (the hospital agent leaves the multi-agent system);
• between iterations 15 and 16 two DWB units stop being operative (the agents leave the multi-agent system).
With these parameters we obtain an average of 55.38% saved victims, and there is an average of 9.8 agents per coalition. We cannot draw conclusions from this experiment alone; we need to compare the results with experiment 2, which includes the CBR foundation approach, and therefore institutions. The second experiment is the same as the previous one, except that institutions can be founded. The system also starts with a population of 12 agents: 5 volunteer groups, 5 DWB units, and 2 hospitals. Then there is a succession of disasters, and agents can attempt to form coalitions. In this experiment, coalitions can turn themselves into institutions using a centralized CBR mechanism (the foundation phase mechanism). The coalition formation mechanism is the same, and each disaster also causes a different and random number of victims. We have introduced 4 institution cases (examples of typical configurations) into the centralized case base to initialize the system and facilitate the first institution adoptions. Table 1 shows 20 iterations of the execution. Each iteration implies that a disaster has happened. Each row presents the initial number of victims, the percentage saved, the number of agents in the coalition formed, the number of agents for each role/type, the reused institution case, and the institution case added to the case base (only if one is added). We have also introduced the same incidents during the execution as in the previous experiment (between iterations 10 and 11 a hospital stops being operative, and between iterations 15 and 16 two DWB units stop being operative).
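As an illustration of the centralized CBR step in the foundation phase, the sketch below retrieves the stored institution case whose role profile is closest to the current coalition; the case representation, the Manhattan-distance similarity and all names (including the norm labels) are our assumptions, not the implemented system.

```python
def role_profile(coalition):
    """Count how many agents of each role (e.g. 'V', 'D', 'H') the coalition has."""
    profile = {}
    for role in coalition:
        profile[role] = profile.get(role, 0) + 1
    return profile

def retrieve_institution(coalition, case_base):
    """Return the norms of the stored case whose role profile is closest
    (Manhattan distance) to the current coalition's profile."""
    current = role_profile(coalition)
    def distance(case):
        keys = set(current) | set(case["profile"])
        return sum(abs(current.get(k, 0) - case["profile"].get(k, 0)) for k in keys)
    return min(case_base, key=distance)["norms"]

# Toy case base: two typical configurations and the norms they worked with.
cases = [
    {"profile": {"V": 4, "D": 4, "H": 2}, "norms": ["triage-first", "report-to-H"]},
    {"profile": {"V": 5, "D": 2, "H": 0}, "norms": ["evacuate-to-nearest-H"]},
]
coalition = ["V", "V", "V", "D", "D", "D", "H"]       # roles of the coalition members
print(retrieve_institution(coalition, cases))          # ['triage-first', 'report-to-H']
```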
Table 1. Results of experiment 2
As we can see in Table 1, there is an average of 56.7% saved victims (59.3% without the outlier), and there is an average of 9.2 agents per coalition. This is a small improvement with respect to the previous experiment, but it is not significant. We have worked with our own framework, our own scenario, and our own examples, so we need to compare our results with other systems and platforms in order to validate them. The importance of these experiments is to show that the foundation phase is feasible, and that the DEI lifecycle can be fully implemented. In Table 1, iterations 13 and 17 are highlighted because they produce the best results. Iteration 11 produces the worst results (because the coalition is turned into an institution that needs hospitals, and the coalition does not have hospital members). As we have said before, there is a project called RoboCup Rescue [6]. This project has some similarities to our research scenario, but for now we still cannot compare them, because RoboCup Rescue has a very complete and exhaustive simulator, whereas ours has been implemented with the purpose of starting our experiments and is still too simple.
4. Conclusions and Future Work
In this paper we have explained the notion of dynamic electronic institutions (DEIs), which we have presented in our recent works [8,9]. We have introduced and
conceptualized their lifecycle: the 3F cycle (Formation, Foundation and Fulfilment). We believe that this research topic can be important in some domains that require short-term agent organisations or alliances. In this direction, we are working on the OOTW scenario (coalition planning for humanitarian aid operations), which is a challenging problem. We are focusing our work on the foundation phase, by using CBR and the JADE/Agent-0 framework [10]. We have presented our first experiments and results, which are encouraging and show that the DEI lifecycle can be fully implemented. There are several open issues in DEIs. These include work on the institutions' adaptivity and on the dissolution process (fulfilment phase).
References
[1] Acebo, E., Muntaner, E., Una plataforma para la Simulación de Sistemas Multiagente Programados en Agent-0. Workshop de Docencia en Inteligencia Artificial, Congreso de la Asociación Española para la Inteligencia Artificial CAEPIA-TTIA, Gijón (2001).
[2] Dignum, V., A Model for Organizational Interaction: Based on Agents, Founded in Logic. Ph.D. thesis, Utrecht University (2003).
[3] Esteva, M., Electronic Institutions: From Specification to Development. Ph.D. thesis, Universitat Politècnica de Catalunya (2003).
[4] Fletcher, M., Jack: A system for building holonic coalitions. In: Proceedings of the Second International Conference on Knowledge Systems for Coalition Operations, pp. 49–60, Toulouse, France (2002).
[5] Kalus, T., Hirst, T., Soar Agents for OOTW Mission Simulation. 4th International Command and Control Research and Technology Symposium, Näsby Slott, Sweden, September (1998).
[6] Kitano, H., Tadokoro, S., Noda, I., Matsubara, H., Takahashi, T., Shinjoh, A., Shimada, S., RoboCup Rescue: Search and rescue for large scale disasters as a domain for multi-agent research. In: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (1999).
[7] Luck, M., McBurney, P., Preist, C., Agent Technology: Enabling Next Generation Computing. A Roadmap for Agent Based Computing. AgentLink II (2003).
[8] Muntaner-Perich, E., Towards Dynamic Electronic Institutions: from coalitions to institutions. Thesis proposal submitted to the University of Girona in partial fulfilment of the requirements for the Advanced Studies Certificate in the Ph.D. program in Information Technologies, Girona (2005).
[9] Muntaner-Perich, E., de la Rosa, J.Ll., Dynamic Electronic Institutions: from agent coalitions to agent institutions. NASA/IEEE Workshop on Radical Agent Concepts (WRAC'05), Greenbelt MD, September 2005. Springer LNCS, vol. 3825 (2006, to appear).
[10] Muntaner-Perich, E., del Acebo, E., de la Rosa, J.Ll., Rescatando AGENT-0. Una aproximación moderna a la Programación Orientada a Agentes. II Taller de Desarrollo en Sistemas Multiagente, DESMA'05, Primer Congreso Español de Informática, CEDI, Granada, 13-16 September (2005).
[11] Noriega, P., Agent Mediated Auctions: The Fishmarket Metaphor. Ph.D. thesis, Universitat Autònoma de Barcelona (1997).
[12] North, D.C., Economics and Cognitive Science. Economic History 9612002, Economics Working Paper Archive at WUSTL (1996).
[13] Pechoucek, M., Barta, J., Marík, V., CPlanT: Coalition Planning Multi-Agent System for Humanitarian Relief Operations. Multi-Agent-Systems and Applications, 363–376 (2001).
[14] Pechoucek, M., Marik, V., Barta, J., A Knowledge-Based Approach to Coalition Formation. IEEE Intelligent Systems 17(3), 17–25, ISSN 1094-7167 (2002).
[15] Perugini, D., Wark, S., Zschorn, A., Lambert, D., Sterling, L., Pearce, A., Agents in logistics planning: experiences with the coalition agents experiment project. In: Agents at Work: Deployed Applications of Autonomous Agents and Multi-Agent Systems, AAMAS'03, Melbourne, Australia (2003).
[16] Rodríguez-Aguilar, J.A., On the Design and Construction of Agent-mediated Electronic Institutions. Ph.D. thesis, Universitat Autònoma de Barcelona (2001).
[17] Ulieru, M., Unland, R., Emergent e-Logistics Infrastructure for Timely Emergency Response Management. In: Engineering Self-Organizing Systems, Di Marzo Serugendo et al. (eds), Springer Verlag, Berlin, ISBN 3-540-21201-9, pp. 139–156 (2004).
[18] Vázquez-Salceda, J., The Role of Norms and Electronic Institutions in Multi-Agent Systems Applied to Complex Domains. The HARMONIA Framework. Ph.D. thesis, Universitat Politècnica de Catalunya. Artificial Intelligence Dissertation Award, ECCAI (2003).
Extending the BDI architecture with commitments
Dorian GAERTNER a,1, Pablo NORIEGA a and Carles SIERRA a
a Institut d’Investigació en Intel·ligència Artificial, IIIA-CSIC
Abstract. In this paper, we describe a novel agent architecture for normative multiagent systems which is based on multi-context systems. It models the three modalities of Rao and Georgeff’s BDI agents as individual contexts and adds a fourth one for commitments. This new component is connected to all other mental attitudes via two sets of bridge rules, injecting formulae into it and modifying the BDI components after reasoning about commitments. As with other normative approaches the need for methods to deal with consistency is a key concern. We suggest three forms of dealing with the truth maintenance problem, all of which profit from the use of multi-context systems. Keywords. BDI agents, normative multi-agent systems, logics for agents
1. Introduction
Autonomous agents are an important development towards the achievement of many of AI’s promises. Among the many proposed agent architectures are Rao and Georgeff’s well-known BDI agents [19] that model mental attitudes of an agent, concretely beliefs (representing the state of the environment), desires (representing the state of affairs the agent wants to bring about) and intentions (representing the currently selected goals). Multi-context systems, devised by Giunchiglia and Serafini [12] to structure knowledge into distinct theories, allow us to define complex systems with different formal components and the relationships between them. Parsons et al. [18] use these systems to model the three BDI modalities as individual components (contexts or units) with bridge rules to describe the dependencies between them. We propose to extend the BDI agent model with a fourth component that keeps track of the commitments an agent has adopted. We view a commitment as a triple consisting of the entity that commits, the entity that the commitment is directed at and the content of the commitment. These entities can be individual agents, groups of agents or institutions. In this paper, we will follow the approach taken by Parsons et al. and model agents as multi-context systems. We describe how the commitment component is connected to the other three contexts via instances of two basic bridge rule schemata and suggest approaches to handle arising inconsistencies. In the next section we are going to formally define the use of the terms
1 Correspondence to: Dorian Gaertner, IIIA, Spain. Tel.: +34 935809570; E-mail: [email protected].
Commitment and Norm that we are employing in this paper. Subsequently, we summarise multi-context systems and explain how we extended them. We show how our architecture lends itself to modelling normative MAS and propose a novel way to operationalise norms. Section 4 is concerned with truth maintenance and consistency issues. Finally, we contrast our architecture with existing ones, present our future work and conclude.
2. Commitments and Norms
Norms, normative agents and normative multi-agent systems have received a lot of attention in recent years. López y López et al. [15] proposed a formal model of these concepts using the Z specification language. García-Camino et al. [11] have analysed the concept of norms in a society of agents and how norms are implemented in an electronic institution. In [8], Dignum et al. extend the BDI architecture to handle norms. They use PDL, a deontic logic, to formalise obligations from one agent to another. Norms, in their view, are obligations of a society or organisation. They explicitly state that a norm of a society is a conditional (p should be true when q is true). Finally, Cohen and Levesque, in their paper ‘Intention is choice with commitment’ [6], talk about internal commitments as a precursor to the social commitments that we concern ourselves with. We consider a commitment from one entity to be directed at another entity. With respect to these entities, one needs to distinguish between individual agents and groups of agents, or electronic institutions [1]. For example, an agent can be committed to an (electronic) institution to behave in a certain way. The institution, on the other hand, may be committed to the agent to reward or punish him, depending on his behaviour. Note that this is different from the case where one agent is responsible for norm enforcement. Commitments may also exist between agents or between different electronic institutions. The content of a commitment can be a certain contract (e.g. an intention to deliver ten crates of apples once the agent believes it has been paid) between two agents, or it can be a norm (e.g. you should not desire your neighbour’s wife). In this paper, we will focus on the latter. The BNF description of our commitment language is hence (where WFF stands for a well-formed formula in an appropriate language, see e.g. [13]):
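A plausible reconstruction of this grammar, based only on the description of commitments as triples between entities given above (the nonterminal names are assumptions), is:
COMMITMENT ::= Commit(ENTITY, ENTITY, CONTENT)
ENTITY ::= AGENT | GROUP_OF_AGENTS | INSTITUTION
CONTENT ::= [WFF]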
We consider a norm to be a conditional, first-order logic formula that relates mental attitudes of an agent. All variables are implicitly universally quantified. In this paper, we are using beliefs (B), desires (D) and intentions (I) to model an agent’s mental state. For example,
B(gender(x) = gender(y)) → ¬I(dance(x, y))
is a norm which can be read as “for any two agents, if they have the same gender, they should not intend to dance together” (example taken from the social ballroom of Gaertner et al. [10]). The argument of the mental literals can be any term, and the implication arrow of the norm can always informally be translated with the English word should. Although in the above formula the modalities B and I should have a subscript x, indicating that we are talking about beliefs and intentions of agent x, we drop these subscripts for readability whenever it is clear from the context which agent is referred to. Furthermore, we never need distinct subscripts in the same norm formula, since it does not make sense to say that a belief of one agent causes an intention for another agent. A norm, for us, is a social phenomenon, in that it applies to all agents in a given society or institution. Each agent is then committed to the institution to obey the norm. We therefore stipulate that if ϕ is a norm in an institution Π, the following must hold:
∀a ∈ Π : Commit(a, Π, [ϕ])
where [ϕ] is the codification of a norm as a term in Gödel’s sense. Contrast this with the notion of a contract, which in most cases affects only two parties (or agents) and rests on a joint commitment between them, where the notion of a joint commitment can be defined in arbitrarily complex ways. In what follows, we will mostly talk about agents who are committed to the institution they belong to. We therefore drop this information (i.e. the first two parameters) for brevity’s sake. Unless otherwise stated, a commitment to ϕ of the form Commit([ϕ]) should be read as Commit(self, myInstitution, [ϕ]).
3. Multi-context Architecture: BDI+C
Multi-context systems (MCS) were first proposed by Giunchiglia and Serafini in [12] and were subsequently used in a generic agent architecture by Noriega and Sierra in [16]. Individual theoretical components of an agent are modelled as separate contexts or units, each of which contains a set of statements in a language Li together with the axioms Ai and inference rules Δi of a (modal) logic. A unit is hence a triple of the form uniti = ⟨Li, Ai, Δi⟩.
Not only can new statements be deduced in each context using the deduction machinery of the associated logic, but these contexts are also inter-related via bridge rules that allow the deduction of a formula in one context based on the presence of certain formulae in other, linked contexts. An agent is then defined as a set of context indices I, a function that maps these indices to contexts, and another function that maps these indices to theories (providing the initial set of formulae in each context), together with a set of bridge rules BR, as follows:
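One plausible rendering of this definition (the exact tuple notation is an assumption) is
Ag = ⟨ I, {uniti : i ∈ I}, {Ti : i ∈ I}, BR ⟩,
where Ti is the initial theory of uniti.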
The BDI+C agent architecture we are proposing in this paper extends Rao and Georgeff’s well-known BDI architecture with a fourth component which keeps track of the commitments of an agent. Below we describe a BDI+C agent as a multi-context system, inspired by the work of Parsons, Sierra and Jennings (see [18]). Each of the mental attitudes is modelled as an individual unit. Contexts for communication and planning (a functional context) are present in addition to the belief-, desire-, intention- and commitment-context, but in this paper we will focus on the latter four. For the belief context, we follow the standard literature (see for example [17] and [19]) and choose the modal logic KD45, which is closed under implication and provides consistency as well as positive and negative introspection. However, it does not have veridicality, which means that the agent’s beliefs may be false; in such a situation the agent itself is not aware that its beliefs are false. Like Rao and Georgeff [19] we also choose the modal logic KD to model the desire and intention components. For the commitment context, the logic consists of the axiom schema K, closure under implication, together with the consistency axiom D. This does allow for conflicting commitments, but prohibits being committed to something and not being committed to that same thing at the same time. That means that we do not allow both Commit([ϕ]) and ¬Commit([ϕ]) to be present at the same time. However, it is perfectly possible to have both Commit([ϕ]) and Commit(not[ϕ]) in the commitment context. Due to the existence of schema K in this context, one can derive Commit([ϕ] and not[ϕ]), which is different from Commit(false). The argument is just a term; we could assign any semantics to it. The beauty of MCS is that it allows us to embed one logic as terms into another logic (the logic of the component). Therefore, Commit(and([ϕ], not[ϕ])) evolves to Commit(false) only if we apply propositional logic to the language modelled in this context. The two introspection axioms do not apply, since it does not make sense to say that once an agent is committed to some cause, it is also committed to being committed to this cause; similarly for negative introspection. All units also have modus ponens, uniform substitution and all tautologies from propositional logic.
3.1. Bridge Rules
There are a number of relationships between contexts that are captured by so-called bridge rules. A bridge rule of the form (here written on one line for convenience)
u1: ϕ, u2: ψ ⟹ u3: θ
can be read as: if the formula ϕ can be deduced in context u1 and ψ in u2, then the formula θ is to be added to the theory of context u3. It allows formulae in one context to be related to those in another. In [18] three different sets of bridge rules are described which model realistic, strongly realistic and weakly realistic-minded agents. Figure 1(a) shows the model of a strongly realistic agent. Note that in these figures, C represents the communication unit and CC the commitment unit. One of its bridge rules, for example, states that something that is not desired should also not be intended. In addition to the information-propagating bridge rules in the figure, there are more complex rules related to awareness of intention and impulsiveness between the belief and intention units (see [17]). These are common to all strongly realistic agents. Finally, there are domain-specific rules, which link the contexts to the communication unit and control the impact of interaction with the environment on the mental state of an agent. An example of this is the bridge rule that stipulates that everything that is communicated to an agent to be done is believed to be done.
Figure 1. (a) Multi-context description of a strongly realist BDI agent taken from [18]. Note that C stands for a standard communication context, as the authors did not concern themselves with commitments. (b) Commitment overlay for normative agents. In this figure, square brackets represent Gödel codification.
We are proposing to add an extra layer of bridge rules to existing BDI multi-context agents that controls the content of the mental contexts via norms. Remember that we earlier stated that an adopted norm becomes a conditional commitment. The default normative personality of an agent is expressed as follows: an agent commits to believe everything it believes, commits to desire everything it desires and commits to intend everything it intends; and an agent believes what it is committed to believing, desires what it is committed to desiring and intends what it is committed to intending. This is modelled via two sets of bridge rules, where Φ stands for any of B, D or I:
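A plausible rendering of the two schemata, consistent with how they are referenced in Section 4 (the exact notation is an assumption), is:
(*)  Φ: ϕ  ⟹  CC: Commit([Φ(ϕ)])
(**) CC: Commit([Φ(ϕ)])  ⟹  Φ: ϕ
where Φ ∈ {B, D, I} and CC denotes the commitment context.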
Two examples of this can be seen in Figure 1(b), which depicts the normative layer of bridge rules we propose. Formulae (restricted to mental literals) from any of the three standard contexts are injected into the commitment context via a bridge rule of the form (*), where they encounter norms (first-order logic implications). Since the commitment context is closed under implication, the deduction machinery inside this context can be thought of as applying the norms. The resulting formulae of the local reasoning in the commitment unit are then injected back into the appropriate context via a bridge rule of the form (**). The six arcs in the figure represent the default normative personality of an agent. It is perfectly reasonable to imagine agents with different attitudes towards norms. A rebellious agent, for example, may not desire or intend everything that it is committed to desiring or intending. Modelling agent types can therefore proceed on two levels. At the standard level, between the belief-, desire- and intention contexts, personality traits like strong realism can be modelled, whereas character traits related to norms and norm adoption can be mimicked by modifying the overlay net of bridge rules involving the commitment context. The proposed architecture is, operationally speaking, very simple. The complexity of norm execution is dealt with in the commitment context, whose logic is easily modifiable. Our modular, layered approach is a natural, clean extension that provides BDI agents with a new capability, namely norm compliance.
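To make the mechanics concrete, the following toy sketch (our own simplification, not the authors' implementation) represents contexts as sets of formula strings, applies norms inside the commitment context, and uses (*)/(**)-style rules to move formulae in and out of it:

```python
class BDIPlusCAgent:
    """Toy BDI+C agent: B, D, I contexts plus a commitment context CC."""

    def __init__(self, norms):
        self.contexts = {"B": set(), "D": set(), "I": set(), "CC": set()}
        self.norms = norms          # list of (premise, conclusion) pairs, e.g. ("B(p)", "I(q)")

    def inject(self, unit, formula):
        self.contexts[unit].add(formula)

    def step(self):
        # (*): every mental literal Phi(phi) yields Commit([Phi(phi)]) in CC.
        for unit in ("B", "D", "I"):
            for phi in self.contexts[unit]:
                self.contexts["CC"].add(f"Commit([{phi}])")
        # Norm application inside CC (a stand-in for closure under implication).
        for premise, conclusion in self.norms:
            if f"Commit([{premise}])" in self.contexts["CC"]:
                self.contexts["CC"].add(f"Commit([{conclusion}])")
        # (**): committed mental literals are injected back into their context.
        for c in list(self.contexts["CC"]):
            literal = c[len("Commit(["):-len("])")]
            unit = literal[0]                      # 'B', 'D' or 'I'
            if unit in ("B", "D", "I"):
                self.contexts[unit].add(literal)

# The kind of derivation discussed in Section 4: a norm of the shape B(phi) -> I(psi).
agent = BDIPlusCAgent(norms=[("B(paid)", "I(deliver)")])
agent.inject("B", "B(paid)")
agent.step()
print("I(deliver)" in agent.contexts["I"])   # True
```

In a real multi-context system each context would run its own logic and the bridge rules would be declarative, but the flow of formulae between the mental contexts and the commitment context is the same.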
4. Truth Maintenance Problem
Adopting a norm and hence adding a commitment is in some way like opening a channel linking different parts of the agent's 'brain'. For example, a commitment of the form Commit([B(ϕ)→I(ψ)]) causes I(ψ) to be deduced in the intention context if B(ϕ) is deducible in the belief context. The reasoning is as follows:
• a bridge rule from the normative layer (see (*) in Section 3) adds Commit([B(ϕ)]) to the commitment context, since B(ϕ) is deducible in the belief context;
• the adopted norm together with an instance of schema K allow us to deduce Commit([I(ψ)]);
• another normative bridge rule (see (**)) injects I(ψ) into the intention context.
One can therefore think of this as having a bridge rule linking the belief and the intention contexts, which is only activated once Commit([B(ϕ)→I(ψ)]) is present in the commitment context. What happens, however, if B(ϕ) is removed from the belief context? Should one also remove I(ψ) from the intention context? What impact does the revocation of the commitment have? In any case, one has to ensure the consistency of all the mental attitudes (since their respective logics contain schema D). Generally, adopting a norm has extensive ramifications with respect to consistency. This dilemma
is known as the truth maintenance (TM) problem. Artificial Intelligence has seen many different approaches to the TM problem. For our purposes, the most promising ones are the following.
4.1. Standard truth maintenance systems
Once a bridge rule has fired and tries to inject a formula into a context, it is the responsibility of the context to maintain consistency. In the simplest case, it checks whether the formula to be inserted is inconsistent with the existing theory of the context and, if it finds this to be the case, rejects the proposed injection. This is a very simplistic approach that only allows monotonic updates. More sophisticated truth maintenance systems can handle non-monotonic updates or belief revision. A formula which contradicts the existing theory in a context can still be inserted, but some machinery must then revise the theory to make it consistent again (by removing some of the causes of the contradiction). Two main approaches whose use we are currently investigating are justification-based truth maintenance systems like the one proposed by Doyle [9] and assumption-based truth maintenance systems following the work by de Kleer [7].
4.2. Argumentation
A traditional way of resolving conflicts is to consider the arguments in favour of and against a decision and choose those that are more convincing. The area of argumentation studies this process and gives tools, mostly based on logical approaches, to automate this decision process (see for example the work detailed in [3] and [18]). In these works the decision is made considering that arguments are proofs in a logical formalism and that the proofs attack one another by deducing opposite literals or rebut one another by deducing the opposite of a literal used to support a proof. In our case we need a notion of argument that bases the attack not only on some logical relationship between the proofs used to support two opposite literals, but also on the fact that some of the proofs are based on the application of norms. Therefore, beyond the logical attack one has to consider the strength of an argument in terms of how well it is supported by the norms of the institution. In that sense, we suggest including in the proof the set of norms applied to generate a commitment and using a measure over them when the content of the commitment challenges a pre-existing intention, belief or desire. This measure can be based on specific reasoning about the willingness of the agent to respect a given norm, or on the degree of adoption of the norm by the agent.
4.3. Decision Theory and Graded Mental Attitudes
Decision theory, on the other hand, is based on utilities. When faced with two conflicting intentions, one dictated by a norm and the opposite dictated by its desires, the agent may decide to violate the norm if this violation and the fulfilment of its desire are more satisfying than conforming to the norm. In order to decide what is more satisfying, we propose to use graded mental attitudes similar to the work done by Casali et al. [4]. In their work, the atomic formulae in contexts are no longer of the form B(term) but instead are enriched with a weight ε to give B(term,ε). This weight represents the
degree of belief. Similarly, for desires, it represents the degree of desire allowing us to attach priorities to certain formulae. In the case of intentions, the weight can be used to model the cost/benefit trade-off of the currently intended action. Finally, a weight on a commitment indicates the degree of adoption. Using these graded modalities, one can compute the utility of each of the conflicting atoms and act accordingly.
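As a hypothetical illustration of this decision-theoretic resolution (the weighting and comparison below are our own assumptions, not the formalisation of Casali et al. [4]), two conflicting graded intentions can be compared by their weights:

def resolve(norm_intention, desire_intention):
    """Each argument is a (formula, weight) pair; keep the more satisfying one."""
    (f_norm, w_norm), (f_des, w_des) = norm_intention, desire_intention
    return f_des if w_des > w_norm else f_norm

# Degree of adoption of the norm vs. degree of the conflicting desire.
print(resolve(("I(stay)", 0.6), ("I(leave)", 0.8)))   # prints I(leave): the norm is violated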
5. Related Work
We have referred to related work throughout the paper, in particular in the first half. In this section, we aim to contrast our proposal with two particular lines of work, namely a modified BDI interpreter by Dignum et al. [8] and the BOID architecture by Broersen et al. [2]. Dignum and his colleagues add one step to the main loop of the BDI interpreter in which selected events are augmented with deontic events by repeatedly applying the introspective norms and obligations [8]. They distinguish between norms (that hold for a society) and obligations (that hold between two agents). They rank obligations based on the punishments associated with their violation and norms based on their social benefit. Our view of commitments is broader in that we allow the committed entities and the subjects of the commitment to be agents, groups of agents or entire societies. The architecture we propose is more flexible, too, since each component has its own logic and the relationships between components can be varied dynamically. The BOID architecture by Broersen and his colleagues has many similarities with our work. It contains four components (B, O, I and D) where the O component stands for obligations (as opposed to commitments in our case) and the other components have the usual meaning. They suggest feedback loops that feed the output of every component back to the belief component for reconsideration. The order in which components are chosen for rule selection determines the kind of character the agent possesses. For example, if obligations are considered before desires, the agent is deemed to be social. One drawback is that they only consider orders in which the belief component overrules any other modality [2]. Furthermore, these orders are fixed for each agent. Using our agent architecture, agent types can be modified dynamically. Also, the relationship between mental attitudes can be controlled at a finer level of granularity (e.g. domain-specific rules connecting multiple contexts rather than the strict ordering of components required in the BOID architecture).
6. Future Work
We are currently working on implementing the BDI+C agent architecture using QuP++ [5], an object-oriented extension of QuProlog. The advantage of this particular Prolog variant is its multi-threadedness and support for reasoning. We are implementing every context as an individual thread and using separate threads for the bridge rules to synchronise between the contexts. Another line of research is concerned with generalising both the architecture and the implementation to handle graded mental attitudes. Casali et al. [4] have formalised the notion of uncertainty for the BDI model and we believe it can be employed in the BDI+C model in order to tackle the truth maintenance problem.
Furthermore, it will allow us to represent the character or type of an agent more closely. We even envisage the ability to express the mood of an agent by dynamically changing the degree to which it believes, desires, intends and sticks to its commitments. Our interest also lies in investigating temporal aspects of norms and norm adoption. In [20], Sabater et al. extend the syntax of bridge rules by introducing the notions of consumption and time-outs. We intend to make use of these extensions in order to allow for more expressiveness in the formulation of normative commitments. Lopéz y Lopéz, in her doctoral thesis [14], describes different strategies for norm adoption ranging from fearful, rebellious and greedy character traits to reciprocation and imitation of other agents. All her strategies are based on potential rewards or punishments. Broersen et al. [2] define agent characters based on a fixed order of the belief-, obligation-, intention- and desire-components (though they do not use multi-contexts, one can think of their components as such). They also give names such as 'super-selfish' to some of these orderings. Using the extended bridge rule layer of our architecture combined with graded versions of the mental attitudes, we can define different agent characters more formally and at a much finer level of granularity. The notions of release from commitment and norm evolution are also very interesting in this context. We intend to stretch the applicability of our proposed agent architecture to find out its limitations and possibly expand it.
7. Conclusions
In this paper we have outlined a conservative extension of BDI agent architectures to capture the notion of commitments. We have proposed how to make these extensions operational in terms of multi-context logics and illustrated them with an example of dance negotiations following the etiquette conventions of a ballroom (more details about social norms and etiquette can be found in [10]). We found that our proposed extension of a BDI architecture to incorporate commitment is easy to formalise and make operational, and that it has the following features:
1. It may be readily added on top of a given BDI model by simply including a new context and bridge rule schemata linking it to each of the other modalities.
2. Although we have proposed a schema that is uniform for all modalities, it is easy to fine-tune any given formalisation of the features of the commitment unit and the underlying BDI architecture in order to capture alternative formalisations, shades of meaning or the character or personality of an agent.
3. Our BDI+C model appears to be general enough to explore with it the complex aspects of legal consequence, especially in its concrete aspects of individual norm compliance with respect to the attitude of an agent towards authority, utility, selfishness and other features that have been addressed by the MAS community.
4. The notion of norms as an initial theory for the commitment context and the commitment-dependent bridge rules provide convenient ways to study para-normative aspects like norm adoption, compliance, blame assignment,
violation, reparation or hierarchical normative sources. Likewise, the notion of contract could be modelled as joint commitments and added to the commitment context.
5. In a similar fashion, we have only pointed out a straightforward translation of norms as commitments between individuals and an institution, although it should be evident that other notions of authority (hierarchies of norms, issuers of norms, contingent applicabilities of norms) may be modelled along the same lines.
6. The evolution of the belief, desire, intention and commitment theories as interaction proceeds, and the associated consistency issues, may be addressed with the type of tools that have been applied to other dynamic theories, although in this paper we only hinted at three mechanisms: standard truth-maintenance systems, graded versions of the modalities and argumentation.
Acknowledgements
The authors would like to thank the Spanish Ministry of Education and Science (MEC) for support through the Web-i-2 project (TIC-2003-08763-C02-00). The first author is also grateful for a student grant from the Spanish Scientific Research Council through the Web-i(2) project (CSIC PI 2004-5 0E 133).
References [1] Josep Ll. Arcos, Marc Esteva, Pablo Noriega, Juan Antonio Rodríguez, and Carles Sierra. Environment engineering for multiagent systems. Journal on Engineering Applications of Artificial Intelligence, 18(2):191–204, 2005. [2] Jan Broersen, Mehdi Dastani, Joris Hulstijn, Zisheng Huang, and Leendert van der Torre. The BOID architecture: conflicts between beliefs, obligations, intentions and desires. In AGENTS ’01: Proceedings of the fifth international conference on Autonomous agents, pages 9–16, New York, NY, USA, 2001. ACM Press. [3] Marcela Capobianco, Carlos I. Chesñevar, and Guillermo Simari. Argumentation and the dynamics of warranted beliefs in changing environments. Intl. Journal on Autonomous Agents and Multiagent Systems (JAAMAS), 11:127–151, September 2005. [4] Ana Casali, Lluis Godo, and Carles Sierra. Graded BDI models for agent architectures. In 5th International Workshop on Computational Logic in Multi-Agent Systems (CLIMA V), pages 126–143, Lisbon, Portugal, 2004. [5] Keith L. Clark and Peter J Robinson. Computational logic: logic programming and beyond: essays in honour of Robert A. Kowalski, Part I, chapter Agents as multi-threaded logical objects, pages 33–65. Springer, 2002. [6] Philip R. Cohen and Hector J. Levesque. Intention is choice with commitment. Artificial Intelligence, 42(2-3):213–261, 1990. [7] Johan de Kleer. An assumption-based TMS. Artificial Intelligence, 28(2):127–162, 1986. [8] F. Dignum, D. Morley, E. Sonenberg, and L. Cavendon. Towards socially sophisticated BDI agents. In E. H. Durfee, editor, ICMAS ’00: Proceedings of the Fourth International Conference on MultiAgent Systems (ICMAS-2000), pages 111–118, Washington, DC, USA, 2000. IEEE Computer Society. [9] Jon Doyle. A truth maintenance system. Artificial Intelligence, 12(3):231–272, 1979. [10] Dorian Gaertner, Keith L. Clark, and Marek J. Sergot. Ballroom etiquette: a case study for normgoverned multi-agent systems. In Proceedings of the First International Workshop on Coordination, Organization, Institutions and Norms (COIN) 2006 at AAMAS, Hakodate, Japan, 2006.
[11] Andrés García-Camino, Pablo Noriega, and Juan Antonio Rodríguez-Aguilar. Implementing norms in electronic institutions. In Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems (AAMAS 2005), pages 667–673, New York, NY, USA, 2005. ACM Press. [12] Fausto Giunchiglia and Luciano Serafini. Multilanguage hierarchical logics or: How we can do without modal logics. Artificial Intelligence, 65(1):29–70, 1994. [13] John Knottenbelt and Keith Clark. Contract related agents. In Sixth International Workshop on Computational Logic in Multi-Agent Systems (CLIMA VI), 2005. [14] Fabiola Lopéz y Lopéz. Social Powers and Norms: Impact on Agent Behaviour. PhD thesis, University of Southampton, 2003. [15] Fabiola Lopéz y Lopéz, Michael Luck, and Mark d’Inverno. A normative framework for agent-based systems. In Proceedings of the 1st International Symposium on Normative Multiagent Systems, 2005. [16] Pablo Noriega and Carles Sierra. Towards layered dialogical agents. In ECAI ’96: Proceedings of the Workshop on Intelligent Agents III, Agent Theories, Architectures, and Languages, pages 173–188, London, UK, 1997. Springer-Verlag. [17] Simon Parsons, Nicholas R. Jennings, Jordi Sabater, and Carles Sierra. Agent specification using multicontext systems. In Selected papers from the UKMAS Workshop on Foundations and Applications of Multi-Agent Systems, pages 205–226, London, UK, 2002. Springer-Verlag. [18] Simon Parsons, Carles Sierra, and Nick Jennings. Agents that reason and negotiate by arguing. Journal of Logic and Computation, 8(3):261–292, 1998. [19] Anand S. Rao and Michael P. Georgeff. BDI agents: From theory to practice. In Proceedings of the First International Conference on Multiagent Systems (ICMAS), pages 312–319, San Francisco, California, USA, 1995. [20] Jordi Sabater, Carles Sierra, Simon Parsons, and Nicholas R. Jennings. Engineering executable agents using multi-context systems. J. Log. Comput., 12(3):413–442, 2002.
Artificial Intelligence Research and Development M. Polit et al. (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press.
Recommendations Using Information from Selected Sources with the ISIRES Methodology
Silvana Aciar, Josefina López Herrera and Josep Lluis de la Rosa
Agents Research Laboratory, University of Girona
{saciar, peplluis}@eia.udg.es, josefi[email protected]
Abstract. Recommender systems have traditionally made use of the whole variety of available sources to obtain suitable information for making recommendations. There are costs associated with the use of information sources, and those costs are an important determinant in the choice of which information sources are finally used. For example, recommendations can be better if the recommender knows where the suitable information is for predicting users' preferences in order to offer products. Sources that provide information that is timely, accurate and relevant are expected to be used more often than sources that provide irrelevant information. This paper shows how the precision of the recommendations using either Collaborative Filtering (CF) or Content-Based Filtering (CBF) increases by selecting the most relevant information sources based on their intrinsic characteristics.
Keywords. Recommender Systems, Information Integration, Negotiation
1. Introduction
Information overload is one of the most important problems faced by Internet users nowadays. The information overload phenomenon is determined by the lack of methods to compare and process the available information. Recommender systems address this problem by filtering the most relevant information for the user's purpose. These systems receive as input the preferences of the users, analyse them and deliver recommendations. Even so, the search for specific information for recommender systems is a difficult task. The literature in the field of recommender systems is focused on the methods that are used to filter the information to make the recommendations. Methods such as Content-Based Filtering (CBF) [4] [5] and Collaborative Filtering (CF) [8][9] are significant examples in this field. This paper presents the results of applying both methods, Collaborative Filtering (CF) and Content-Based Filtering (CBF), with information from sources selected based on their intrinsic measures. The ISIRES (Identifying, Selecting, and Integrating information for Recommender Systems) methodology [2] has been used to select the most relevant information sources. ISIRES identifies information sources based on a set of intrinsic characteristics and selects the most relevant ones for making recommendations. The paper is organized as follows. Section 2 presents the ISIRES framework used to implement the recommender methods. In Section 3, the application of the recommender methods is shown. In Section 4 a case study with some results is described and, finally, conclusions are drawn in Section 5. 1
Correspondence to: University of Girona, Campus Montilivi Tel.: +34 972 41 8478; Fax: +34 972 418 976
Figure 1. Functional blocks of the methodology ISIRES
2. Framework: ISIRES
A methodology for identifying information sources, comparing them and selecting the most relevant information to make recommendations has been proposed and described by Aciar et al. in [2]. The methodology is composed of four blocks, which are shown in Figure 1:
1. Description: A set of intrinsic characteristics of the sources has been defined for the identification of relevant information to make recommendations [3]. The intrinsic characteristics that describe the information contained in the information sources to be used in a recommender system are: completeness of the information contained in the sources, diversity of the user groups, timeliness of the information, frequency of the transactions made by the users and the quantity of user information available in the sources. These characteristics are an abstract representation of the information contained in the sources and the criteria used to compare and select the sources.
2. Selection: The selection of the information sources is made based on the intrinsic characteristics and a value of relevance [3]. Given a recommendation requirement, the sources with the most relevant characteristics to satisfy this requirement are selected. For more detail see Aciar et al. [2].
3. Integration: A mapping between the ontologies is made to integrate the information from the selected sources, which are distributed and heterogeneous. The mapping has been made by defining the semantic relationships between the ontologies [2].
4. Recommendation: Once the most relevant information sources have been selected, it is possible to apply either of the recommender methods. This paper focuses on the application of both recommender methods: CBF and CF.
The next sections show the results obtained from applying both methods with information sources selected using the ISIRES methodology versus applying the methods using the information from all sources.
3. Applying the Recommendation Methods
Two main methods have been used to compute recommendations: Content-Based Filtering and Collaborative Filtering. These methods are implemented in this paper both with information from the sources selected using the ISIRES methodology and with information used without applying the methodology, in order to evaluate our approach.
3.1. Content-Based Filtering (CBF)
The system processes information from some sources and tries to extract useful features and elements about its content [5]. In this method the attributes of the products are extracted and compared with a user profile (preferences and tastes). Vectors are used to represent the user profile and the products. The cosine function based on the vector space model proposed by Salton [7] has been used to establish the relevance that a product has for a user (see equation 1). The products that have a higher value of relevance are recommended to the users.
Where P and U are vectors representing a product and a user respectively.
3.2. Collaborative Filtering (CF)
The information provided by users with similar interests is used to determine the relevance that the products have for the present user. The similarity between users is calculated for this purpose and the recommendations are made based only on this similarity; the bought products are not analyzed as is done in CBF. The cosine vector similarity [7] is also used to compute the distance between the representation of the present user and the other users (see equation 2). All users are represented by vectors.
Where U and V are the user vectors.
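Equations 1 and 2 are not reproduced in this excerpt; both are the standard cosine measure over the vector representations. A minimal sketch (variable names are ours), serving both the CBF relevance of a product for a user and the CF similarity between two users, is:

import math

def cosine(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return dot / norm if norm else 0.0

product_vector = [0.0, 3.0, 5.0]   # attribute weights p_i assigned by the expert
user_profile   = [0.2, 2.5, 4.0]   # tf-idf weights u_i from past purchases
print(cosine(product_vector, user_profile))   # CBF relevance of the product for the user

other_user = [0.1, 2.0, 4.5]
print(cosine(user_profile, other_user))       # CF similarity between two users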
Table 1. Intrinsic characteristics and relevance of the information sources for CBF recommender
3.3. Evaluating Recommendations
The purchases made by the users after the recommendations are used to evaluate the precision of the system. The precision represents the probability that a recommendation will be successful and is calculated using equation 3 [6].
Where RP is the number of recommended products that have been bought and R is the total number of recommended products.
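Equation 3 itself is not reproduced in this excerpt; given the definitions of RP and R above, it presumably reads

\[ \mathrm{Precision} = \frac{RP}{R} \]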
4. Case-study
Eight databases in the consumer packaged goods (retail) domain have been used. The databases are related tables containing information on the retail products, 1200 customers and the purchases made by them during the period 2001-2002 in the Caprabo supermarket. For each database, the intrinsic measures have been calculated to obtain the relevance. The relevance is computed as:
Where ∫i is the weight given to each characteristic i. The most important characteristic for the recommendations has the highest weight. V is the value of characteristic i and n is the number of characteristics. See Aciar et al. [1] for more detail about the selection of the sources. Table 1 and Table 2 show the values of the characteristics, the weight ∫i of each characteristic and the relevance value of each source calculated to apply CBF and CF respectively. The priority value ∫i of the characteristics is different depending on the recommendation method, so the relevance value of each source is different for CBF and CF.
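The relevance equation itself does not appear in this excerpt. From the description above, it is presumably a weighted sum of the form

\[ \mathrm{Relevance} = \sum_{i=1}^{n} w_i\, V_i \]

where w_i stands for the weight of characteristic i (rendered as ∫i in the extracted text), V_i for its value and n for the number of characteristics; whether the sum is additionally normalised by the weights is not shown here.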
Table 2. Intrinsic characteristics and relevance of the information sources for CF recommender
4.1. Content-Based Filtering (CBF)
The relevant attributes of the products have been defined by a supermarket expert. The user preferences have been established based on these attributes and are represented by a vector:
The weight ui has been obtained using the tf-idf method (Term Frequency Times Inverse Document Frequency) [6], whose calculation is based on the previous purchases of the user.
Here ti is the frequency of attribute i in the purchases, ni is the number of users who have bought a product with attribute i and N is the total number of users. The product vector is composed of the weights pi, which have been assigned by the supermarket expert.
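The tf-idf expression is not shown in this excerpt; a standard weighting consistent with the definitions above would be

\[ u_i = t_i \cdot \log\frac{N}{n_i} \]

although the exact variant used by the authors may differ.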
The relevance of each product for the users has been established with equation 1, using the vectors representing the users and the products. The products with a relevance value > 6 have been recommended to the users.
4.2. Collaborative Filtering (CF)
The user vectors representing the users' preferences are composed as follows:
The weight ui has been obtained from the previous purchases of the user using the tf-idf method (Term Frequency Times Inverse Document Frequency) [6], as in CBF.
Figure 2. Precision of the recommendations using CBF with all sources
Where ti is the frequency of attribute i in the purchases, ni is the number of users who have bought a product with attribute i and N is the total number of users. The similarity between users has been established with equation 2, using the vectors representing the users. The products bought by other users with a similarity value > 6 have been recommended to the user.
4.3. Evaluating Recommendations
The experiments have been made by implementing both methods, CBF and CF, with the information from all the sources (8 databases), i.e. without the methodology, and with information from the sources selected using the methodology. The precision of the recommendations has been evaluated using equation 3. Figure 2 and Figure 3 show the precision of the recommendations with all sources. Figures 4 and 5 show the precision of the recommendations made using the information from the relevant sources. The relevance values have been obtained using the ISIRES methodology [2]. The sources with the highest relevance values have been used in the recommendations. For CBF, the sources S3, S7 and S8 have smaller values than the other sources; these sources have not been used for the recommendations. For CF, the sources S1, S5 and S6 have the smallest values, so these sources are not relevant enough for the recommendations. The results show that the recommendations improve when only the sources with the most relevant information are used. More information does not mean better results; knowledge is relevant and organized information. With the ISIRES methodology we can select the most relevant information sources to obtain knowledge about the users and offer them the most relevant products.
Figure 3. Precision of the recommendations using CF with all sources
Recommender systems traditionally tend to use all the information available, but this does not ensure that the recommendations improve. We use the ISIRES methodology to select the most relevant information sources for a recommender
system. The selection of the sources is based on their intrinsic characteristics. This paper compares the precision of the recommendations using two ways of aggregating information: one aggregates the information by selecting the sources randomly, without any criterion, and the other uses the intrinsic characteristics to select the most relevant sources. The results obtained in a case study show that integrating only the sources selected on the basis of their intrinsic characteristics yields better recommendation precision than integrating all the data sources.
Figure 4. Precision of the recommendations using CBF only with the selected sources in the methodology
Figure 5. Precision of the recommendations using CF only with the selected sources in the methodology
6. Acknowledgements
This work was supported by grant DPI2005-09025-C02-02 from the Spanish government, "Cognitive Control Systems".
References
[1] S. Aciar, J. López Herrera, and J. Ll. de la Rosa, 'Fc en un sma para seleccionar fuentes de información relevantes para recomendar', XI Conferencia de la Asociación Española para la Inteligencia Artificial (CAEPIA'05), Workshop de Inteligencia Computacional: Aplicaciones en Marketing y Finanzas, Santiago de Compostela, Espana, (November 2005). [2] S. Aciar, J. López Herrera, and J. Ll. de la Rosa, 'Integrating information sources for recommender systems', Artificial Intelligence Research and Development, IOS Press. VIII Catalan Association for Artificial Intelligence Congress (CCIA 2005), Alghero, Sardinia, Italy, (October 2005). [3] S. Aciar, J. López Herrera, and J. Ll. de la Rosa, 'Sma para la búsqueda inteligente de información para recomendar', I Congreso Español de Informática (CEDI 2005), Simposio de Inteligencia Computacional, SICO'2005 (IEEE Computational Intelligence Society, SC), Granada, Espana, (September 2005). [4] M. Balabanovic and Y. Shoham, 'Fab: Content-based, collaborative recommendation', Communications of the ACM, 40(3), 66–72, (March 1997). [5] H. Lieberman, 'Letizia: An agent that assists web browsing', Proceedings of the International Joint Conference on Artificial Intelligence, Montreal, (1995). [6] G. Salton and C. Buckley, 'Automatic text processing', The Transformation, Analysis and Retrieval of Information by Computer, (1989). [7] G. Salton and M.J. McGill, 'Introduction to modern information retrieval', McGraw-Hill Publishing Company, New York, (1983). [8] U. Shardanand and P. Maes, 'Social information filtering: Algorithms for automating word of mouth', SIGCHI Conference, (1995). [9] B. Smyth and P. Cotter, 'A personalized television listings service', Communications of the ACM, (2000).
Artificial Intelligence Research and Development M. Polit et al. (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press.
Social Currencies And Knowledge Currencies
Claudia Carrillo, Josep Lluis de la Rosa, Araceli Moreno, Eduard Muntaner, Sonia Delfin (1), and Agustí Canals (2)
(1) Agents Research Lab – EASY xarxa IT CIDEM University of Girona, Catalonia
(2) Open University of Catalonia
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
Abstract
This paper describes the state of the art for social currencies and cognitive capitalism, which is used to found a new currency based on knowledge. This new currency better reflects the wealth of nations throughout the world, supports more social economic activity and is better adapted to the challenges of digital business ecosystems. The basic premise of the new currency is a knowledge measurement pattern that is formulated as a new alternative social currency. Therefore, it is a first step contributing to the worldwide evolution towards a knowledge society. It is a currency that facilitates the conservation and storage of knowledge, its organization and categorization, but chiefly its transfer and exploitation. The proposal turns recommender agents into the new currency. An example is presented to show how recommender agents acquire a new use, being transformed into objects of transaction.
Keywords: Social Currencies, knowledge, cognitive capitalism, social exchange, recommender systems, wiki, DBE
1. Introduction
A novel business method which requires new technology development and research will be defined in this paper: the Bank of Wits. This utopia is defined as follows: "a bank where juridical and physical persons deposit their knowledge, what we call the 'wits', and withdraw the knowledge with extra knowledge as compensation". The benefits of such a Bank of Wits are the same as those of the banks of "money", which over the last centuries have facilitated the flow of money to empower trade and business. Nowadays not only money but also knowledge is necessary to do business. For instance, knowledge about markets and consumers is highly valuable, but difficult to obtain, and even more difficult to store, transport or trade. The fact is that knowledge and money tend to depreciate over time, so storing either of them is no longer worthwhile, and it is better to
generate knowledge and money and "make them move". Following the comparison, knowledge may be converted into a new currency, as money is. This paper presents the first step of this project, the state of the art, in five sections as follows. Section 2 describes the state of the art in social currencies. It describes how, in the past, resources such as CO2 have become objects of transaction to enhance sustainability and to generate new services and resources. In addition, this section enumerates the theories of cognitive capitalism, shows that the behaviour of the economy is based on knowledge, discusses the relationship between knowledge and wealth, and describes the theory of monetarization in the world. The first formulation of our proposal is described in Section 3, and illustrative examples of a potential application are given in Section 4. Finally, our conclusions and proposed future work conclude the paper in Section 5.
2. State of the art
This section has three subsections: Social Currencies, which lists different global experiences in the development of payment alternatives and how these differ from traditional payments; Relationship between Knowledge and Wealth, in which we show that knowledge can become a unit pattern of measurement; and Monetarization, which discusses its past and future with knowledge as a currency.
2.1. Social Currencies
From the work of Bernard Lietaer [20], who was a member of the Central Bank of Belgium, it is known that within the present monetary system, complementary monetary systems have existed and will continue to exist. Because these systems with their respective currencies are not replacements for the national currencies, they are known as complementary currencies. They began in the 1980s and were strongly developed during the 1990s from many other preceding experiences of alternative exchange-based economic systems. Diverse means exist to organize an exchange network. In all of these, a social currency of local character is created to measure what is exchanged. The use of this currency allows the exchange to stop being something occasional between friends and to become a stable and organized economic system. The social currency offers a new model of the economy to the communities that have decided to complement their normal economic activities with a local economic system based on solidarity and local abundance [15].
Solidarity: Throughout the world there are many shared experiences with a common approach, and this is one of the main contributions that we want to include in our proposal. Solidarity exists in diverse parallel currencies, mainly social currencies, for example, the diverse banks of time 1 [23], and local exchange systems that operate in
Banks of Time seek to build local economies and communities that reward decency, caring, and a passion for justice by developing, testing, and assisting experiments with a new medium of exchange like "Time" (also known as service credits or time banking). One hour helping others equals one unit of Time Currency.
several countries. Within this category falls the controversial Kyoto Protocol [13], which is explained in some detail below.
The Kyoto Protocol is an international effort to control the injurious effects of human action on the environment, in which ecological aspects are interrelated with economic and global justice. The goal of this protocol is the reduction of six polluting gases that cause global damage to the earth's environment. Europe set a term of seven years from 2005 for companies to change their technologies and compensate for the pollution they cause by cleaning other parts of the world. This means that a developed country, such as Spain, can approach a developing economy, such as Panama, and invest in clean projects there to balance the European emissions and, therefore, the contamination. In the carbon market, two types of products are negotiable: certificates of emission reduction related to projects, and emission quotas. The World Bank administers the basic contributions from companies and governments to buy certificates of emissions and to finance projects that reduce emissions in emerging countries. This is achieved by means of a system called the Mechanism of Cleaner Development; the Kyoto Protocol promotes investment by industrialized countries in these sustainable projects.
Exchange Networks [15]: These use something similar to money: cards, bonds, or tickets similar to those of children's games, but issued more systematically, which is reflected in what is termed heavy or solid paper. These networks offer the communities many benefits because they:
• Regenerate local abilities;
• Create jobs and greater comfort for all;
• Enliven the local economy without external capital;
• Create local credit where needed;
• Reduce outward money flows so that more money remains in the community;
• Create direct and indirect support for companies and local commerce: people return to buy in stores that offer the alternative of paying part of the price in local currency, and in addition they can spend the money saved in this way on more purchases of local goods and services;
• Stimulate local production for local needs;
• Remove the necessity to leave smaller settlements, whose small businesses and old local services are losing custom to the large service and transnational companies;
• Empower manufacturers with limited money, or means, without generating distrust or susceptibility;
• Offer a friendly economic system where there are no financial costs, interest, expenses or penalties, and where neither devaluation nor inflation exists.
2.2. Relationship between Knowledge and Wealth
The advance of knowledge has been intimately bound to the nature of the capitalist market system. Commonly, in the capitalist system, the industrialists propose, the institutions facilitate, the markets decide and knowledge increases. Thus, even though societies progress to a great extent, inequalities arise because of the disparity in access to knowledge.
Features of knowledge: There are basically three [3]:
• Knowledge is personal, in the sense that it originates and resides in people, who assimilate it as a result of their own experience;
• Knowledge can be used repeatedly without being consumed, as happens with physical goods; and
• Knowledge serves as a guide for people's actions, in the sense of deciding what to do at every moment because, in general, that action must improve the consequences, for each individual, of the perceived phenomena (even changing them, if possible).
Some authors state that under today's capitalism, knowledge has become as necessary a factor as work or capital [24], [4]. For that reason, the contribution of Antonella Corsani [5], [6], who states that work generates knowledge just as knowledge generates value, is important.
2.3. Monetarization
However, the relationship between the economy and knowledge is not new. It has existed since the Industrial Revolution, when production systems began to use machines, that is, incorporated science and technology when building machines, and later, as discussed by Taylor, began organizing work scientifically [4]. Knowledge would have no influence on the theory of value if it were no more than a kind of half-finished good that does not increase value but merely conserves and transmits to the production processes the value of the capital and work used to produce it. Nevertheless, things do not happen that way. Neither the theory of value, the history of Marxism, nor the currently dominant freedom can account for the process of transformation of knowledge into value. The use value of knowledge [4] is not the fixed point on which to base its exchange value, as happens with marginal utility in the neoclassical theory of value. In effect, independently of its value to users, in a regime of free competition the exchange value of merchandise whose cost of reproduction is nil tends inevitably to zero. The exchange value [4] of knowledge is thus entirely bound to the practical capacity to limit its free diffusion, that is, to protect by legal means the rights of the author, licensees, or contractors, or to limit the possibility of copying, imitating, recreating, or learning from the knowledge of others. In other terms, the value of knowledge is not the fruit of its scarcity, which would be natural, but of stable limitations, institutional or of access, placed on that knowledge. As regards the transition to cognitive capitalism, the main thesis of Yann Moulier Boutang [4] is that the very nature of value, its form, and the place and modalities of its extraction are a repetition of the above. The transformation is located in a change of the regime of growth, of the technical paradigm or socio-technical regime; it is located in a change of the regime of capitalist accumulation, in the sense of the regulation school, and a change in the relations of production; that is, a transition in capitalism.
3. Formulation of the new currencies Throughout history, knowledge has been generated exclusively by humans, but now machines (e.g., by data mining and through intelligent infoagents) are beginning to generate knowledge; to become vectors of knowledge [19].
We initiated this project relying on knowledge as the measurement pattern of the new alternative social currency, to contribute to the evolution of the world towards a knowledge society. Our currency will be denoted by ¢. In order to characterize our currency and to develop an appropriate economic system, the following stages of knowledge management are considered: capture, conservation, organization, processing and diffusion. Because these stages fit well with the levels of the evolutionary process of currencies in the history of economics, a parallel will be established with each stage, applying to each the techniques and tools necessary to represent it; these stages are: generation, storage, transport and distribution. Matching these last stages, the project will be carried out as follows.
1. Generation: The generation of knowledge is an economic activity like any other. It can be performed by reasoning, discovery, scientific research, the Internet, books, experts, etc. Certain attitudes exist, in addition, that stimulate knowledge; these are reflection and critical thought. At the level of machine learning techniques, data mining processes, such as an interactive model for the extraction of knowledge from large amounts of data, can be used. We turn to machines as generators and vectors of knowledge, whereas previously only humans could generate it. We understand that the creation of this knowledge money (as in the case of CO2) can be regulated so that it is related to the Gross Domestic Product (GDP), with the added complexity of the hundreds of millions of currencies that must be handled.
2. Storage: Once a method is found so that knowledge can be generated by machines, we try to replicate a technological bank to manage the new currency. This is done using expert systems, with established characteristics and rules that allow interactive and innovative operation, in order to have a site for storing the knowledge generated in the previous stage. From this stage, an Interconnection System is in charge of transporting the knowledge to the bank safely.
3. Transport: To transport the knowledge, a common language is required so that understanding is possible, and therefore common ontologies are required. During this stage, we adapt knowledge transport within the bank according to some of the protocols and characterizations of a traditional bank, and other protocols according to the specifications of our new alternative social currency.
4. Distribution: In this stage knowledge will move, because it is in this stage that knowledge loans, the later collection of interest, and the traditional banking transactions take place.
4. Illustrative Examples
4.1. Recommender Systems
To show the operation of the new currency and the new Bank of Wits, we apply them in the area of Database Marketing (Fig. 1), which can have two options. Nowadays, there is a robust market for Customer Intelligence, where companies buy databases containing lists of potential customers, market studies, and other knowledge useful for guiding marketing and
commercial activities of companies. Therefore, assume that a company wants to release a new product to the market and requires knowledge to be gathered on a number of people, thereby being able to determine the number of potential customers. In this case:
1. Generation: There are recommender agents that study every customer. Each monetary unit corresponds to a recommender agent, which knows what every customer likes and dislikes. Company A controls this knowledge and is considered to be the author of the knowledge. For example, envisage that company A accounts for 2000 recommender agents holding 2000 units of knowledge, that is, 2000 ¢.
2. Storage: Company A deposits its knowledge (its money) in Bank B to be appraised and to receive interest of 5%.
3. Transport: At this stage the Interconnection System comes into play, that is, it is in charge of the safe transport of the knowledge towards Bank B.
4. Distribution: Company C asks Bank B for knowledge about 2000 potential customers, that is, 2000 recommender agents, or 2000 ¢, to be used in a direct marketing (database marketing) campaign. The company obtains a yield (it gains customers), and for the use of the knowledge it is committed to return 10% interest.
Fig 1. Recommender Systems
Company A requests the return of its capital, in knowledge units, together with its 5% interest. The Bank, after paying this percentage, retains a margin. This corresponds to its money, in this case the extra knowledge generated by moving the knowledge between companies A and C, which will be reused to give more added value to other corporate customers in future transactions with the Bank. This is an oversimplified example: there is a sequence of actions and concepts such as depreciation, the turnaround time of the knowledge, rates of return and other indicators that are not taken into account. However, we maintain that it is sufficiently representative
of the future use of recommender agents as a new currency and of how a future dedicated bank, which we name the Bank of Wits, may assist in providing the necessary knowledge to companies, while increasing knowledge in parallel with the increasing wealth of the world.
4.2. Wiki-Bank
Another possible application is the Wiki-Bank (Fig. 2), in which the contributions of users are valued in order to compensate them, as authors of knowledge, with citations to their contributions. Therefore, two types of values would exist: the first, "contributions", which would constitute the initial capital of the Wiki-Bank users, and a second value, "citations", which corresponds to the interest that users hope to receive.
1. Generation: In a Wiki-Bank, the knowledge is generated with the initial contributions of users, which would be valued. For example, let a user A enter his knowledge, which may be valued at 100 units of knowledge, that is to say, 100 ¢.
2. Storage: The user hopes to receive in return the same amount of units of knowledge with a surplus of "citations" that corresponds to the interest that the contributed knowledge will generate. In the example, let us consider 5% of the initial contribution.
3. Transport: A certain number of users enter the Wiki-Bank, make their consultations and are interested in the contribution made by A. They use the knowledge and make "citations". In this case, A's knowledge generates 5 citations, that is, 5% of its initial contribution (100 ¢), in other words 5 ¢.
4. Distribution: At the moment the user wants to withdraw his knowledge from the Wiki-Bank, he will receive his initial contribution plus the "received citations"; in the example, he will receive 105 ¢.
Fig. 2 Wiki-Bank
The previous Wiki-Bank example can be considered a "Deposit with Risk", in which users deposit their knowledge hoping that it generates some citations, and when
the deposit period ends, the knowledge may or may not have received citations. In addition, there is the possibility of a "Saving Deposit", in which a user contributes his knowledge and the Wiki-Bank offers him a specific interest in return; that is to say, the contributing user knows in advance the interest he will receive at the end of the period for which he deposited his knowledge.
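As a toy, hedged walk-through of the two examples above (all variable names are ours; the arithmetic simply follows the stated figures: a 2000 ¢ deposit at 5%, a borrower returning 10%, and a 100 ¢ wiki contribution earning 5 citations):

# Bank of Wits example (hypothetical helper names).
deposit = 2000            # ¢ of knowledge deposited by company A
depositor_rate = 0.05     # interest promised to company A
borrower_rate = 0.10      # interest company C commits to return

repaid_by_c = deposit * (1 + borrower_rate)      # knowledge returned to the bank by C
returned_to_a = deposit * (1 + depositor_rate)   # capital plus interest paid back to A
bank_margin = repaid_by_c - returned_to_a        # extra knowledge kept by the bank
print(returned_to_a, bank_margin)                # about 2100 and 100 ¢

# Wiki-Bank "deposit with risk": the contribution plus one ¢ per citation received.
contribution, citations = 100, 5
print(contribution + citations)                  # 105 ¢ returned to the contributing user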
5. Conclusions and future work
The new currency will assist business development because it will make knowledge move. Instead of keeping knowledge in their brains or systems, people will invest their knowledge ('wits') in businesses expecting to gain more knowledge. Knowledge that does not move will, like money, suffer severe depreciation. Businesses need not only money but also knowledge to survive. That is why, in 20 years' time, there will be a huge market of knowledge parallel to that of money, where companies will need to be provided with knowledge to survive (knowledge about their clients, employees, partners, etc.), and where companies will generate extra knowledge that they will wish to invest in other companies, in the same way as money is used currently. Therefore, the results of this project will be the basic, but crucial, research to set the foundations of a new business model that requires new technology and new social thinking. In particular, minor contributions to the state of the art will be the formation of electronic institutions, the deployment of breakthrough methods for data and knowledge privacy, and the fulfilment of more than 10 years of real results in more than 30 companies with around 50 million users, ready to be used all over the European Union and the world.
Acknowledgements
This work was supported by grant DPI2005-09025-C02-02 from the Spanish government, "Cognitive Control Systems".
References
[1] Entrevista a PAUL SAMUELSON realizada por MARIA JOSÉ RAGUÉ ARIAS. El sistema monetario internacional. Biblioteca Salvat de grandes temas. Barcelona. 1973. [2] ALBERT L. MEYERS. Elementos de Economía Moderna. Renta Nacional, Ahorro, Inversión. Plaza y Janés, S.A., Editores. Barcelona. 1973. [3] Andreu, R.; Sieber, S. (2000), "La Gestión Integral del Conocimiento y del Aprendizaje", pendiente de publicación en Economía Industrial. [4] Yann Moulier Boutang. "Capitalismo Cognitivo, Propiedad Intelectual y Creación Colectiva". Traficantes de Sueños. April 2004. [5] Antonella Corsani. «Les logiques de la división internationale du travail dans l'economie de la connaissance» en C. Vercellone (ed.), Le crépuscule du capitalisme industriel?, Paris, La Dispute, 2002. [6] Antonella Corsani. Changement technique et división internationale du travail, Economica, Paris, 1992. [7] M. Mouhoud, "Les logiques de la división internationale du travail dans l'economie de la connaissance" en C. Vercellone (ed.), Le crépuscule du capitalisme industriel?, Paris, La Dispute, 2002. [8] M. Mouhoud, Changement technique et división internationale du travail, Economica, Paris, 1992. [9] R. Herrera y C. Vercellone, "Transformations de la división du travail et general intellect", en C. Vercellone (dir.), Le crépuscule du capitalisme industriel?, Paris, La Dispute, 2002.
[10] K. Marx, Elementos fundamentales para la crítica de la economía política (Grundrisse) 1857-1858, Madrid, Siglo XXI, 1997. [11] F. Maxwell Harper, Xin Li, Yan Chen. An Economic Model of User Rating in an Online Recommender System. [12] Universal Information Ecosystems (UIE). http://www.cordis.lu/ist/fet/uie.htm [13] Protocolo de Kyoto de la Convención Marco de las naciones unidas sobre el cambio climático. [14] Jérôme Blanc. Las Monedas Paralelas: Evaluación y Teorías del Fenómeno. Ciencias Económicas, Universidad Lumière, Lyon [15] Calvo, Guillermo A. Rodriguez, Carlos A. A Model of Exchange Rate Determination under Currency Substitution and Rational Expectations , Journal of Political Economy, vol. 85, june, pp. 617-625. 1977. [16] Girton, Lance. Roper, Don., Theory and Implications of Currency Substitution, Journal of Money, Credit and Banking, vol. 13, n°1, February, pp. 12-30. 1981.1. [17] Agustí Canals. The role of Information in Economics. UOC (Universitat Oberta de Catalunya). Barcelona. Catalonia. Spain. April. 2002. 2. [18] Agustí Canals. The Strategic Management of Knowledge Flows in the Spatial Economy. An Agent-Based Modeling Approach. PhD Thesis. ESADE- Universitat Ramon Llull. November 2004. [19] Alfons Cornella. Infonomía. Barcelona. Catalonia. Spain. 2005 [20] Más Allá de la Codicia y la Escasez: El Futuro del Dinero. Entrevista a Bernard Lietaer por la periodista Sarah van Gelder, editora del periódico Yes, periódico de futuros positivos, EUA, 1998. [21] Gediminas Adomavicius, and Alexander Tuzhilin. Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions. IEEE Transactions on Knowledge and Data engineering, vol. 17, No. 6, June 2005 [22] Specific targeted research project. IST call 5. FP6-2004-IST-5. ONE - Open Negotiation Environment [23] Time Dollar Institute. www.timedollar.org [24] Max Boisot and Agustí Canals. Data, information and knowledge: have we got it right?. Journal of Evolutionary Economics (2004) 14: 43–67
Artificial Intelligence Research and Development M. Polit et al. (Eds.) IOS Press, 2006 © 2006 The authors and IOS Press.
Improving the Team-work in Heterogeneous Multi-agent Systems: Situation Matching Approach
Salvador IBARRA a,1, Christian QUINTERO a, Didac BUSQUETS a, Josep RAMÓN a, Josep Ll. DE LA ROSA a and José A. CASTÁN b
a Agents Research Lab, University of Girona, Spain
b Facultad de Ingeniería, Universidad Autónoma de Tamaulipas, México
Abstract: This paper presents a method called "Situation Matching" that helps to improve cooperative tasks in heterogeneous multi-agent systems. The situation matching (SM) represents a match between the system requirements and the agents' capabilities. In this sense, each agent has a set of information denoted the "Agent Situation", given by means of three parameters: Proximity, Introspection and Trust. The agents are represented by autonomous, cooperative mobile robots. In our approach, the agents have different controllers in order to generate dynamic diversity (heterogeneity) in the system. Such systems can therefore be considered as a team of heterogeneous agents that must work together to fulfill some cooperative tasks. In particular, this paper studies how the heterogeneous agents' performance improves by means of the situation matching. Conclusions show the advantages of our proposal in the improvement of intelligent agents' performance in the robot soccer testbed.
Keywords. Heterogeneous Intelligent Agents, Cooperative Tasks, Team-Work.
1. Introduction
Multi-agent systems (MAS) are computational systems in which two or more agents work together to perform a set of tasks or to satisfy a set of goals. Research in MAS is driven by the assumption that multiple agents can solve problems more efficiently than a single agent does [1]. Moreover, agents may have different capabilities, so that one agent can perform the same task as another but in a different way. Such systems can be represented by means of the "physical agent" paradigm, in which agents take their physical features (e.g. dynamics) into account. In this approach, each physical agent has a different automatic controller, which generates dynamic diversity (heterogeneity) in the system. The multi-agent system can then be regarded as a team of heterogeneous intelligent agents with different capabilities that work together to fulfill cooperative tasks. Coordination mechanisms among the agents are therefore necessary to improve the performance of such systems: they allow the agents to carry out cooperative tasks by improving their interactions and to make sound decisions in the team-work.
1 Correspondence to Salvador IBARRA, University of Girona, Campus Montilivi, Building P4, E-17071. Tel.: +34 972 41 8478; Fax: +34 972 41 8976; E-mail: [email protected]
Our approach is mainly based on the foundations of Electronic Institutions (e-Institutions) [2]. Some e-Institution features resemble human relationships, providing the agents with rules that allow them to cooperate through previously defined roles [3]. In this sense, each scene has an agent-behavior within the multi-agent system. The proactive agent-behavior of the scenes lets the system know the goals of each scene and facilitates the coordination between the agents within a specific scene. This agent-behavior takes three parameters (proximity, introspection and trust) into account in its decision-making structure. With proximity, an agent can take its position in the environment with respect to other agents into account in order to assess its capability to perform a task. Introspection allows the agents to analyze their physical bodies and to know which tasks they can perform according to their physical capabilities [4]; such knowledge can be extracted by the agents using introspective reasoning techniques [5], [6], [7], [8]. Finally, trust gives the agents the ability to make decisions based on the results of past interactions with other agents. From these parameters we have defined eight situation matchings and several case studies in the robot soccer testbed. Robot soccer is currently considered a good research platform for cooperative multi-agent systems in both simulated and real environments: it emulates a soccer game in which the agents must interact in a dynamic, cooperative and competitive environment [9], [10]. This paper shows how intelligent agents can use the proposed situation matching to exploit their heterogeneous skills and improve the team-work performance.
Section 2 presents our approach to generating dynamic diversity from a control-oriented perspective. Section 3 presents the main idea of our coordination mechanism. Section 4 explains the situation matching approach. Section 5 shows experimental results on the robot soccer simulation platform. Finally, conclusions are given in Section 6.
2. Heterogeneous Intelligent Agents
According to [11], an agent is an entity that is situated in some environment and that is capable of solving problems through autonomous actions in order to achieve its goals in that environment. A MAS can thus be formed by a group of intelligent agents with different capabilities that communicate among themselves to perform cooperative tasks. In our approach, heterogeneity refers to the agents' dynamic diversity from a control systems point of view.
2.1. Agents' Skills
Our tests used the robot models of the SimuroSot simulator [12], which facilitates extensive training and testing of this proposal. We designed four different PID controllers with suitable control laws to put our team into practice, obtaining the following classification of the agents: Precise, Disturbed, Fast and Fast-disturbed. Table 1 shows the dependence of each designed physical agent on four selected control design criteria: speediness, precision, persistence and control effort.
speediness: the velocity of the agent's response in reaching a desired goal.
precision: the capacity of the agent to achieve its goals with minimal error.
persistence: the capability of the agent to follow the set point when external signals affect the value of its aims.
control effort: the energy consumed by the agent when pursuing its goals.

Table 1. Definition of physical agents according to the designed controllers (n: great dependence; p: minor dependence)

definition       speediness   precision   persistence   control effort
precise          p            n           n             p
disturbed        p            p           p             n
fast             n            p           n             n
fast-disturbed   n            p           p             n
Figure 1. Spatial evolution of each physical agent
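To make the control-oriented notion of heterogeneity concrete, the sketch below implements four differently tuned discrete PID controllers in Python. It is only an illustration: the gain values, the sampling time and the toy plant model are our own assumptions, since the paper does not report the actual control laws.

# Illustrative sketch: four differently tuned PID controllers as a source of
# dynamic diversity. The gains are hypothetical, not the paper's actual designs.
class PID:
    def __init__(self, kp, ki, kd, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical tunings for the four agent profiles of Table 1.
CONTROLLERS = {
    "precise":        PID(kp=1.2, ki=0.4, kd=0.6),
    "disturbed":      PID(kp=0.6, ki=0.1, kd=0.0),
    "fast":           PID(kp=3.0, ki=0.8, kd=0.2),
    "fast-disturbed": PID(kp=3.5, ki=0.0, kd=0.0),
}

if __name__ == "__main__":
    # Drive a trivial first-order plant towards a set point with each controller.
    for name, pid in CONTROLLERS.items():
        x = 0.0
        for _ in range(200):
            u = pid.step(setpoint=1.0, measurement=x)
            x += 0.02 * (u - x)          # crude plant model, illustration only
        print(f"{name:15s} final value: {x:.3f}")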
3. Coordination Mechanisms
Coordination among agents is a key challenge in improving the performance of a heterogeneous agent team in cooperative environments. To address it, we have developed a method based on the foundations of Electronic Institutions [2] in which the agents hold meetings (scenes) in specific zones of the environment. Each scene has a set of well-defined goals related to some cooperative task, and the agents within a scene must interact and coordinate in order to work jointly. A scene is activated according to the target's location in the environment. Scenes then select agents through a matching process based on the three proposed parameters (Proximity, Introspection and Trust). This process is called "Situation Matching" (SM), and it consists of deciding which of these parameters to consider. The parameters aim to give the scenes a higher level of awareness, and a scene can combine them
according to its needs (e.g. task requirements) in order to perform the matching process with the agents. This process matches the parameters selected by the scene (the SM) against the "Agent Situation" (AS) of each agent, where an agent's AS is its own perception of the three values (proximity, introspection and trust) with respect to the environment. The scene selects an SM and sends this information to the agent team; every agent computes its AS and sends it back to the scene; the scene then analyzes all the ASs and selects the most suitable agents. Once inside the scene, the agents use the parameters proposed by the scene to coordinate among themselves and perform the proposed task, thereby achieving the scene's goals. The calculation of the three parameters is reviewed below.

3.1. Proximity
Proximity is a value related to the distance between the current location of an agent and the location of the desired target. This knowledge refers to the environment and represents the physical situation of each agent in it. The Proximity Coefficient (PC) represents the distance between a given agent and the current goal in the zone under study. Eq. (1) shows the PC calculation:

PC_{k,i} = 1 - \frac{d_{k,i}}{d_{\max}(i)}, \qquad PC \in [0,1]    (1)

where the agent with the largest PC is the best-placed agent, d_{k,i} is the distance between agent k and the target in scene i, and d_{\max}(i) is the maximal distance between the agents and the target in scene i. Eq. (2) shows the calculation of d_{\max}(i), where p is the total number of agents in the system:

d_{\max}(i) = \max\left(d_{1,i}, \ldots, d_{p,i}\right)    (2)
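A minimal Python sketch of Eqs. (1)-(2) is given below; the positions and the helper function name are illustrative assumptions.

# Sketch of Eqs. (1)-(2): proximity coefficient of agent k for scene i.
import math

def proximity_coefficient(agent_positions, target, k):
    """agent_positions: list of (x, y); target: (x, y); k: index of the agent."""
    distances = [math.dist(p, target) for p in agent_positions]
    d_max = max(distances)                       # Eq. (2)
    if d_max == 0:
        return 1.0                               # all agents sit on the target
    return 1.0 - distances[k] / d_max            # Eq. (1), PC in [0, 1]

# Example: the agent closest to the ball gets the highest PC.
agents = [(0.0, 0.0), (2.0, 1.0), (5.0, 5.0)]
ball = (1.0, 1.0)
print([round(proximity_coefficient(agents, ball, k), 2) for k in range(len(agents))])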
3.2. Introspection
Knowledge about the physical agents' bodies (introspection) is obtained by representing them on a base of capabilities. This information can be extracted by the agents using introspective reasoning techniques [4], [5], [6], [7], [8] and handled using capability management techniques [4], [5], [7]. These approaches help to guarantee sound commitments and thereby improve the achievements of the multi-agent system, because the physical agents develop better self-control and improve their performance in coordinated tasks. The Introspection Coefficient (IC) represents an agent's knowledge about its physical capability to perform a proposed task. The introspection process is carried out using neural networks that take into account the environment conditions (e.g. agents' locations, target locations) and task requirements (e.g. reach the target, avoid obstacles). IC ∈ [0,1], and a larger IC indicates better expected performance. In particular, a structure of two interconnected neural networks has been proposed. The first neural
network takes into account the agent's initial position and the position of a proposed point in order to estimate the time the agent needs to reach that target. The second neural network uses the time estimated by the first network together with the desired final angle to assess the capability of each agent to perform a proposed task within the given scene.

3.3. Trust
Trust represents the social relationship among agents. The Trust Coefficient (TC) takes into account the results of an agent's past interactions with other agents, and the agents decide how to perform the proposed task based on it. Eq. (3) shows the TC update when the aim is reached, and Eq. (4) shows the update when the aim is not reached:

TC_{k,i} = TC_{k,i} + \Delta A_{i,a}    (3)

TC_{k,i} = TC_{k,i} - \Delta P_{i,b}    (4)

where TC ∈ [0,1] and a larger TC indicates a better agent; \Delta A_{i,a} and \Delta P_{i,b} are, respectively, the awards and punishments given by scene i, with a ranging from 1 to Q(i), the number of awards in scene i, and b ranging from 1 to R(i), the number of punishments in scene i.
Finally, the Agent Situation (AS) is obtained from Eq. (5). An agent may consider only the coefficients required by the system's needs; any parameter that is not considered takes a neutral value (represented in the calculation by 1):

AS_{k,i} = PC_{k,i} \cdot IC_{k,i} \cdot TC_{k,i}    (5)
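The following Python sketch reproduces the update rules of Eqs. (3)-(4) and the combination of Eq. (5); the award and punishment magnitudes and the clipping that keeps TC inside [0,1] are assumptions made for illustration.

# Sketch of Eqs. (3)-(5): trust update and the combined Agent Situation.
def update_trust(tc, aim_reached, award=0.1, punishment=0.1):
    """Eq. (3)/(4); clipping to [0, 1] is our assumption to respect TC's range."""
    tc = tc + award if aim_reached else tc - punishment
    return min(max(tc, 0.0), 1.0)

def agent_situation(pc, ic, tc, use_pc=True, use_ic=True, use_tc=True):
    """Eq. (5); a parameter that is not considered takes the neutral value 1."""
    return (pc if use_pc else 1.0) * (ic if use_ic else 1.0) * (tc if use_tc else 1.0)

tc = 0.5
tc = update_trust(tc, aim_reached=True)                       # the agent fulfilled its aim
print(agent_situation(pc=0.8, ic=0.6, tc=tc, use_ic=False))   # e.g. SM5: proximity + trust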
4. Situation Matching Approach
In our approach, a scene is a meeting among agents held to perform coordinated tasks in the robot soccer testbed; the meeting serves to fulfill the aims of a given zone. Each zone has a defined set of actions which determine the role of each agent within the active scene. We have defined scene activation by means of the target's location in the environment (in our case, the current ball position). The roles are the behaviors of the agents in each scene, and they have been designed according to the features of the active scene (e.g. agents' locations, number of agents, etc.). Transitions between zones must be studied in order to update the agents' roles according to the Agent Situation values of every agent in each zone. A transition between zones takes place when the goals of the active scene are achieved and the system identifies a new target in another zone of the environment. This process is performed through coordination between the zones involved, since the agents must know both the zone they are moving to and their physical situation in that new zone.
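As an illustration of the activation rule, the sketch below maps the ball position to one of the three zones used in Section 5 (defense, mid-field, attack); the field length and the zone boundaries are hypothetical values, not taken from the paper.

# Sketch of scene activation: the active scene is the zone containing the ball.
FIELD_LENGTH = 220.0   # assumed pitch length in simulator units

def active_zone(ball_x):
    """Map the ball's x coordinate to one of the three zones of Section 5."""
    if ball_x < FIELD_LENGTH / 3:
        return "defense"
    if ball_x < 2 * FIELD_LENGTH / 3:
        return "mid-field"
    return "attack"

print(active_zone(35.0), active_zone(110.0), active_zone(200.0))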
From the combinations of the three decision parameters, eight situation matchings (SM) have been derived; Table 2 shows the defined SMs.

Table 2. Classification of the Situation Matchings (SM) (0: not considered; 1: considered)

SM    proximity   introspection   trust
SM0   0           0               0
SM1   0           0               1
SM2   0           1               0
SM3   0           1               1
SM4   1           0               0
SM5   1           0               1
SM6   1           1               0
SM7   1           1               1
The scenes use the selected SM to perform the agent selection process. The active scene analyzes its goals and proposes its SM; the agents then calculate their AS and communicate it to the scene, and the scene admits the agents with the highest AS according to the SM. Inside the scene, the agents use the Agent Situation values with which they entered to perform task allocation, adopting a role in order to fulfill the goals of the active scene. The agent-behavior of the scene coordinates with the agent team to perform the current tasks according to the scene's needs and the agents' capabilities. In addition, the scene chooses the most suitable agent to execute the main task, while the other agents perform a set of less critical tasks.
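A compact sketch of this selection process is given below: the scene announces an SM, each agent reports the AS computed with exactly those parameters, and the scene admits the best-ranked agents. The agent data and the number of admitted agents are illustrative assumptions.

# Sketch of the Section 4 selection process under a given SM (Table 2 flags).
def agent_situation(pc, ic, tc, sm):
    """sm is a triple of flags (proximity, introspection, trust)."""
    use_pc, use_ic, use_tc = sm
    return (pc if use_pc else 1.0) * (ic if use_ic else 1.0) * (tc if use_tc else 1.0)

def select_agents(agents, sm, slots=2):
    """Rank agents by AS under the scene's SM and admit the top `slots` of them."""
    ranked = sorted(agents,
                    key=lambda a: agent_situation(a["pc"], a["ic"], a["tc"], sm),
                    reverse=True)
    return ranked[:slots]

team = [
    {"name": "precise",        "pc": 0.4, "ic": 0.9, "tc": 0.8},
    {"name": "disturbed",      "pc": 0.7, "ic": 0.3, "tc": 0.5},
    {"name": "fast",           "pc": 0.9, "ic": 0.6, "tc": 0.6},
    {"name": "fast-disturbed", "pc": 0.8, "ic": 0.4, "tc": 0.4},
]
sm7 = (True, True, True)   # all three parameters considered
print([a["name"] for a in select_agents(team, sm7)])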
5. Experimental Results
In the robot soccer testbed, we used each of the agents designed in Section 2 to form a team of four heterogeneous agents, and we segmented the environment into three zones (defense, mid-field, attack). The team was tested in two different experiments. In the first experiment, our team competed against an opposing team of homogeneous agents, playing thirty games for every situation matching shown in Table 2; the initial values of the coordination parameters were changed randomly at the beginning of each game. In the second experiment, a league of twenty-eight games was played in which teams using different situation matchings competed among themselves. In all the games of both experiments, the scenes used only one SM at a time. Table 3 shows the results of both experimental phases. In the first case (a), system performance improves when our agents take the three proposed parameters into account as coordination mechanisms: SM7 (all parameters considered) shows a better average (improvement rate: +81%) than SM0 (no parameter considered). The second case (b) presents the results of the second experiment, which show that when our team uses all the parameters jointly against teams with other parameter selections,
the system performance increases further. In this case, SM7 (the best case) shows a better average (improvement rate: +92%) than SM0 (the worst case).

Table 3. a) Results of the first experiment; b) results of the second experiment (WG: Won Games)

a)
Rank   Situation Matching   WG   Average (%)
1      SM7                  21   70
2      SM3                  16   55.3
3      SM6                  14   46.7
4      SM5                  12   40
5      SM4                  12   40
6      SM2                  10   33.3
7      SM1                  9    30
8      SM0                  4    13.3

b)
Rank   Situation Matching   WG   Average (%)
1      SM7                  25   89.3
2      SM6                  21   75.0
3      SM3                  19   67.9
4      SM5                  16   57.1
5      SM2                  13   46.4
6      SM4                  9    32.1
7      SM1                  7    25.0
8      SM0                  2    7.1
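The improvement rates quoted above (+81% and +92%) are consistent with measuring the gap between the best and the worst average relative to the best case; this reconstruction is our own reading rather than a formula stated in the text:

\[
\frac{\bar{A}_{SM7}-\bar{A}_{SM0}}{\bar{A}_{SM7}}:\qquad
\frac{70-13.3}{70}\approx 0.81, \qquad
\frac{89.3-7.1}{89.3}\approx 0.92 .
\]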
6. Conclusions
This work presents a new coordination mechanism that helps to improve coordination in heterogeneous multi-agent systems by means of intelligent behaviors in the physical agents. The approach relies on three parameters (proximity, introspection and trust) to situate the coordination mechanisms for heterogeneous agent teams. We emphasize the agent-behavior given to the scenes, which makes it possible to regard them as part of the agent team. Furthermore, our contribution describes combinations of the three parameters that generate the Situation Matchings and shows how the performance of the cooperative team changes with the parameters considered; in particular, the best performance is obtained when our team takes all the parameters into account in its decision-making structure. At the moment, in all the experiments the scenes used only one SM at a time. Mixing SMs within the same experiment is therefore an interesting direction to study, so as to take advantage of a system able to select the best behavior in a given scene according to the current goals; this is a challenge we intend to address in the future. In addition, some multi-agent approaches include a behavior-oriented environment to achieve cooperative tasks. For instance, [1] presents an architecture for behavior-based agents that provides a distributed planning capability with task-specific mechanisms to perform cooperative joint planning and communication in heterogeneous multi-robot systems. That architecture expresses the behavior of a system through two modules which can be related to our Situation Matching approach; we consider that it fits SM6. In this sense, considering the
remaining parameter (Trust), as in SM7 (or sometimes SM3), would improve the team-work performance of that architecture in cooperative environments. Finally, we plan to implement the main contribution of this paper in a real robot soccer testbed in order to extrapolate and corroborate the results obtained. It would also be interesting to compare this coordination mechanism with other techniques, so as to evaluate the usefulness and advantages of our proposal; at present we are studying how to take further advantage of this approach.
Acknowledgements
This work was supported by grant DPI2005-09025-C02-02, "Cognitive Control Systems", from the Spanish government.
References
[1] D. Jung and A. Zelinsky, An Architecture for Distributed Cooperative-Planning in a Behaviour-based Multi-robot System, Journal of Robotics & Autonomous Systems (RA&S), special issue on Field & Service Robotics, vol. 26, 1999, pp. 149-174.
[2] M. Esteva, J.A. Rodríguez, C. Sierra, J.L. Arcos, On the Formal Specification of Electronic Institutions, in: Agent Mediated Electronic Commerce, The European AgentLink Perspective, Springer-Verlag, 2001, pp. 126-147.
[3] M. Luck, P. McBurney, C. Preist, Agent Technology: Enabling Next Generation Computing, Roadmap for Agent Based Computing, ISBN 0854 327889, 2003.
[4] C.G. Quintero, J.Ll. de la Rosa, J. Vehí, Studies about the Atomic Capabilities Concept for Linear Control Systems in Physical Multi-Agent Environments, 6th IEEE International Symposium on Computational Intelligence in Robotics and Automation, Catalog Number 05EX1153, ISBN 0-7803-9355-4, 2004, pp. 727-732.
[5] C.G. Quintero, J.Ll. de la Rosa, J. Vehí, Physical Intelligent Agents' Capabilities Management for Sure Commitments in a Collaborative World, Frontiers in Artificial Intelligence and Applications, IOS Press, ISBN 1 58603 466 9, ISSN 0922-6389, 2004, pp. 251-258.
[6] C.G. Quintero, J. Zubelzu, J.A. Ramon, J.Ll. de la Rosa, Improving the Decision Making Structure about Commitments among Physical Intelligent Agents in a Collaborative World, in: Proc. of the V Workshop on Physical Agents, ISBN 84-933619-6-8, Girona, Spain, 2004, pp. 219-223.
[7] C.G. Quintero, J.Ll. de la Rosa, J. Vehí, Self-knowledge based on the Atomic Capabilities Concept. A Perspective to Achieve Sure Commitments among Physical Agents, 2nd International Conference on Informatics in Control, Automation and Robotics, Barcelona, Spain, 2005.
[8] J. Zubelzu, J.Ll. de la Rosa, J.A. Ramon, C.G. Quintero, Managing Heterogeneity in a Robot Soccer Environment, Frontiers in Artificial Intelligence and Applications, IOS Press, ISBN 1 58603 466 9, ISSN 0922-6389, 2004, pp. 317-322.
[9] J. Johnson, J.Ll. de la Rosa, J.H. Kim, Benchmark Tests in the Science of Robot Football, in: Proc. IEEE of Mirosot-98, R.J. Stonier (ed.), Univ. Central Queensland, Paris, 1998.
[10] J. Johnson, Robotics in the Evolution of Complexity Science, Department of Design and Innovation, The Open University, Milton Keynes, UK, 2004.
[11] M. Wooldridge, An Introduction to MultiAgent Systems, John Wiley & Sons Ltd, ISBN 0-471-49691-X, August 2002.
[12] Federation of International Robosoccer Association (FIRA). SimuroSot simulator available from the web: http://www.fira.net/soccer/simurosot/overview.html.
WIKIFAQ: Obtaining Complete FAQs
Araceli MORENO a,1, Claudia CARRILLO a, Sonia DELFIN a, Eduard MUNTANER a, Josep Lluis DE LA ROSA a
a Agents Research Lab – EASY xarxa IT CIDEM, University of Girona, Catalonia
Abstract: In this paper we present research into obtaining complete FAQs that provide users with satisfactory information. To achieve this we join FAQ technology with the emerging wiki technology to obtain a tool we call WikiFAQ, which includes all the basic elements of a FAQ combined with the properties of wikis, and in which we include a set of Knowledge Contribution Methods (KCM) based on bidding and auctioning systems, and a reward mechanism (RM) based on points as in trust and reputation systems. We use Citizen Information Services (CIS) in local governments as an application domain for WikiFAQ because they are a rich source of diverse and dispersed users with direct and relevant experience.
Key words: FAQ, Wiki, KCM, bids, auctions, trust and reputation, compensation, recognition, intelligent agents, citizen information service.
1. Introduction
The goal of FAQs is to provide information to users in a certain domain. They are traditionally organized as lists of responses provided and checked by experts. We believe users are the real domain experts, and that is why we value the knowledge they have gained from experience. Through this approach we attempt to obtain information from the users themselves to create more complete FAQs that satisfy their information needs. To achieve our objective, we propose a tool called WikiFAQ that combines FAQ technology with the emerging wiki technology. While maintaining one of the inherent properties of a wiki, we introduce a set of Knowledge Contribution Methods, which we call KCMs, based on bidding and auctioning systems; these support trust and reputation mechanisms with extrinsic motivation wherever compensation based on points and rewards is used. To demonstrate the feasibility of our proposal, we present a case study in which we apply intelligent agents to the domain of local government Citizen Information Services (CIS). This article is divided into six sections. Section 2 deals with FAQs and their weaknesses and characteristics, discusses the advantages and disadvantages of wikis, and presents the virtual assistants created to work with government Citizen Information Services.
1 Correspondence to: Araceli MORENO, University of Girona, Campus Montilivi, Building PIV, E-17071. Tel: +34 972 41 8482; Fax: +34 972 41 8976; e-mail: [email protected]
Section 3 provides details about the WikiFAQ tool, the functioning of the KCMs and the types of users. Section 4 explains the case study and the results obtained. Section 5 presents conclusions and, finally, Section 6 is concerned with future work.
2. FAQ’s, Wikis and CIS Virtual Assistants 2.1. Weaknesses and characteristics of FAQs The purpose of a FAQ in this domain is to record group consensus about a shared question and make the answer available to everyone [1]. FAQs are traditionally organized as ordinary lists of answers with certain deficiencies [2]. Their weaknesses stem from them not allowing the user any opportunity to ask a question, the information provider not knowing the actual questions, the list being “poorly” organized and the possibility of a list of FAQs within a FAQ. In [3] the following characteristics are mentioned: they are designed to recover very frequent, popular and extremely reusable question-answer pairs, called QA pairs; the QA pairs are generally maintained and periodically posted on the Internet; and the QA pairs are usually provided and checked by domain experts. 2.2. Strengths and Weaknesses of Wikis We have obtained the following definition of a wiki in [4]: “A wiki is a collection of hypertext pages that can be visited and edited by anybody (although in some cases user registration is required) at any moment.” Wikis are tools constructed with emerging technology and, therefore, only the behavior of the users and the content of their information have been studied. In agreement with [5] about the best known wiki websites, we study Wikipedia to show the strengths and weaknesses of the vast majority of wikis. Its strengths are rooted in it being an encyclopedia whose contents are free [6] and noncommercial, offering point-to-point communication components within a working framework, creating a new way of acquiring knowledge (these 4 characteristics are cited in [7]), promoting active participation among its members [8, 7, 9], allowing participants to assume different roles [5, 10, 9], new forms of cooperative work [11], low resource costs [12, 7, 13, 9] and links to internal and/or external articles [5]. Among its weaknesses we can mention the differences in writing style [14], its unstable and erratic maintenance [15], inappropriate, offensive or incomplete content provided by participants whose actions might or might not be intentional [16], legal problems of authorship resulting from participants who are free to post information [5] and conflicts among the Wikipedia editors themselves [16, 10].
2.3. Study of the CIS Virtual Assistants
To study virtual assistants related to local government CIS, we take a look at Sara, who is working in the city of Malmö2 (Sweden), at HOPS3, a project to implement a voice-controlled public platform that takes advantage of the latest Information and Communications Technologies and provides European users with access to nearby governments, and at iSAC4, whose main objective is to provide technology capable of improving government citizen information services.
3. WikiFAQ Our proposal is to create complete FAQs in which the information providers are the users themselves, who are knowledgeable about the domain through their own experience. The emerging wiki technology provides a powerful tool with which information can be obtained from users, considered by us to be the real domain experts, and can be treated for subsequent use by other users who need it. This approach has encouraged us to develop WikiFAQ. In Figure [1] we present a conceptual diagram of the elements involved in its creation.
Figure 1. WikiFAQ conceptual structure.
WikiFAQ is made up of emerging wiki and FAQ technologies. An interface presented as part of the wiki technology is divided into two sections: 1. the Questions area, where the users ask questions; and 2. the Contributions area, where responses to the users are posted. If the answer cannot be found in the FAQ, the user who knows the answer uses one of the KCMs to respond to it; if the answer is in the FAQ, the user is able to edit the response, if appropriate. The part concerning the FAQ technology is made up of a spelling dictionary, a collocation dictionary and the search engine. Finally, there is the context database and the WikiFAQ community database, whose information is related with user information and the WikiFAQ settings.
2 http://www.malmo.se/
3 http://www.bcn.es/hops/index.htm
4 http://130.206.126.131:8088/isac/
3.1. Knowledge Contribution Methods (KCMs)
3.1.1. Simple Interaction Method
This can be used by all users, although only one user at a time can access the method. The Anonymous user is the only one that does not receive the points offered for responding to the question. The rest of the users must offer a certain quantity of points when responding.
3.1.2. Bidding Method
Only Registered and Expert users are allowed to access this method. Participants must offer their answer only once, bidding points. After a certain amount of time determined by WikiFAQ has passed, the winning participant is chosen at random.
3.1.3. Auction Method
This functions like an English auction, where the participants raise bids to obtain the item being auctioned. The method is divided into two types: the Individual Auction, where participants individually raise bids as many times as they think necessary within a period of time determined by WikiFAQ; and the Group Auction which, while similar to the Individual Auction, differs from it in that some participants can combine their points and bid for the same answer. The winning group of participants divides the points won equally.
3.1.4. The users
- Anonymous. They are not registered in WikiFAQ. Their activities are consulting, asking and answering. They do not receive rewards for their participation and can only use the Simple Interaction method.
- Registered. They are registered in WikiFAQ. Their activities are consulting, asking and answering. They obtain rewards for consultations of their answers and for winning when participating in a KCM. They can use any KCM.
- Expert. They possess the same permissions as registered users and, in addition, are allowed to edit. They also receive rewards for editing.
There are WikiFAQ Administrators, but they are not considered to be users for the following reason: an Administrator is in charge of maintaining the WikiFAQ and of conferring permits for the registered users to become expert users. The Administrator is elected by the WikiFAQ users by vote. In order to be candidates, they must be registered or expert users. Figure [2] shows the relation between users and the KCM.
Figure 2. Relation between users, KCM and WikiFAQ.
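A small Python sketch of the permission scheme and of the winner-selection rules of the KCMs described above follows. It keeps what the text makes explicit (random winner in the bidding method, equal split in the group auction) and marks everything else (data shapes, how the reward attached to a question is set) as assumptions.

# Sketch of user permissions and KCM winner selection; details are assumed.
import random

PERMISSIONS = {
    "anonymous":  {"simple"},
    "registered": {"simple", "bidding", "auction"},
    "expert":     {"simple", "bidding", "auction"},   # experts may also edit
}

def can_use(user_type, method):
    return method in PERMISSIONS[user_type]

def bidding_winner(bids):
    """bids: {user: points offered}. Each user bids once; after the deadline the
    winner is chosen at random among the bidders (Section 3.1.2)."""
    return random.choice(list(bids))

def group_auction_winner(groups, reward):
    """groups: {name: (members, pooled_bid)}. The highest pooled bid wins; the
    reward attached to the question is split equally among the winning members
    (our reading of 'divides the points won equally')."""
    members, _ = max(groups.values(), key=lambda g: g[1])
    return {user: reward / len(members) for user in members}

print(can_use("anonymous", "bidding"))                 # False
print(bidding_winner({"alice": 10, "bob": 25}))
print(group_auction_winner({"g1": (["alice", "bob"], 40),
                            "g2": (["carol"], 30)}, reward=12))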
4. Case study and results
In each KCM 10 questions were asked randomly. Table 1 shows the elements involved in the case study. To assess their contributions, we consider all the agents to be registered and expert users. The 4 agents participate in any KCM under the same conditions.

Table 1. Elements of the case study in the SAC.

Quantity       Elements                           Description
4 agents       Agent1, Agent2, Agent3, Agent4     Number of agents that will emulate the citizens
5 questions    A, B, C, D, E                      Identifiers of the questions asked by a citizen
5 answers      a, b, c, d, e                      Identifiers of the answers given by the citizens
4 scenarios    The negotiation interface          One of the KCMs: Simple Interaction, Bids, Individual Auction and Group Auction
In Figure [3] we show the scenario where the question and the points offered to the winning agent, as well as the contributions (answer and points) of the agents, are presented. We consider it valid that users can initiate their contributions with 0 points.
Figure 3. Scenario in which the agents participate in a KCM.
Each agent began with the amount of points shown in Table 2. The results of the questions are presented in Figure 4.

Table 2. Initial points of each agent.

Agent 1   Agent 2   Agent 3   Agent 4
154       63        798       210
The first chart of Figure [5] corresponds to the Simple Interaction method. In it we can see that the agents increased their amount of points because their negotiations were successful. Agent 2 increased his point amount 4 times because he participated in 50% of the questions. In the bidding method chart, Agent 2 continues as the most participative with a consequent increase in points. In the Individual Auction method chart, 2 agents have approximately the same number of participations and the same increase in points. Finally, in the Group Auction chart, we can observe that Agent 3 was more participative and his points increased. Consequently, the results indicate to us that a greater participation of the agents in the WikiFAQ methods leads to an increase in points. Nevertheless, we can see that the participation of Agent 1 in the Group Auction method was entirely in support of Agents 2 and 4, resulting in a decrease in his point amount. Agents 4 and 3 also made helping contributions towards other agents, but their points did not decrease to a considerable degree.
Figure 4. Record of the participations of Agents in the KCM.
Figure 5. Behavior of the agents in each KCM.
5. Conclusions According to the results obtained, the behavior of the agents was positive in each negotiation except in the Group Auction where one agent decreased his initial amount of points by supporting other agents and not receiving any compensation. This demonstrates that some agents will support another agent, in the hope of posting the answer that they consider correct in WikiFAQ. We are aware that wikis are novel tools with very little scientific research behind them, and that their main controversial aspect is, at the same time, their most appealing property: the freedom to edit. This is due to the belief that the users themselves, in different situations, can include information that is not reliable. Although such a situation cannot be solved completely, according to research carried out into the behavior of users and the information contained in wikis from different domains, favorable results have been obtained. The wiki we chose as a point of reference in our research work is Wikipedia, which has in turn provided a reference for all the research work into the behavior of this new tool. In addition, we cannot forget the Linux operating system, which has a very similar philosophy to that of Wikipedia. Regarding the knowledge contribution methods in which bids and auctions are used, and the reward method based on points, we have also taken into account the disadvantages of their application resulting from their possible incorrect use by WikiFAQ users. There are, however, companies that are considered to be successful [17] currently using these systems on the Internet with positive results. In the work carried out by [15], three hypotheses are presented to explain the success of Wikipedia. Finally, we conclude that the WikiFAQ tool provides more complete FAQs through user participation.
6. Future work
Trust and reputation mechanisms. Establish appropriate trust and reputation models in WikiFAQ to obtain a guide that assists users in their decision making during interaction in the Group Auction model and, in that way, obtain ever more correct and complete FAQs, as well as to provide support for users when they vote to elect the WikiFAQ administrators.
Citation auction. WikiFAQ can be configured to be used as a tool in calls for conference papers. The users are those interested in participating in the conference, and the points are the number of citations they have obtained for their published articles. Those interested in participating in the conference can auction off a number of their citations as a conference guarantee. With this mechanism the conference directors avoid article reviews.
Acknowledgements We are grateful to the Autonomous University of Tamaulipas for the support granted to our project and to the Spanish government for their collaboration (DPI2005-09025-C02-02 “Cognitive Control Systems”).
References
[1] Burke, R., Hammond, K., Kulyukin, V., Lytinen, S., Tomuro, N., and Schoenberg, S. (1997). Natural Language Processing in the FAQ Finder System: Results and Prospects. In Papers from the 1997 AAAI Spring Symposium on Natural Language Processing for the World Wide Web, Stanford University, California.
[2] Sneiders, E.: Automated FAQ Answering: Continued Experience with Shallow Language Understanding. In: Question Answering Systems. Papers from the 1999 AAAI Fall Symposium, November 5-7, North Falmouth, Massachusetts, USA. Technical Report FS-99-02. AAAI Press (1999) 97-107.
[3] Moldovan, D., Pacca, M., Harabagiu, S., and Surdeanu, M. (2003). Performance issues and error analysis in an open-domain question answering system. ACM Trans. Inf. Syst. 21, 2 (Apr. 2003).
[4] Wikipedia. http://es.wikipedia.org/wiki/Wikis.
[5] Aronsson, Lars (2002). Operation of a Large Scale, General Purpose Wiki Website: Experience from susning.nu's first nine months in service. Paper presented at the 6th International ICCC/IFIP Conference on Electronic Publishing, November 6-8, 2002, Karlovy Vary, Czech Republic.
[6] Lih, Andrew. Wikipedia as Participatory Journalism: Reliable Sources? Metrics for evaluating collaborative media as a news resource. 5th International Symposium on Online Journalism. University of Texas, Austin, United States, April 16-17, 2004.
[7] Ma, Cathy (2005). What Makes Wikipedia So Special? The Social, Cultural, Economical Implications of the Wikipedia. Paper submitted to Computers and Writing Online 2005 hosted by Kaironews.
[8] Cedergren, Magnus. Open content and value creation. First Monday, volume 8, number 8 (August 2003).
[9] Stvilia, B., Twidale, M. B., Gasser, L., Smith, L. C. (2005). Information quality discussions in Wikipedia. Technical Report ISRN UIUCLIS--2005/2+CSCW.
[10] Voss, J. Measuring Wikipedia. In Proceedings of the 10th International Conference of the International Society for Scientometrics and Informetrics, 2005.
[11] Bryant, S., Forte, A. and Bruckman, A. (2005). Becoming Wikipedian: Transformation of participation in a collaborative online encyclopedia. In: Proceedings of the GROUP International Conference on Supporting Group Work.
[12] Ciffolilli, A. (2003): Phantom authority, self-selective recruitment and retention of members in virtual communities: The case of Wikipedia. First Monday, 8 (12), December 2003.
[13] Stalder, F. and Hirsh, J. Open Source Intelligence. First Monday, volume 7, number 6 (June 2002).
[14] Emigh, W. and Herring, S.C. Collaborative Authoring on the Web: A Genre Analysis of Online Encyclopedias. In: Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS '05), 2005, pp. 99a-99a.
[15] Viégas, F. B., Wattenberg, M., Dave, K. (2004). Studying cooperation and conflict between authors with history flow visualizations. Paper presented at Human Factors in Computing Systems, Vienna, Austria.
[16] Kolbitsch, J. and Maurer, H. (2005): Community Building around Encyclopaedic Knowledge. In: Journal of Computing and Information Technology, Volume 13.
[17] ebay: http://www.ebay.com/, qxl.com: http://www.qxl.co.uk/, epinions.com: http://www.epinions.com/, BizRate: http://www.bizrate.com/, Amazone: http://www.amazone.com, tripadvisor: http://www.tripadvisor.com/.
Designing a Multi-Agent system to simulate scenarios for decision-making in river basin systems
Thania RENDÓN-SALLARD a, Miquel SÀNCHEZ-MARRÈ a, Montserrat AULINAS b, Joaquim COMAS b
a Knowledge Engineering and Machine Learning Group, Universitat Politècnica de Catalunya, c/ Jordi Girona 1-3, E-08034 Barcelona, Spain. {trendon,miquel}@lsi.upc.edu
b Laboratori d'Enginyeria Química i Ambiental (LEQUIA), Universitat de Girona, Campus de Montilivi s/n, 17071 Girona, Catalonia, Spain. [email protected], [email protected]
Abstract. Environmental problems tend to be very intricate, and traditional software approaches cannot cope with this complexity when developing an environmental system. Multi-Agent Systems (MAS) have the ability to deal with complex problems, so we propose the development of a MAS for supporting decision-making in a river basin system. With this proposal, we intend to provide feasible solutions at catchment scale through the modelling and simulation of different scenarios in a river basin system.
Keywords. Multi-Agent Systems, Decision Support, River Basin management
Introduction
River basins are important social, economic and environmental units. They sustain ecosystems, which are the main source of water for households, agriculture and industry. Due to population growth, industry and overexploitation, the demands made on a river basin are increasing while the capacity of the basin to meet these demands is decreasing. The protection of all surface waters and groundwaters must therefore be assured in both quality and quantity. The best way to fulfill these requirements is with a management system at catchment scale that integrates all the water systems involved (sewer system, Waste Water Treatment Plants and river) [1, 2]. The management of river basins involves many interactions between physical, chemical and biological processes, so these systems become very intricate. Some of the problematic features found in the river basin domain are intrinsic instability, uncertainty, imprecision of data, approximate knowledge and vagueness, huge quantities of data, heterogeneity and different time scales, to name a few [3]. This paper is organised as follows. In §1 we portray the case study for river basin management. In §2 the river basin Multi-Agent architecture model is introduced. In §3 the functionality of the Multi-Agent system is detailed. A prototype
model of the River Basin MAS is illustrated in §4. Finally, in §5 we state the conclusions and outline future work.
1. River Basin Case Study Description The Besòs basin is located on the North East of the Mediterranean coast of Spain. The catchment area is one of the most populated catchments in Catalonia, having more than two million people connected. The scope of the study area is around the final reaches of the Congost River. The river sustains, in an area of 70 km2, the discharges of four towns which are connected to two Waste Water Treatment Plants (WWTP) [1]. The water system has three main elements which are depicted in Figure 1: sewer system, WWTP and river.
Figure 1. Elements of the environmental system
Sewer system. There are two sewer systems: one drains the area of the town of La Garriga and the other drains the area of Granollers and some small surrounding villages.
WWTP. There are two WWTPs, one for each sewer system. Both plants have biological treatment. The average flows are 6000 m3/d for La Garriga (WWTP1) and 26000 m3/d for Granollers (WWTP2).
River. The studied reach of the Congost River has a length of 17 km. The Congost is a typical Mediterranean river with seasonal flow variations. Upstream of the two WWTPs, the average flow is about 0.5 m3/s, but it can easily reach a peak flow of 200 m3/s.
Other elements considered are rain control stations, river water quality control stations, flow retention and storage tanks. Yet the most essential element is the sewer channel that joins the two WWTPs, which allows the flow to be by-passed from the La Garriga WWTP to the Granollers WWTP [1].
2. River Basin Multi-Agent Architecture Model
Multi-agent systems are based on the idea that a cooperative working environment comprising synergistic software components can cope with problems which are hard to solve using the traditional centralized approach to computation. Smaller software entities – software agents – with special capabilities (autonomous, reactive, pro-active and social) are used instead, interacting in a flexible and dynamic way to solve problems more efficiently [4]. For further reading see [5, 6]. Multi-Agent Systems are able to cope with complex problems such as those related to river basin system management. We therefore propose the design of a MAS to simulate different alternative scenarios for decision-making in river basin system management. The authors are not aware of any related work using the MAS approach that provides the functionalities we are attempting to achieve in our proposal.
2.1 Type of agents
The types of agents defined for our MAS proposal are described below. Their graphic representation and the dependences among them are depicted in Figure 2.
Sewer agents: La Garriga sewer system, Granollers sewer system. These agents are responsible for the management of the sewer systems. They are aware of the rainfall, the runoff produced by industrial discharges or rainfall, and the level of the water flow in the sewer systems.
WWTP agents: Data gathering, Diagnosis, Decision support, Plans and actions, Connectors. They receive information from the sewer system agents and the storage tanks to start working on the water flow. These agents perform various processes such as data gathering, diagnosis of the water using a Case-based reasoning system and a Rule-based system, formulating an action plan, user validation of the plan, etc.
River agents: Data gathering (data required: meteorological, physical, kinetic, water quality). They collect valuable data in order to monitor the state of the river.
Storage tank agents: Industrial parks, Rainfall. There are two types: the industrial park tanks control the flow from the industrial area and the rain retention tanks manage the rain flow.
Supervisor agent. It is responsible for the coordination of all the elements of the system. This agent interacts with and supports the user in decision-making. It starts and terminates all other agents and handles the communication between them.
Figure 2. River Basin MAS architecture
3. Functionality of the River Basin MAS
The aim of the MAS is to simulate various scenarios in order to draw conclusions and help in decision-making for river basin management. The goals to be fulfilled are:
- to manage critical episodes;
- to minimize the discharge of poorly treated wastewater;
- to maximize the use of the installations' treatment capacity;
- to minimize the economic costs of new investments and daily management;
- to maintain a minimum flow in the river, guaranteeing an acceptable ecological state.
In order to accomplish these objectives, each intelligent agent will be provided with domain knowledge by means of Case Based Reasoning (CBR) and/or Rule Based Reasoning (RBR) or other reasoning models. Hence the agents can perform several tasks, including:
- by-passing the water flow from the La Garriga WWTP to the Granollers WWTP;
- storage tank management;
- sewer system control;
- monitoring the river basin system.
For instance, the user could simulate various scenarios for a critical episode such as excessive rainfall producing a peak flow at the La Garriga WWTP. The River Basin MAS would take into account the consequences for every element of the water system; it would provide a set of actuation alternatives (no actuation, by-pass of the water flow, storage tank retention) and propose the best strategy to deal with the given situation, as sketched below.
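The sketch below illustrates, under strong simplifications, how the supervisor could rank the actuation alternatives for a rainfall episode; the cost weights and capacity figures are placeholders and do not represent the CBR/RBR knowledge actually envisaged for the agents.

# Hypothetical scoring of actuation alternatives for a rainfall peak episode.
ALTERNATIVES = ("no actuation", "by-pass to WWTP2", "storage tank retention")

def score(alternative, inflow_peak, wwtp2_spare_capacity, tank_free_volume):
    """Lower is better: untreated discharge is penalised most, then operating cost."""
    if alternative == "no actuation":
        return 10.0 * max(0.0, inflow_peak)                  # poorly treated water
    if alternative == "by-pass to WWTP2":
        overflow = max(0.0, inflow_peak - wwtp2_spare_capacity)
        return 10.0 * overflow + 1.0                          # small operating cost
    overflow = max(0.0, inflow_peak - tank_free_volume)
    return 10.0 * overflow + 2.0                              # tank management cost

def propose(inflow_peak, wwtp2_spare_capacity, tank_free_volume):
    return min(ALTERNATIVES,
               key=lambda a: score(a, inflow_peak, wwtp2_spare_capacity, tank_free_volume))

print(propose(inflow_peak=4000.0, wwtp2_spare_capacity=6000.0, tank_free_volume=1500.0))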
4. River Basin MAS Prototype In order to design and develop our River Basin MAS we have revised the state of the art of MAS platforms [4,7], as well as Agent-Oriented software methodologies [8,9,10]. The purpose was to select the most appropriate agent platform for our goals, and afterwards, to extend it with the new required functionalities. The most essential extension will be the ability of some kinds of agents to incorporate some knowledge and reasoning models. Since the selection of a right methodology is crucial for any software project [10] we put especial emphasis on the choice of a suitable methodology for the agent platform selected. A great number of agent platforms can be found in the literature, among these Jadex [11,12] stands out as a prominent choice. Jadex is a Java based framework that allows the creation of goal oriented agents and provides a set of development tools to simplify the creation and testing of agents. An evaluation for the suitability of three agent-oriented methodologies, MaSE[13], Tropos[14] and Prometheus[15,16], to the Jadex agent platform is presented in [10]. The results illustrated that the three methodologies were capable of supporting the development of applications using Jadex. Moreover, Prometheus and Jadex proved to match in the greatest extent according to the used criteria. Thus, the authors concluded to propose the use of Prometheus for development with Jadex.
Figure 3. Roles Diagram for the River Basin MAS
Prometheus is a mature and well documented methodology; it supports BDI concepts and additionally provides a CASE tool, the Prometheus Design Tool (PDT), for drawing and using the notation. The Prometheus methodology consists of three phases that can be carried out simultaneously: system specification, architectural design and detailed design. Due to lack of space, the following describes only part of the MAS design made in Prometheus. In the system specification phase we modelled the environment by identifying the incoming information through percepts and determined the actions that the agents perform. Additionally, the system goals and the basic roles of the system were identified. Figure 3 depicts the roles defined for our MAS: it shows the roles needed to fulfil the system goals described in §3. Roles are depicted by rectangles,
goals with ovals and actions with an action icon. An example of a role descriptor is given in Table 1 for the role Storage tank management.

Table 1. Role descriptor for Storage tank management.

Name: Storage tank management
Description: 1) It retains water during rain peak times and discharges it during low peaks. 2) It laminates the waste water flow. 3) It mitigates pollution episodes due to punctual discharges to the sewer.
Percepts: RainfallDetected, StorageTankMeasurement
Actions: WaterDischarge, WaterRetention
Information used: SCADA
Goals: To manage critical episodes, To minimize discharge of poorly treated wastewater
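As an illustration of this role, the Python sketch below wires the percepts of Table 1 (RainfallDetected, StorageTankMeasurement) to the actions WaterDischarge and WaterRetention with simple threshold rules; the thresholds and the class structure are assumptions, not part of the Prometheus design.

# Sketch of the Storage tank management role as a simple reactive agent.
class StorageTankAgent:
    def __init__(self, capacity_m3, high_level=0.9):
        self.capacity = capacity_m3
        self.high_level = high_level
        self.level = 0.0
        self.rain = False

    def perceive(self, rainfall_detected, tank_measurement_m3):
        self.rain = rainfall_detected
        self.level = tank_measurement_m3 / self.capacity

    def act(self):
        if self.rain and self.level < self.high_level:
            return "WaterRetention"      # retain water during the rain peak
        if not self.rain and self.level > 0.0:
            return "WaterDischarge"      # release it during low-flow periods
        return "WaterDischarge" if self.level >= self.high_level else "Idle"

tank = StorageTankAgent(capacity_m3=2000.0)
tank.perceive(rainfall_detected=True, tank_measurement_m3=600.0)
print(tank.act())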
The system overview diagram shown in Figure 4 depicts the five types of agents of our MAS along with their percepts and actions. In addition the major data repositories are described as well as some messages that are sent between agents.
Figure 4. System Overview Diagram

To illustrate our work, a problematic situation is characterized in Figure 5 along with the message passing between the agents. In this case the problematic situation is an excessive rainfall that is detected by the Sewer Agent La Garriga.
Therefore, this agent sends a message to the Supervisor Agent informing it about the anomaly. The Supervisor Agent then sends a message to the WWTP1 Agent to request a diagnosis, and the latter responds with a diagnosis reporting the increasing level of water flow. The Supervisor Agent proposes by-passing a percentage of the water flow from WWTP1 to WWTP2. The user, usually the WWTP manager, approves this proposal, and thus the WWTP1 agent by-passes the excess flow to WWTP2.

Figure 5. Message passing between agents (lifelines: Sewer La Garriga, Supervisor, WWTP1, WWTP2; messages: Alarm Signal <Excessive_Rain>, Request Diagnosis, Report Diagnosis, Propose Action, Perform Action)
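The exchange of Figure 5 can be summarised in plain Python (not Jadex) as the sketch below; the message strings follow the figure, while the function and the logging format are illustrative assumptions.

# Sketch of the Figure 5 interaction: rain alarm -> diagnosis -> by-pass proposal.
def run_episode(user_approves=True):
    log = []
    log.append(("Sewer La Garriga", "Supervisor", "Alarm Signal <Excessive_Rain>"))
    log.append(("Supervisor", "WWTP1", "Request Diagnosis"))
    log.append(("WWTP1", "Supervisor", "Report Diagnosis: increasing inflow"))
    log.append(("Supervisor", "User", "Propose Action: by-pass excess flow to WWTP2"))
    if user_approves:
        log.append(("Supervisor", "WWTP1", "Perform Action: by-pass to WWTP2"))
    return log

for sender, receiver, msg in run_episode():
    print(f"{sender} -> {receiver}: {msg}")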
5. Conclusions and future work
Environmental systems are very complex to manage. In particular, as mentioned in the introduction, river basin systems are especially difficult to manage in order to obtain a good quality and quantity of water in the river. We have proposed a River Basin MAS to simulate scenarios that support decision-making in river basin systems. The design of the River Basin MAS has been made using the Prometheus methodology, and we chose Jadex as the agent platform for its development. CBR and RBR are the two main reasoning models envisioned for our MAS in order to integrate knowledge into some of the agents. At this stage, we have developed a prototype of the River Basin MAS in Jadex. As a first approach, the prototype has been tested with real data coming from the case study. The preliminary tests are being evaluated by the experts at the Granollers WWTP, and we have found promising results using the MAS approach. As future work, we plan to complete a full development in Jadex with the feedback provided by the WWTP managers and to test the MAS in more WWTPs. We also want to integrate the specific-purpose tools GPS-X, InfoWorks River Simulation and InfoWorks CS, so that we can enrich the simulations.
Acknowledgement
This work has been partially supported by the Spanish project TIN2004-01368 and European project Provenance IST-511085.
References
[1] Devesa F., De Letter P., Poch M., Rubén C., Freixó A. and Arráez J., Development of an EDSS for the Management of the Hydraulic Infrastructure to Preserve the Water Quality in the Besòs Catchment. Proc. of the EU-LAT Workshop on e-Environment, (2005).
[2] Rodriguez-Roda I., Sànchez-Marrè M., Comas J., Baeza J., Colprim J., Lafuente J., Cortés U. and Poch M. A hybrid supervisory system to support WWTP operation: implementation and validation. Water Science and Technology, Vol 45, No 4-5, (2002).
[3] Poch M., Comas J., Rodríguez-Roda I., Sànchez-Marrè M., Cortés U., Designing and building real environmental decision support systems. Environmental Modelling and Software, 19 (9), 857-873. ISSN: 1364-8152 (2004).
[4] Mangina E., Review of software products for Multi-Agent Systems. Applied Intelligence (UK) Ltd for AgentLink: http://www.AgentLink.org/ (2002).
[5] Luck M., Ashri R. and D'Inverno M., Agent-Based Software Development. Artech House, (2004).
[6] D'Inverno M., Luck M., Understanding Agent Systems, Springer-Verlag, (2004).
[7] Rendón-Sallard T., Sànchez-Marrè M., Devesa F. and Poch M. Simulating scenarios for decision-making in river basin systems through a Multi-Agent system. In Proc. of the 3rd Biennial meeting of the International Environmental Modelling and Software Society, iEMSs 2006 (2006).
[8] Dam K. H. and Winikoff M., Comparing Agent-Oriented Methodologies. In Proc. of the Fifth Int. Bi-Conference Workshop at AAMAS03, (2003).
[9] Dam K. H., Evaluating and Comparing Agent-Oriented Software Engineering Methodologies. Master thesis, RMIT University, Australia (2003).
[10] Sudeikat J., Braubach L., Pokahr A. and Lamersdorf W. Evaluation of Agent-Oriented Software Methodologies - Examination of the Gap Between Modeling and Platform. In: Workshop on Agent-Oriented Software Engineering, AOSE (2004).
[11] Pokahr A., Braubach L., Lamersdorf W. Jadex: A BDI Reasoning Engine. Chapter of Multi-Agent Programming, Kluwer, editors: R. Bordini, M. Dastani, J. Dix and A. Seghrouchni (2005).
[12] Braubach L., Pokahr A., Lamersdorf W. Jadex: A Short Overview. In: Main Conference Net.ObjectDays, AgentExpo (2004).
[13] DeLoach S. A. Analysis and design using MaSE and agentTool. In Proc. of the 12th MAICS, (2001).
[14] Mylopoulos J., Castro J. and Kolp M. Tropos: Toward agent-oriented information systems engineering. In Second International Bi-Conference Workshop on Agent-Oriented Information Systems, AOIS (2000).
[15] Padgham L. and Winikoff M., Developing Intelligent Agent Systems: A Practical Guide. John Wiley and Sons (2004).
[16] Padgham L., Thangarajah J. and Winikoff M., Tool Support for Agent Development using the Prometheus Methodology. ISEAT (2005).
Outline of Citation Auctions
Josep Lluis de la Rosa i Esteva
[email protected]
Agents Research Lab, EASY center of xarxa IT, CIDEM, University of Girona
Abstract: This paper describes the basis, and a forecast of the results, of citation auctions as a new alternative to the refereeing methods most commonly used in the scientific community, which are based on peer evaluation. The main idea is to use auctions for publishing papers, where bids consist of the number of citations that a scientist believes the paper will receive. The benefits of the proposed approach would be, first, a reduction of refereeing costs, because the citation auction process does not require a prior understanding of the papers' content, without losing quality in the contributions; and second, that scientists would be much more committed to the quality of their papers, focusing far more on networking and on detailed explanations of their papers in order to maximize the number of citations. In conclusion, this novel approach emphasizes collaborative scientific work and proactiveness, while reducing the expensive costs of current refereeing methods and avoiding other possible faults such as claques and fashions. Finally, an analysis of the number of citations collected by papers published in the years 1999-2004, obtained with Google Scholar, and a simple simulation of auctions outline the behavior of the citation auctions approach.
Keywords: auctions, citations, peer to peer refereeing.
Introduction
This work is the result of deep reflection on the refereeing process that the scientific community has endured for many decades. The analysis of citations — examining what scholars and scientists publish for the purpose of assessing their productivity, impact, or prestige — has become a cottage industry in higher education. And it is an endeavor that needs more scrutiny and skepticism. This approach has been taken to extremes both for the assessment of individuals and of the productivity and influence of entire universities or even academic systems. Pioneered in the 1950s in the United States, bibliometrics was invented as a tool for tracing research ideas, the progress of science, and the impact of scientific work. Developed for the hard sciences, it was expanded to the social sciences and humanities. Citation analysis, relying mostly on the databases of the Institute for Scientific Information, is used worldwide. Increasingly sophisticated bibliometric methodologies permit ever more fine-grained analysis of the articles included in the ISI corpus of publications. The basic idea of bibliometrics is to examine the impact of scientific and scholarly work, not to measure quality. The somewhat questionable assumption is that if an article is widely cited, it has an impact, and also is of high quality. Quantity of publications is not the main criterion. A researcher may have one widely cited article and be considered influential, while another scholar with many uncited works is seen as less useful. Bibliometrics plays a role in the sociology of science, revealing how research ideas are communicated, and how scientific discovery takes place. It can help to analyze how some ideas become accepted and others discarded. It can point to the most widely cited ideas and individuals, but the correlation between quality and citations is less clear.
According to [1], the bibliometric system was invented to serve American science and scholarship. Though the citation system is now used by an international audience, it remains largely American in focus and orientation. It is exclusively in English — due in part to the predominance of scientific journals in English and in part because American scholars communicate exclusively in English. Researchers have noted that Americans cite each others' American work in U.S.-based journals, while scholars in other parts of the world are more international in their research perspectives. American insularity further distorts the citation system in terms of both language and nationality. The journals included in the databases used for citation analysis are a tiny subset of the total number of scientific journals worldwide. They are, for the most part, the mainstream English-medium journals in the disciplines. The ISI was established to examine the sciences, and it is not surprising that the hard sciences are overrepresented and the social sciences and humanities less prominent. Further, scientists tend to cite more material, thus boosting the numbers of citations of scientific articles and presumably their impact. The sciences produce some 350,000 new, cited references weekly, while the social sciences generate 50,000 and the humanities 15,000. This means that universities with strength in the hard sciences are deemed more influential and are seen to have a greater impact — as are individuals who work in these fields. The biomedical fields are especially overrepresented because of the numbers of citations that they generate. All of this means that individuals and institutions in developing countries, where there is less strength in the hard sciences and less ability to build expensive laboratories and other facilities, are at a significant disadvantage. It is important to remember that the citation system was invented mainly to understand how scientific discoveries and innovations are communicated and how research functions. It was not, initially, seen as a tool for the evaluation of individual scientists or entire universities or academic systems. The citation system is useful for tracking how scientific ideas in certain disciplines are circulated among researchers at top universities in the industrialized countries, as well as how ideas and individual scientists use and communicate research findings. A system invented for quite limited functions is used to fulfill purposes for which it was not intended. Hiring authorities, promotion committees, and salary-review officials use citations as a central part of the evaluation process. This approach overemphasizes the work of scientists — those with access to publishing in the key journals and those with the resources to do cutting-edge research in an increasingly expensive academic environment. Another problem is the overemphasis of academics in the hard sciences rather than those in the social sciences and, especially, the humanities. Academics in many countries are urged, or even forced, to publish their work in journals that are part of a citation system — the major English-language journals published in the United States and a few other countries. This forces them into the norms and paradigms of these journals and may well keep them from conducting research and analysis of topics directly relevant to their own countries.
Citation analysis, along with other measures, is used prominently to assess the quality of departments and universities around the world and is also employed to rank institutions and systems. This practice, too, creates significant distortions. Again, the developing countries and small industrialized nations that do not use English as the language of higher education are at a disadvantage. Universities strong in the sciences have an advantage in the rankings, as do those where faculty members publish in journals within the citation systems.
The misuse of citation analysis distorts the original reasons for creating bibliometric systems. Inappropriately stretching bibliometrics is grossly unfair to those being evaluated and ranked. The "have-nots" in the world scientific system are put at a major disadvantage. Creative research in universities around the world is downplayed because of the control of the narrow paradigms of the citation analysis system. This system overemphasizes work written in English. The hard sciences are given too much attention, and the system is particularly hard on the humanities. Scholarship that might be published in "nonacademic" outlets, including books and popular journals, is ignored. Evaluators and rankers need to go back to the drawing board to think about a reliable system that can accurately measure the scientific and scholarly work of individuals and institutions. The unwieldy and inappropriate use of citation analysis and bibliometrics for evaluation and ranking does not serve higher education well, and it entrenches existing inequalities. Therefore, there are many drawbacks in the citation system, but I think we may revitalize it. We are especially concerned about the low participation and discussion in conferences and workshops, together with the heavy refereeing workload. I propose that linking citations and auctions could alleviate these problems. Let us develop the idea further: nowadays scientists want to publish their scientific results in congresses and journals (CJ) to gain citations and reputation. Since there are too many scientists trying to publish in the same CJ, the CJ can select only the best papers, that is, those which will generate the maximum number of citations. Thus, in the citation-auctions approach, the CJ will activate an auction to select those papers for which scientists guarantee the most citations, i.e. scientists bid. Bids consist of a prediction given by a scientist of the number of citations that a paper will receive. Let us call the number of citations that a scientist receives from his published papers his citations wallet (CW), i.e. the amount of "cash" that the scientist holds. Scientists are therefore limited by their citations wallet in preparing their bids, and must speculate on how many citations they will receive as pay-back over time after the successful communication of their papers. Every winning paper will have its bid withdrawn from its authors' citations wallets, and consequently scientists bid the highest number of citations they can, i.e. what they think the paper is worth. Scientists lose "cash" if they paid more citations in the winning bid than the number of citations that the paper generates in the future and, vice versa, they win more citations if the published papers generate more citations than those invested in the auctions. The ultimate goal of scientists will still be to keep their individual citations wallet up. The benefits of the proposed approach will be, first, a reduction of refereeing costs, because the citation-auction process does not require prior understanding of the papers' content, without losing quality in the contributions; and second, scientists will be much more committed to the quality of their papers, focusing much more on networking and on detailed explanations of their papers in order to maximize the number of citations.
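To make the citation-wallet bookkeeping concrete, here is a minimal sketch in Python of a single sealed-bid round; the function name, the dictionaries and all the numbers are illustrative assumptions, since the paper does not prescribe any data structures.

    def run_citation_auction(bids, wallets, later_citations):
        """Hedged sketch of the citation-wallet (CW) bookkeeping described above.
        bids: author -> citations bid for a paper; wallets: author -> current CW;
        later_citations: author -> citations the paper eventually collects.
        The winner is the highest bidder whose wallet covers the bid; the bid is
        withdrawn now and the citations actually received are credited back later."""
        eligible = {a: b for a, b in bids.items() if b <= wallets[a]}  # bids limited by CW
        if not eligible:
            return None
        winner = max(eligible, key=eligible.get)
        wallets[winner] -= eligible[winner]          # pay the bid out of the wallet
        wallets[winner] += later_citations[winner]   # pay-back once citations arrive
        return winner

    # Purely illustrative numbers: author "A" overbids relative to the citations received.
    wallets = {"A": 10, "B": 6}
    print(run_citation_auction({"A": 8, "B": 5}, wallets, {"A": 3, "B": 7}), wallets)

With these invented numbers, author A wins the slot but ends with a smaller wallet (10 - 8 + 3 = 5), illustrating the risk of overbidding described above.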
In conclusion, this novel approach emphasizes collaborative scientific work and proactiveness, while reducing the expensive costs of current refereeing methods and avoiding other possible faults such as claques and fads. Section 2 presents an introduction to auctions. Section 3 shows the tools used to look up citations. Section 4 explains an analysis and a hypothetical case of citation auctions. Finally, discussion and conclusions are in Section 5.
2. Auctions
Traditional auctioneers recognize the importance of marketing in their trade and discuss advertising principles and ‘‘the psychology of selling’’ as keys to auctioneering success. The growing popularity of Internet auctions is also driving new product-market and pricing models, revised channel roles, and new market research methods [4]. We set the stage with a brief description of the four major auction mechanisms, outline key concepts and results from the economic analysis of auctions, and summarize the key findings in empirical tests of auction theory. We then identify areas for future research on auction markets, particularly those of interest to marketers in the new contexts created by the Internet. Four basic auction mechanisms (ascending-bid, descending-bid, first-price sealed bid and second-price sealed bid) are commonly discussed in the literature. We briefly describe each mechanism, assuming that a single object is for sale and that a seller and several bidders operate without agents [5].
2.1. Auction Mechanisms
In ascending-bid auctions, the object's price is raised until only a single bidder remains. This winning bidder pays a price equal to his last bid (usually a small amount more than the second highest bid). The auction is ‘‘open’’, i.e., the participants know the best current bid. In the ‘‘English’’ variant, a participant may hold back or enter bids at any time until a winner emerges (i.e., bidder exit information is not public). In the so-called ‘‘Japanese’’ variant, the price is increased continuously and bidders remain active until they exit publicly and irrevocably. Although rare in practice, many theoretical analyses assume the Japanese variant, in which the observed bids and exits provide information on the number of bidders, their willingness to bid, and their underlying values. An ascending auction is also called an open second-price auction. In descending-bid ("Dutch") auctions, the auctioneer starts with a high initial price and progressively lowers it. The prevailing price is posted and known to all participants. The first bidder to indicate a willingness to take the object at the prevailing price is the winner. Because winners in open descending auctions pay their bid price, the Dutch auction is also called an open first-price auction. In contrast to open auctions, participants in sealed bid auctions submit their bids without seeing others' bids. In a first-price sealed bid (FPSB) auction, the highest bidder wins and pays his bid price. In a second-price sealed bid (SPSB) auction, the highest bidder also wins, but pays a price equal to the second highest bid.
2.2. Baseline Theoretical Analyses
Auction mechanisms are usually analyzed as non-cooperative, incomplete (both symmetric and asymmetric) information games among competing bidders. The solution concept is the Bayesian Nash equilibrium, in which bidders maximize their own expected payoffs from this single auction conditional on their rivals' strategies and their beliefs about the rivals' information [6]. The baseline analysis assumes single object auctions and a set of n symmetric and risk neutral bidders. Auction models differ in their assumptions about the bidders' information sets. In private value models, each bidder is assumed to know his own valuation of the object, but not others' valuations. In the baseline case, the valuations are assumed to be independent draws from a commonly known continuous distribution. Bidders' valuations vary, but are assumed to be unaffected by others' valuations.
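A short Monte Carlo sketch may help make this baseline setting concrete; the equilibrium bidding rule (n-1)/n times the value is the standard result for uniform independent private values, and the parameters below are illustrative assumptions, not part of the original analysis.

    import random

    def simulate(n_bidders=5, rounds=20000, seed=0):
        """Compare seller revenue in FPSB and SPSB auctions under the baseline
        IPV model: values i.i.d. Uniform(0,1), FPSB equilibrium bid = (n-1)/n * value,
        SPSB equilibrium bid = own value (truthful)."""
        rng = random.Random(seed)
        fpsb_rev = spsb_rev = 0.0
        for _ in range(rounds):
            values = sorted((rng.random() for _ in range(n_bidders)), reverse=True)
            fpsb_rev += (n_bidders - 1) / n_bidders * values[0]  # winner pays own bid
            spsb_rev += values[1]                                # winner pays 2nd-highest value
        print("FPSB mean revenue:", round(fpsb_rev / rounds, 3))
        print("SPSB mean revenue:", round(spsb_rev / rounds, 3))
        print("Theoretical (n-1)/(n+1):", round((n_bidders - 1) / (n_bidders + 1), 3))

    simulate()

Both mechanisms yield approximately the same expected revenue, (n-1)/(n+1), which is the revenue-equivalence property underlying the equivalences discussed next.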
In contrast,
with common value models, the object has the same common or true value for all bidders. However, bidders vary in their private signals (estimates) of the common value, with the signals assumed to be independent draws from the same continuous distribution. Bidders are uncertain of the object's worth and are influenced by information about others' signals revealed during the auction. [7] formulated a general model in which each bidder has a private signal of the object's value. Other value measures (e.g., third party appraisals, etc.) may also be accessible to sellers (but not to bidders). Bidder values are symmetric functions of their own signals, others' signals, and the other value measures. The private signals and additional value measures are affiliated, i.e., their joint density functions reflect an information environment in which a higher value of some variables makes higher values of the other variables more likely. The independent private values (IPV) model is a special case in which bidders' signals are independent and no additional value measures exist. The common value (CV) model is a contrasting case in which there is a single additional value measure, with all bidders' valuations equal to it. In both descending and FPSB auctions, the bidder with the highest bid wins and pays the bid price. Hence, the bidding strategies and the mapping from strategies to outcomes for the two auction mechanisms are identical and they share the same (Nash) equilibrium. The equivalence holds as long as each bidder chooses his best bid given an assessment (correct in equilibrium) of other bidders' strategies. This strategic equivalence of the descending and first-price sealed bid auctions is a general result for single unit auctions. It holds whether or not buyers know their values and whether they are risk neutral or risk averse. With IPVs, it is a dominant strategy for a bidder in an ascending auction to continue bidding until the price reaches his value. Eventually, the highest-value bidder wins, paying a price equal to the second-highest bid or a small amount more. In an SPSB auction, it is optimal for a bidder who knows his own value to submit a sealed bid equal to that value. At the dominant strategy equilibrium, the highest-value bidder wins at a price equal to the bid of the second-highest-value bidder. Thus, ascending and SPSB auctions are equivalent if bidders know their private values. In CV auctions, the bidders independently estimate the object's value. If all bidders use the same monotonic strategy, and they all have unbiased estimates of value, then the high bidder has the highest of the unbiased estimates. However, the highest of the unbiased estimates is biased upwards. If bidders do not allow for this order statistic property in choosing their bids, then (on average) the object will be worth less than the winning bid. This bad news implicit in winning is labeled the winner's curse [3]. It is assumed that rational bidders can avoid the winner's curse (in equilibrium) by adjusting their bids downwards. Theoretically, this adjustment is costless since a bidder pays nothing if he loses.
3. Tools for looking up citations
There are several tools for tracing the citations of every work, namely Google Scholar (http://scholar.google.es/), the science indicators from the ISI Web of Knowledge from Thomson (http://sub3.isiknowledge.com), the Scientific Literature Digital Library (http://citationseer.ist.psu.edu) and others. Screenshots of these tools are shown below:
For example, to look up the citations of a scientist, in this case the first author of this paper in 1999-2004, restricted to the Lecture Notes in Computer Science and Lecture Notes in Artificial Intelligence series, we may proceed as follows:
In this search, the number of citations per year is shown in the following table of citations.
So this researcher accumulated 8 citations in 6 publications, which gives a rate of 8/6 = 1.3 citations per publication. The international average number of citations per publication in the fields of computer science and artificial intelligence is about 1.5 according to the ISI (Institute for Scientific Information).
4. A hypothetical case: rethinking the works sent in 1999-2004 to LNCS/LNAI
Table 1 shows the result of looking up the LNCS and LNAI publications of 4 researchers and their citations. A3 is the first author of this paper. Several rankings can be distinguished. The first one is the absolute citations ranking, which is clearly led by A1 with 18 citations, followed by A2 with 16 citations, then at a distance by A3 with 8, and finally by A4 with 4. Another ranking widely used in the scientific community is the
ratio = citations / #publications, as [2] describes; it gives another order: A4 leads with 2 citations per publication, followed by A1 with 1.8, then A2 with 1.6, and finally A3 with 1.3. This system gives absolute citations over time. However, since this paper was initially prepared in May 2006, the number of citations for 2004 had not yet stabilized and was still growing. Nevertheless, we have chosen the years 1999-2004 because they provide a wide enough window for the citation analysis. These are of course only partial results, as we have not yet represented the number of citations the authors expected their papers to obtain.
Table 1. Publications in LNCS/LNAI and citations of the four researchers, 1999-2004 (absolute citations and citations per publication).
The following experiment, depicted in Table 2, represents a possible alternative scenario in the same period of time, with a simulation of 4 auction behaviours (aggressive, cautious, very cautious, no-risk) for the same researchers.
Table 2. Simulated citation auctions of the four researchers, 1999-2004, under the four bidding behaviours.
The blue squares mark the rows of the researchers that did not win the auction in the corresponding column. The column "Auction" gives the number of citations that a researcher Ai expects from his contributions, the column "W" gives the accepted number of citations for every publication after the auction, and the following column "year"
gives the number of citations that the contributions had gathered as of May 2006 since their publication. Here, for the sake of simplicity, the auction method is the FPSB, where every author submits his bid in a closed envelope, defining how many citations the author offers to the congress or journal. This bid is supposed to be lower than the number of citations that the author will collect in the future, after the publication of his paper. Again, several rankings can be discerned. The first one is the citations wallet (CW) ranking, i.e. the number of citations remaining after the auctions. In the discussed case, this ranking is clearly led by A2 with 9 citations, followed by A1 with 7 citations, then A3 with 5, and finally A4 with 4. Note that the CW values are lower than the absolute numbers of citations received. However, the advantage is that the CW reflects the additional number of citations, above the number of expected citations, that the entire set of publications has generated for the author Ai in a given time period. Regarding the ratio of citations per publication, we can define a new citation-wallet content, namely the cumulative number of citations per publication. This new CW content gives another order: it is led by A4 with 2 CW per publication, followed by A1 with 1.8, then A2 with 0.9 and finally A3 with 0.8. In this case the ranking order has not changed with respect to Table 1. The difference between Tables 1 and 2 is that Table 1 shows the outcome of publishing the papers in terms of citations, while Table 2 shows the result of the citations over expectations. The former gives a more accurate measure of the quality of a paper, so the authors have an incentive to attempt to gain the maximum number of citations, or at least the same number as the citations invested in the bid. Moreover, this citation-auction system is self-regulating: if an author repetitively underperforms in citations (that is, his contributions receive fewer citations than those invested), then his CW will eventually drop close to zero and this author will have difficulties in the future in getting his contributions published in congresses or journals, as he will lose many auctions. Conversely, an author can increase his CW quickly by submitting highly cited papers, making it easy for him to publish in the future.
5. Conclusions and Future Work
This work introduced thoughts about the current refereeing procedure and its drawbacks, and suggested a new approach to selecting papers for competitive congresses and journals: citation auctions. They may introduce interesting properties such as reduced pre-refereeing overload, higher motivation to participate, more accurate matching of works and congresses, and more participation in congresses and in post-refereeing. The citation auctions are defined in this paper together with their operation, and very preliminary illustrative examples are given based on an analysis of contributions to the Lecture Notes in Computer Science and Artificial Intelligence series (published by Springer), where 4 authors were analyzed over the 1999-2004 period.
ACKNOWLEDGMENTS
This work was supported by grant DPI2005-09025-C02-02 from the Spanish government, "Cognitive Control Systems". I want to thank Boleslaw K. Szymanski, from Rensselaer Polytechnic Institute, Troy, NY, USA, for his useful insights into this preliminary paper, and I look forward to working together in future work.
References
[1] Philip G. Altbach, director of the Center for International Higher Education at Boston College: http://insidehighered.com/views/2006/05/08/altbach
[2] de la Rosa J.Ll., Pla Estratègic de Recerca de la Universitat de Girona, 2005-2007: http://pserv.udg.es/dnn3/Default.aspx?alias=pserv.udg.es/dnn3/per
[3] Capen E., Clapp R. and Campbell W. (1971). Competitive Bidding in High Risk Situations, Journal of Petroleum Technology, 23, 641-653.
[4] Herschlag M. and Zwick R. (2000). Internet Auctions - A Popular and Professional Literature Review, Quarterly Journal of e-Commerce, 1(2), 161-186.
[5] Klemperer P. (1999). Auction Theory: A Guide to the Literature, Journal of Economic Surveys, 13(3), 227-260. Klemperer P. (ed.) (2000). The Economic Theory of Auctions, Volumes 1 & 2, Cheltenham, UK: Edward Elgar. Laffont,
[6] McAfee R.P. and McMillan J. (1987). Auctions and Bidding, Journal of Economic Literature, 25 (June), 699-738.
[7] Milgrom P.R. and Weber R.J. (1982). A Theory of Auctions and Competitive Bidding, Econometrica, 50(5), 1089-1122.
Improving Privacy of Recommender Agents by Means of Full Dissociation
Sonia DELFIN 1, Claudia CARRILLO, Eduard MUNTANER, Araceli MORENO, Salvador IBARRA, Josep Lluis DE LA ROSA
Agents Research Lab, University of Girona, Spain
1 Correspondence to: Sonia DELFIN, University of Girona, Campus Montilivi, Building PIV, E-17071. Tel: +34 972 41 8482; Fax: +34 972 41 8976; e-mail: [email protected]
Abstract. Our approach is to give privacy to the profile and behaviour of the user. The state of the art for privacy consists of encryption algorithms, occultation and temporary association of a user's information; in all cases these techniques present pros and cons. A primary disadvantage is that within a certain amount of time an attacker is capable of reconstructing the original information (encryption), finding the information (occultation) or relating it to the original user (temporary association). In our approach, full dissociation is used to impede any relation with the original user; it is the concept of full dissociation which considerably reduces the time available to an attacker to discover the relation with the original user.
Keywords: Dissociation, Association, Recommender Agents, Privacy, Personalization
1. Introduction
The aim of the work presented in this paper is to demonstrate that many, if not most, of these systems can be technically realized in forms that are as protective of users' individual privacy as one might wish. Therefore, designers of systems who fail to ensure their users' privacy are making a policy decision, not a technical one: they have decided that their users are not entitled to as much personal privacy as it is possible to provide, and they implement this decision by virtue of the architecture of the system. When a site collects personal information, the user must trust the site to protect the information appropriately. According to [1], there are five ways in which this trust might be misplaced: deception by the recipient, where a site sets out to deliberately trick users into revealing personal information; mission creep, which refers to a situation in which an organization begins with a clearly defined use for personal data but later extends that use beyond the original purpose; accidental disclosure, which happens when private information is accidentally made available; disclosure by malicious intent (theft); and, finally, the case in which, even though an organization may take great care to protect personal information, it is obliged under the law to disclose that information when subpoenaed. The key issue highlighted in these scenarios is that once merchants have control of users' personal information, the users can no
longer control the uses to which that information is put. These are exactly the kind of control issues that the users in a privacy study [2] were concerned about. User control of personal information is a key trust issue for recommenders. A second key trust issue is the reliability of the recommendations. Section 2 presents the state of the art on privacy and personalization; section 3 shows our approach to implementing dissociation in recommender systems; in section 4 we present an example; finally, conclusions and future work are presented in sections 5 and 6.
2. State of the Art
2.1. Privacy
Privacy is not only about disclosure of dangerous information or disturbing a person at a wrong time or with information on a wrong topic [3]. For example, personalization of advertisements may seem beneficial, compared to a bunch of useless and time-robbing advertisements. However, such personalized or targeted advertising is not so innocent or beneficial in the long term, because advertising views people as bundles of desires to buy more and more, and this view is only partially true. In this sense, we can introduce our theory of preserving the privacy of the user through the concept of total dissociation. Privacy is the ability to keep private anything that we want to keep private; this implies at least an attempt to maintain confidentiality, with a degree of personal responsibility in practice, and it also gives value to what we are protecting. The privacy of information is crucial in the electronic society (E-Society). Internet users have important concerns about threats to their privacy [4] and are increasingly anxious about the information provided online [5]. Privacy preferences vary considerably between users, so several architectures have been suggested to allow customized adjustments [6]. The Platform for Privacy Preferences (P3P) allows users to set their own personal privacy preferences and warns them when the visited sites do not match their profile. Nevertheless, this transference of responsibility to the individual raises questions [7]. In such a complex information environment, how do users establish privacy rules with respect to the spread of personal information?
2.2. Personalization
To implement personalization it is necessary to carry out two principal activities: gathering information and analysing it, in order to present information to the user in a suitable form, adapted to the particular moment; with this information we might perform previously stored tasks imitating the routines of the user and simultaneously offer suggestions for improving his or her activity. By developing a profile of the user, we compile information that allows us to know the client better: his or her preferences, standards of behaviour, characteristics, technical problems and any other information that can be obtained about his or her transactions. Nevertheless, a problem might arise concerning the privacy of the user's information, because the more personal or easily recognizable the information is, the greater the control we will want to have over it. There can be several levels of
privacy; therefore, different solutions to this problem can be offered depending on whether it is addressed through anonymity and pseudo-anonymity, cryptography or legislation. To assure privacy, mechanisms are defined so that users determine how others can gain access to their information at different levels of access, although the prevailing belief is that sharing information about the users improves the capacity for personalisation. The concept of identity can be approached from a number of standpoints: for example, we can view it from philosophical, psychological, sociological, legal and technical perspectives, which suggests that there is a risk that identity management systems could be over-bearing in their complexity or in the requirements imposed on the individual. Of even more concern are the possible unforeseen long-term consequences of fixing identities, and the impact of such fixedness if identities are reused in different domains ('function creep') [8], as is the case when a driver's licence is used to establish identity.
3. Our Approach
According to [9], the first element of a theory of privacy should be "a characterization of the special interest we have in being able to be free from certain kinds of intrusion". In this sense, privacy is sometimes necessary to protect people's interests in competitive situations; in other cases someone may want to keep some aspects of his or her life or behaviour private simply because it would be embarrassing for other people to know about them. For us, dissociation is a good privacy method, although we consider occultation, encryption and temporary association to be good techniques as well; however, with them there is always a way to reconstruct the user's purchase path and identify each user. If we observe the behaviour of a user inside a recommender system determined by purchasing profiles, in which an agent is assigned to a user and always recommends according to the initial profile, we can see that the precision of the recommendations made to the user is not always 100% correct. In fact, the precision can diminish because the preferences of the user or other circumstances change. Since modelling, personalization and identification of a user are not a hundred per cent trustworthy, it makes sense to create a dissociation between user and agent. This can give good results, for example, when the preferences of every user are changing. Why not change the recommender agent whenever the user accesses the recommender system? Perhaps on some occasions the user will not obtain suitable precision, or what he or she was hoping for, but with these interchanges between users we might reach as much or even more precision than with an agent associated to every user and, in addition, assure and improve the user's privacy. In our view, an agent recommends to different users in different interactions, so that privacy may increase because it is impossible to know the origin of the data from which this agent is recommending. As said in [10], this privacy aspect is very important for personal development. Thus, the dissociation method implies that the user-agent relation is never exposed.
3.1. Definitions
- Users: U = {U1, ..., UN}
- Agents: A = {A1, ..., AM}
- Set of products in the system: Q (e.g. {a,b,c,d,e,f,g,h})
- Products recommended by an agent: R (e.g. {a,b,c})
- Products selected by the user: C (e.g. {a,c,e})
- Precision of an agent Aj with respect to a user Ui: P(Aj)
- Precision of the dissociation between agents and users: P(Ui, Aj), P(Uj, Ai)
- Experimentation time in weeks: L = {1, ..., K}
- Intersection of the recommended products and those selected by the user, UPr, as given by Eq. (1):

UPr = R ∩ C = {x ∈ Q : x ∈ R ∧ x ∈ C}    (1)

According to [11], precision is the number of recommended products chosen by the user divided by the number of products recommended by the agent, Eq. (2):

P(Aj) = |UPr| / |R|    (2)

The precision of a user, P(Uj), is the average of the precisions obtained with every agent, P(Ai), over the K weeks of the experiment, Eq. (3):

P(Uj) = ( Σ_{L=1..K} P(Ai)[L] ) / K    (3)

The certainty index is the percentage obtained by averaging the precision of the users over the number of users; in this way we visualize the global yield of the system, Eq. (4):

CI = ( Σ_i P(Ui) / N ) × 100, with N the number of users.    (4)
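A direct transcription of Eqs. (1)-(4) into code may help fix the notation; this is a sketch under the assumption that products are represented as plain Python sets, which the paper does not prescribe.

    def precision(recommended, chosen):
        """Eq. (1)-(2): fraction of recommended products the user actually selected."""
        return len(set(recommended) & set(chosen)) / len(set(recommended))

    def user_precision(weekly_precisions):
        """Eq. (3): average precision obtained for one user over the K weeks."""
        return sum(weekly_precisions) / len(weekly_precisions)

    def certainty_index(per_user_precisions):
        """Eq. (4): mean user precision over the N users, as a percentage."""
        return 100 * sum(per_user_precisions) / len(per_user_precisions)

    # Example with the sets used in the definitions above: R = {a,b,c}, C = {a,c,e}.
    print(round(precision({"a", "b", "c"}, {"a", "c", "e"}), 2))  # 0.67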
The recommendations are made according to the information the agents rely on, trying to identify the particularities that best fit the preferences of the users. Our idea is to interchange the knowledge about these preferences. If we accept that the tastes of the users can change from one occasion to another and that they have different behaviour profiles, we should agree on changing agents at the same rate as users change their profiles. If we take advantage of this idea, we may say that the interchange does not always imply a loss of recommendation precision; a moment may even come when it is better, since we often feel attracted to new situations and, by experiencing them, the scale tips in favour of what is shown to us. Our experiment consists of analysing the behaviour of the users in two ways: first, the user is always recommended by the same agent; second, the recommender agents are interchanged, so that the user is recommended by several agents. For the sake of simplicity, the recommender agents capture the user's behaviour in their first interaction and keep recommending the same throughout the whole experiment, without any learning. Once we have designed agents which work dissociated from users, what about the performance of these fully dissociated agents? Will it increase or decrease? Since there is not one unique agent for each user but several, we might say that the precision
of the interchange between users and agents might be equal to or even better than if there were a permanent user-agent association. Let us demonstrate this by means of a counterexample to the claim that dissociation always loses precision: given P(Ui, Aj), the precision of the agent Aj that recommends to the user Ui, we look for i and j such that, Eq. (5),

∃ i ≠ j : P(Ui, Aj) ≥ max( P(Ui, Ai), P(Uj, Aj) )    (5)
Table 1 presents such a counterexample, computed with Eq. (2):

Table 1. Example
U1: C1 = {a,c,d,f,g}    A1: R1 = {d,f,h}
U2: C2 = {a,d,f,h}      A2: R2 = {a,c,e,g}

P(U1, A1) = |{d,f}| / |{d,f,h}| = 2/3 = 0.67
P(U1, A2) = |{a,c,g}| / |{a,c,e,g}| = 3/4 = 0.75
P(U2, A1) = |{d,f,h}| / |{d,f,h}| = 3/3 = 1.00
P(U2, A2) = |{a}| / |{a,c,e,g}| = 1/4 = 0.25
Thus we have found i ≠ j such that P(Ui, Aj) ≥ max( P(Ui, Ai), P(Uj, Aj) ), and consequently it is not true that complete dissociation necessarily involves a loss of precision in the recommendation. This means that with complete dissociation we can even improve the quality of the recommendation while maximising privacy.
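The counterexample of Table 1 can be checked mechanically with the precision of Eq. (2); the following minimal snippet simply repeats the sets of Table 1.

    prec = lambda R, C: len(R & C) / len(R)  # Eq. (2): |R ∩ C| / |R|

    C1, C2 = {"a", "c", "d", "f", "g"}, {"a", "d", "f", "h"}   # user selections
    R1, R2 = {"d", "f", "h"}, {"a", "c", "e", "g"}             # agent recommendations

    print(round(prec(R1, C1), 2), prec(R1, C2))   # P(U1,A1)=0.67, P(U2,A1)=1.0
    print(round(prec(R2, C1), 2), prec(R2, C2))   # P(U1,A2)=0.75, P(U2,A2)=0.25

The swapped pairings P(U1, A2) = 0.75 and P(U2, A1) = 1.00 both dominate max(P(U1, A1), P(U2, A2)) = 0.67, which is exactly the condition of Eq. (5).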
4. An Example of Full Dissociation
The following example is constructed with these parameters: 8 agents (A), each with a profile defined by the first purchase of one of the 8 users. The purchases consist of 8 products (Q), over 8 weeks. The coincidence of the number 8 is just to obtain square user-product and user-week matrices. A Markov chain is generated to represent the set of purchases of every user Ui, so that each user tends to repeat the purchases of certain products over others. For example, product number 7 (Q7) is not the most frequently purchased product for any user [Table 2].

Table 2. Markov chain generation of purchased products for the 8 users (U1-U8 vs. Q1-Q8) during 8 weeks.

Applying the previous analysis we obtain [Table 3]:
Table 3. Behaviour of user U4: products (Q1-Q8) purchased in each week (W1-W8) and the agent recommending in that week (W1: A4, W2: A2, W3: A1, W4: A3, W5: A8, W6: A6, W7: A5, W8: A7).
For example, user U4 can have these interactions throughout the test time and, as we observe, the first purchase generates the profile contained in the associated agent, in this case A4, continuing afterwards with an interchange at moments delimited by us. To demonstrate the precision of the agents and their recommendations, we can represent the interchange in two ways: an interchange by square rotation and a direct swap between two users and their agents [Figure 1].

Figure 1. (a) Swap Method. (b) Square Rotation Method.
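The two interchange schemes of Figure 1 can be sketched as permutations of the user-agent mapping; the initial identity mapping, the pairs chosen for the swap and the four-user cycle used for the rotation below are assumptions for illustration only, since the exact groupings of the figure cannot be fully recovered here.

    def swap(assignment, pairs):
        """Swap method: each pair of users exchanges their current agents."""
        a = dict(assignment)
        for u, v in pairs:
            a[u], a[v] = a[v], a[u]
        return a

    def square_rotation(assignment, cycle):
        """Square-rotation method: agents rotate one position along a cycle of users."""
        a = dict(assignment)
        agents = [assignment[u] for u in cycle]
        for u, ag in zip(cycle, agents[-1:] + agents[:-1]):
            a[u] = ag
        return a

    # Hypothetical initial user->agent mapping and groupings, for illustration only.
    initial = {f"U{i}": f"A{i}" for i in range(1, 9)}
    print(swap(initial, [("U1", "U5"), ("U2", "U6"), ("U3", "U7"), ("U4", "U8")]))
    print(square_rotation(initial, ["U1", "U2", "U3", "U4"]))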
We obtain the following results after doing the experiment and comparing the same behaviour of every user: the case where the user has a permanently assigned agent (association, Table 4) and the case with agent interchange, marking the weeks in which the swap or the rotation method was applied (Table 5).

Table 4. Precision of the association between an agent Ai and its user Ui (weeks W1-W8)
U1: 1.00 0.67 0.67 0.33 0.33 0.33 0.33 0.33
U2: 1.00 0.33 0.67 0.00 0.67 0.67 0.33 0.33
U3: 1.00 0.67 0.33 0.33 0.33 0.33 0.33 0.00
U4: 1.00 0.25 0.50 0.00 0.50 0.75 0.50 0.75
U5: 1.00 0.50 0.50 0.25 0.00 0.00 0.50 1.00
U6: 1.00 0.50 0.50 0.00 0.50 0.50 0.25 0.25
U7: 1.00 0.25 0.25 0.75 0.50 0.25 0.00 0.25
U8: 1.00 0.25 0.25 0.50 0.25 0.50 0.25 0.25

Table 5. Precision of the dissociation between agents and users (weeks W1-W8)
U1: 1.00 0.33 0.50 0.33 0.50 1.00 0.25 0.50
U2: 1.00 0.33 0.33 0.25 1.00 0.75 1.00 0.50
U3: 1.00 0.50 0.67 0.33 0.33 0.50 0.50 0.00
U4: 1.00 0.33 0.67 0.33 0.50 0.50 0.50 1.00
U5: 1.00 0.00 0.25 0.25 0.33 0.67 0.50 1.00
U6: 1.00 0.50 0.67 0.25 0.33 0.33 0.33 0.50
U7: 1.00 0.25 0.25 0.50 0.67 0.25 0.33 0.33
U8: 1.00 0.25 0.25 0.33 0.75 0.33 0.33 0.33
The results shown in Table 6 were obtained in the middle week of the test (W5) for the swap case (a) and in the following week (W6) for the square-rotation case (b).

Table 6. (a) Precision in week 5 (swap): dissociated vs. associated
P(U1, A5) = 0.50 > P(U1, A1) = 0.33
P(U5, A1) = 0.33 > P(U5, A5) = 0.00
P(U2, A6) = 1.00 > P(U2, A2) = 0.67
P(U6, A2) = 0.33 < P(U6, A6) = 0.50
P(U3, A7) = 0.33 = P(U3, A3) = 0.33
P(U7, A3) = 0.67 > P(U7, A7) = 0.50
P(U4, A8) = 0.50 = P(U4, A4) = 0.50
P(U8, A4) = 0.75 > P(U8, A8) = 0.25

Table 6. (b) Precision in week 6 (square rotation): dissociated vs. associated
P(U1, A7) = 1.00 > P(U1, A1) = 0.33
P(U2, A5) = 0.75 > P(U2, A2) = 0.67
P(U3, A8) = 0.50 > P(U3, A3) = 0.33
P(U4, A6) = 0.50 < P(U4, A4) = 0.75
P(U5, A3) = 0.67 > P(U5, A5) = 0.00
P(U6, A1) = 0.33 < P(U6, A6) = 0.50
P(U7, A4) = 0.25 = P(U7, A7) = 0.25
P(U8, A2) = 0.33 < P(U8, A8) = 0.50

These results show that both the swap and the square-rotation dissociation are generally better than association, while preserving the user's privacy at every moment. The user profiles are kept in the system information. Thus, precision can be obtained by means of the intersection between the user's purchases and the agents' recommendations, although we attempt not to preserve the user's profile, to avoid undesirable situations with other users or a possible intruder. For this reason, if an intruder attempts to attack a specific user and rebuild the user's profile, the intruder faces a greater obstacle with the related information: such information is useless to the intruder because the relation between user and agent is not fixed, and an agent has a defined profile with which it recommends to all the users. The final result per user in this experiment is shown in Table 7, where we compare association and dissociation.

Table 7. Dissociation vs. association: global results (average precision per user and certainty index CI)
U1: 0.55 > 0.50
U2: 0.65 > 0.50
U3: 0.48 > 0.42
U4: 0.60 > 0.53
U5: 0.46 < 0.47
U6: 0.49 > 0.44
U7: 0.45 > 0.41
U8: 0.42 > 0.41
CI: 0.51 > 0.46
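The per-user averages and the comparison of Table 7 can be recomputed directly from Tables 4 and 5; a brief check for two of the users follows (the remaining rows are computed the same way).

    assoc = {  # per-user weekly precisions from Table 4 (association)
        "U1": [1.00, 0.67, 0.67, 0.33, 0.33, 0.33, 0.33, 0.33],
        "U4": [1.00, 0.25, 0.50, 0.00, 0.50, 0.75, 0.50, 0.75],
    }
    dissoc = {  # the same users from Table 5 (dissociation)
        "U1": [1.00, 0.33, 0.50, 0.33, 0.50, 1.00, 0.25, 0.50],
        "U4": [1.00, 0.33, 0.67, 0.33, 0.50, 0.50, 0.50, 1.00],
    }
    for u in assoc:
        a = sum(assoc[u]) / 8
        d = sum(dissoc[u]) / 8
        print(u, round(d, 2), ">" if d > a else "<" if d < a else "=", round(a, 2))

The output reproduces the corresponding rows of Table 7 (U1: 0.55 > 0.50, U4: 0.60 > 0.53).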
5. Conclusions
This paper describes our dissociation approach, which performs swap and square-rotation interchanges of agents to preserve user privacy in agent recommender systems. We have obtained good precision when the agents are exchanged among the users instead of relying on a fixed user-agent relationship; this answers the question of whether changing agents can be positive. Moreover, the system improves user privacy because the user was recommended by different agents throughout the time of the experiment. This gives an account of the value of privacy based on the idea that there is a close connection between our ability to control who has
access to us and to information about us, and our ability to create and maintain different sorts of social relationships with different people, in this case with recommender agents. Our relationships with other people determine, in large part, how we act toward them and how they behave toward us. Moreover, there are different patterns of behaviour associated with different relationships. That is why we want to implement a total dissociation that neither creates opportunities for inadvertent privacy violations nor heightens the user's apprehension towards the technology.
6. Future Work
To implement this dissociation we were inspired by the relationships and pairing behaviours of living beings, and by the way in which people associate and dissociate. The approach is original and can also support the novel concept of temporary association. It is likewise useful for the increasing need for data protection in future digital business ecosystems, and we could further improve it by testing other types of dissociation. In addition, we plan to implement the main contribution of this paper using mechanisms other than precision to analyse system performance, and to define dissociation laws and learning patterns for the user's adaptability.
ACKNOWLEDGMENTS
This work was supported by grant DPI2005-09025-C02-02 from the Spanish government, "Cognitive Control Systems".
References
[1] Foner L.N. Political Artifacts and Personal Privacy: The Yenta Multi-Agent Distributed Matchmaking System. PhD Thesis, Massachusetts Institute of Technology, 1999.
[2] Ackerman M.S., Cranor L.F. and Reagle J. Privacy in e-commerce: Examining user scenarios and privacy preferences. In: ACM Conference on Electronic Commerce, 1999 (1-8).
[3] Singer I. Privacy and Human Nature. In: Ends and Means 5, No. 1, 2001.
[4] Cranor L.F. Privacy Tools. In: Helmut Baumler, Ed., E-Privacy: Datenschutz im Internet. Braunschweig/Wiesbaden: Vieweg & Sohn Verlagsgesellschaft, August 2000 (107-119).
[5] Godoy D. Learning User Interests for User Profiling in Personal Information Agents. PhD Thesis, Universidad Nacional del Centro de la Provincia de Buenos Aires, Facultad de Ciencias Exactas, 2005.
[6] Kobsa A. and Schreck J. Privacy through Pseudonymity in User-Adaptive Systems. In: ACM Transactions on Internet Technology 3 (2), 2003 (149-183).
[7] Teltzrow M. and Kobsa A. Impacts of User Privacy Preferences on Personalized Systems: a Comparative Study. In: C.-M. Karat, J. Blom and J. Karat, eds: Designing Personalized User Experiences for eCommerce. Dordrecht, Netherlands: Kluwer Academic Publishers, 2004.
[8] Rejman-Greene M. Final Report of the Roadmap Task. BIOVISION deliverable D2.6/Issue 1.1. Ipswich: BTExact, 2003.
[9] "Preferential Hiring." Philosophy and Public Affairs 2 (1973): 364-384. Reprinted in Moral Problems, James Rachels, ed. New York: Harper & Row, 1971.
[10] Ziegler C.-N. et al. Improving Recommendation Lists Through Topic Diversification. In: International World Wide Web Conference Committee (IW3C2), Chiba, Japan, May 2005.
[11] Nissenbaum H. Privacy as Contextual Integrity. In: Washington Law Review 79, No. 1, 2004 (101-139).
Author Index Aciar, S. 258 Agell, N. 19, 63 Aguilar, J. 80 Aguilar-Martin, J. v, 80, 137 Alvarez, J.A. 195 Álvarez-Bravo, J.V. 124 Álvarez-Sánchez, J.J. 124 Angulo, C. 19, 71 Armengol, E. 47 Aulinas, M. 291 Baldrich, R. 167 Baro, X. 9 Binefa, X. 157 Busquets, D. 275 Canals, A. 266 Carrillo, C.I. 239, 266, 283, 308 Castán, J.A. 275 Cerquides, J. 55 Comas, J. 291 Corominas, A. 187 de la Rosa, J.Ll. 114, 179, 239, 258, 266, 275, 283, 299, 308 Delfín, S.L. 239, 266, 283, 308 Diez-Lledo, E. 137 Drummond, I. 227 Escalera, S. 28 Escrig, M.T. 91, 103, 124, 203, 211, 219 Falomir, Z. 203 Ferraz, L. 157 Gaertner, D. 247 García, A. 187 Gibert, K. 37 González-Cabrera, F.J. 124 Graullera, D.A. 211, 219 Grieu, S. 147 Guzmán Obando, J. 114 Ibarra, S. 275, 308 Isaza, C. 80 Le Lann, M.V. 80
Llorens, E. 147 López Herrera, J. 258 López, B. v Martínez, B. 157 Martínez, E. 91 Meléndez, J. v, 227 Montaner, M. 179 Moreno, A. 239, 266, 283, 308 Moreno, S. 211, 219 Muñoz, V. 179 Muntaner, E. 239, 266, 283, 308 Noriega, P. 247 Onaindia, E. 195 Pacheco, J. 103 Pastor, R. 187 Perez-Bonilla, A. 37 Peris, J.C. 124, 203 Polit, M. v, 147 Prats, F. 63 Prim, M. 55 Puertas, E. 47 Pujol, O. 28 Quintero, C. 275 Radeva, P. 28 Ramón, J. 275 Rendón-Sallard, T. 291 Rios-Bolivar, A. 80 Rivas Echeverría, C. 3 Rivas Echeverría, F. 3 Rodriguez-Silva, G. 37 Roig, J. 55 Roselló, L. 63 Ruiz Ordóñez, R.U. 114 Ruiz, F. 19 Sabria, J. 55 Sales, T. 5 Sánchez, M. 63 Sànchez-Marrè, M. 291 Sandri, S. 227 Sebastia, L. 195
Sierra, C. 247 Soler, V. 55 Téllez, R.A. 71 Thiery, F. 147 Tous, F. 167 Vanrell, M. 167 Vazquez, E. 167 Vitria, J. 9