Advances in Intelligent and Soft Computing Editor-in-Chief: J. Kacprzyk
57
Advances in Intelligent and Soft Computing Editor-in-Chief Prof. Janusz Kacprzyk Systems Research Institute Polish Academy of Sciences ul. Newelska 6 01-447 Warsaw Poland E-mail:
[email protected] Further volumes of this series can be found on our homepage: springer.com Vol. 43. K.M. W˛egrzyn-Wolska, P.S. Szczepaniak (Eds.) Advances in Intelligent Web Mastering, 2007 ISBN 978-3-540-72574-9 Vol. 44. E. Corchado, J.M. Corchado, A. Abraham (Eds.) Innovations in Hybrid Intelligent Systems, 2007 ISBN 978-3-540-74971-4 Vol. 45. M. Kurzynski, E. Puchala, M. Wozniak, A. Zolnierek (Eds.) Computer Recognition Systems 2, 2007 ISBN 978-3-540-75174-8 Vol. 46. V.-N. Huynh, Y. Nakamori, H. Ono, J. Lawry, V. Kreinovich, H.T. Nguyen (Eds.) Interval / Probabilistic Uncertainty and Non-classical Logics, 2008 ISBN 978-3-540-77663-5 Vol. 47. E. Pietka, J. Kawa (Eds.) Information Technologies in Biomedicine, 2008 ISBN 978-3-540-68167-0 Vol. 48. D. Dubois, M. Asunción Lubiano, H. Prade, M. Ángeles Gil, P. Grzegorzewski, O. Hryniewicz (Eds.) Soft Methods for Handling Variability and Imprecision, 2008 ISBN 978-3-540-85026-7 Vol. 49. J.M. Corchado, F. de Paz, M.P. Rocha, F. Fernández Riverola (Eds.) 2nd International Workshop on Practical Applications of Computational Biology and Bioinformatics (IWPACBB 2008), 2009 ISBN 978-3-540-85860-7 Vol. 50. J.M. Corchado, S. Rodriguez, J. Llinas, J.M. Molina (Eds.) International Symposium on Distributed Computing and Artificial Intelligence 2008 (DCAI 2008), 2009 ISBN 978-3-540-85862-1
Vol. 51. J.M. Corchado, D.I. Tapia, J. Bravo (Eds.) 3rd Symposium of Ubiquitous Computing and Ambient Intelligence 2008, 2009 ISBN 978-3-540-85866-9 Vol. 52. E. Avineri, M. Köppen, K. Dahal, Y. Sunitiyoso, R. Roy (Eds.) Applications of Soft Computing, 2009 ISBN 978-3-540-88078-3 Vol. 53. E. Corchado, R. Zunino, P. Gastaldo, Á. Herrero (Eds.) Proceedings of the International Workshop on Computational Intelligence in Security for Information Systems CISIS 2008, 2009 ISBN 978-3-540-88180-3 Vol. 54. B.-y. Cao, C.-y. Zhang, T.-f. Li (Eds.) Fuzzy Information and Engineering, 2009 ISBN 978-3-540-88913-7 Vol. 55. Y. Demazeau, J. Pavón, J.M. Corchado, J. Bajo (Eds.) 7th International Conference on Practical Applications of Agents and Multi-Agent Systems (PAAMS 2009), 2009 ISBN 978-3-642-00486-5 Vol. 56. Hongwei Wang, Yi Shen, Tingwen Huang, Zhigang Zeng (Eds.) The Sixth International Symposium on Neural Networks (ISNN 2009), 2009 ISBN 978-3-642-01215-0 Vol. 57. M. Kurzynski, M. Wozniak (Eds.) Computer Recognition Systems 3, 2009 ISBN 978-3-540-93904-7
Marek Kurzynski, Michal Wozniak (Eds.)
Computer Recognition Systems 3
ABC
Editors Prof. Marek Kurzynski, Ph.D., D.Sc. Wroclaw University of Technology Wybrzeze Wyspianskiego 27 50-370 Wroclaw Poland E-mail:
[email protected] Michal Wozniak, Ph.D., D.Sc. Wroclaw University of Technology Wybrzeze Wyspianskiego 27 50-370 Wroclaw Poland E-mail:
[email protected]
ISBN 978-3-540-93904-7
e-ISBN 978-3-540-93905-4
DOI 10.1007/978-3-540-93905-4 Advances in Intelligent and Soft Computing
ISSN 1867-5662
Library of Congress Control Number: Applied for
© 2009 Springer-Verlag Berlin Heidelberg
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Typeset & Cover Design: Scientific Publishing Services Pvt. Ltd., Chennai, India. Printed in acid-free paper 543210 springer.com
Preface
We perform pattern recognition all the time in our daily lives, without always being aware of it. We firstly observe the world around us by using all our senses (we extract features from a large set of data). We subsequently perform pattern recognition by grouping together similar features and giving them a common label. We can identify similar, non-identical events or objects in an efficient way. We can, for example, recognise whether complete strangers are smiling at us or not. This is a computationally demanding task, yet is seemingly trivial for humans. We can easily understand the meaning of printed texts even if the letters belong to a font that is new to us, so long as the new font is “similar” to ones we already know. Yet making machines responsive to “similarity notions” can be singularly problematic. Recognition is strongly linked with prediction: distinguishing between a smile and an angry face may be critical to our immediate future action. The same principle applies to driving in heavy traffic or dealing with many social situations. The successful automation of recognition tasks is not only a major challenge, it is inextricably linked to the future of our modern world. Recognizing traffic flow and traffic behaviour (be it road traffic, air traffic or internet traffic) can lead to greater efficiency and safety in navigation generally. Recognizing biosignals (such as ECG or EMG) and diseases as efficiently as possible is critical for effective medical treatment. Modern warfare is not covered here, but its development in the 21st century will also depend critically on newer, faster, more robust recognition systems. This monograph is the third edition in the Springer Advances in Intelligent and Soft Computing series, documenting current progress in computer recognition systems. It is our intention to provide the reader with an informative selection of high-quality papers, representative of a wide spectrum of key themes characterizing contemporary research. We consider recognition problems in their widest possible sense, by addressing the needs of professionals working in the traditional fields of artificial intelligence and pattern
recognition theory as well as those working in highly allied ones, such as robotics and control theory. For that reason we have aimed to strike a healthy balance between introducing new theoretical ideas and emphasing the variety of possible, real-life applications: hence the inclusion of papers describing new learning schemes and others which focus on medical diagnostic aids, for example. The chosen papers were grouped according to key themes of current interest as follows: 1. Image Processing and Computer Vision, which contains papers focusing on image processing and the analysis of visual information; 2. Features, Learning, and Classifiers, which reports on new classification methods and new learning schemes; 3. Speech and Word Recognition, which contains papers on methods and applications of speech classification, as well as on the automated processing and analysis of texts; 4. Medical Applications, which presents the latest advances in computer aided medical diagnosis, together with reports on new classification methods and machine learning; 5. Miscellaneous Applications presents papers on alternative applications of image recognition algorithms. We believe our presentation of the subject matter provides a fine panoramic view of the subject matter as a whole without compromising on quality: 150 submissions were received in total by the publisher and almost 70 of them were carefully selected for publication in this edition, based upon the reviews of specialists working in the considered research areas. Quite simply, we think there is more than something here for everyone. Wroclaw, May 2009
Marek Kurzynski Michal Wozniak
Contents
Part I: Image Processing and Computer Vision
Augmenting Mobile Robot Geometric Map with Photometric Information (Piotr Skrzypczyński) . . . 3
Patchwork Neuro-fuzzy System with Hierarchical Domain Partition (Krzysztof Simiński) . . . 11
Dynamic Surface Reconstruction Method from Unorganized Point Cloud (Karolina Nurzyńska) . . . 19
Fusion of External Context and Patterns – Learning from Video Streams (Ewaryst Rafajlowicz) . . . 27
Computer Visual-Auditory Diagnosis of Speech Non-fluency (Mariusz Dzieńkowski, Wieslawa Kuniszyk-Jóźkowiak, Elżbieta Smolka, Waldemar Suszyński) . . . 35
Low-Cost Adaptive Edge-Based Single-Frame Superresolution (Zbigniew Świerczyński, Przemyslaw Rokita) . . . 43
Eye and Nostril Localization for Automatic Calibration of Facial Action Recognition System (Jaromir Przybylo) . . . 53
Grade Differentiation Measure of Images (Maria Grzegorek) . . . 63
3D Mesh Approximation Using Vector Quantization (Michal Romaszewski, Przemyslaw Glomb) . . . 71
Vehicles Recognition Using Fuzzy Descriptors of Image Segments (Bartlomiej Placzek) . . . 79
Cluster Analysis in Application to Quantitative Inspection of 3D Vascular Tree Images (Artur Klepaczko, Marek Kocinski, Andrzej Materka) . . . 87
The Comparison of Normal Bayes and SVM Classifiers in the Context of Face Shape Recognition (Adam Schmidt) . . . 95
Detection of Interest Points on 3D Data: Extending the Harris Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 Przemyslaw Glomb New Edge Detection Algorithm in Color Image Using Perception Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 Wojciech S. Mokrzycki, Marek A. Samko A Comparison Framework for Spectrogram Track Detection Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 Thomas A. Lampert, Simon E.M. O’Keefe Line Detection Methods for Spectrogram Images . . . . . . . . . . . . 127 Thomas A. Lampert, Simon E.M. O’Keefe, Nick E. Pears Morphological Analysis of Binary Scene in APR Integrated Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 Marek Kr´ otkiewicz, Krystian Wojtkiewicz Digital Analysis of 2D Code Images Based on Radon Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143 Rafal Tarlowski, Michal Chora´s Diagnostically Useful Video Content Extraction for Integrated Computer-Aided Bronchoscopy Examination System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 Rafal J´ o´zwiak, Artur Przelaskowski, Mariusz Duplaga Direct Filtering and Enhancement of Biomedical Images Based on Morphological Spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 Juliusz L. Kulikowski, Malgorzata Przytulska, Diana Wierzbicka
Part II: Features, Learning and Classifiers A Novel Self Organizing Map Which Utilizes Imposed Tree-Based Topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 C´esar A. Astudillo, John B. Oommen A New Feature Extraction Method Based on the Partial Least Squares Algorithm and Its Applications . . . . . . . . . . . . . . . 179 Pawel Blaszczyk, Katarzyna Stapor Data Noise Reduction in Neuro-fuzzy Systems . . . . . . . . . . . . . . . 187 Krzysztof Simi´ nski Enhanced Density Based Algorithm for Clustering Large Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195 Yasser El-Sonbaty, Hany Said New Variants of the SDF Classifier . . . . . . . . . . . . . . . . . . . . . . . . . . 205 Maciej Smiatacz, Witold Malina Time Series Prediction Using New Adaptive Kernel Estimators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213 Marcin Michalak The Sequential Reduction Algorithm for Nearest Neighbor Rule Based on Double Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221 Marcin Raniszewski Recognition of Solid Objects in Images Invariant to Conformal Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231 Boguslaw Cyganek A New Notion of Weakness in Classification Theory . . . . . . . . . 239 Igor T. Podolak, Adam Roman The Adaptive Fuzzy Meridian and Its Appliction to Fuzzy Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247 Tomasz Przybyla, Janusz Jezewski, Krzysztof Horoba Comparison of Various Feature Selection Methods in Application to Prototype Best Rules . . . . . . . . . . . . . . . . . . . . . . . . . 257 Marcin Blachnik A Novel Ensemble of Scale-Invariant Feature Maps . . . . . . . . . . 265 Bruno Baruque, Emilio Corchado Multivariate Decision Trees vs. Univariate Ones . . . . . . . . . . . . . 275 Mariusz Koziol, Michal Wozniak
On a New Measure of Classifier Competence in the Feature Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 Tomasz Woloszynski, Marek Kurzynski Dominance-Based Rough Set Approach Employed in Search of Authorial Invariants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293 Urszula Stanczyk Intuitionistic Fuzzy Observations in Local Optimal Hierarchical Classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303 Robert Burduk Electrostatic Field Classifier for Deficient Data . . . . . . . . . . . . . . 311 Marcin Budka, Bogdan Gabrys Pattern Recognition Driven by Domain Ontologies . . . . . . . . . . 319 Juliusz L. Kulikowski Part III: Speech and Word Recognition Cost-Efficient Cross-Lingual Adaptation of a Speech Recognition System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331 Zoraida Callejas, Jan Nouza, Petr Cerva, Ram´ on L´ opez-C´ ozar Pseudo Multi Parallel Branch HMM for Speaker Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339 Donato Impedovo, Mario Refice Artificial Neural Networks in the Disabled Speech Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347 ´ Izabela Swietlicka, Wieslawa Kuniszyk-J´ o´zkowiak, El˙zbieta Smolka Using Hierarchical Temporal Memory for Recognition of Signed Polish Words . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355 Tomasz Kapuscinski, Marian Wysocki Part IV: Medical Applications Strategies of Software Adaptation in Home Care Systems . . . . 365 Piotr Augustyniak Database Supported Fine Needle Biopsy Material Diagnosis Routine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373 Maciej Hrebie´ n, J´ ozef Korbicz
Multistrategic Classification System of Melanocytic Skin Lesions: Architecture and First Results . . . . . . . . . . . . . . . . . . . . . . 381 Pawel Cudek, Jerzy W. Grzymala-Busse, Zdzislaw S. Hippe Reliable Airway Tree Segmentation Based on Hole Closing in Bronchial Walls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389 Michal Postolski, Marcin Janaszewski, Anna Fabija´ nska, Laurent Babout, Michel Couprie, Mariusz Jedrzejczyk, Ludomir Stefa´ nczyk Analysis of Changes in Heart Ventricle Shape Using Contextual Potential Active Contours . . . . . . . . . . . . . . . . . . . . . . . 397 Arkadiusz Tomczyk, Cyprian Wolski, Piotr S. Szczepaniak, Arkadiusz Rotkiewicz Analysis of Variability of Isopotential Areas Features in Sequences of EEG Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407 Hanna Goszczy´ nska, Leszek Kowalczyk, Marek Doros, Krystyna Kolebska, Adam J´ o´zwik, Stanislaw Dec, Ewa Zalewska, Jan Miszczak Tumor Extraction From Multimodal MRI . . . . . . . . . . . . . . . . . . . 415 Moualhi Wafa, Ezzeddine Zagrouba Combined T1 and T2 MR Brain Segmentation . . . . . . . . . . . . . . 423 Rafal Henryk Kartaszy` nski, Pawel Mikolajczak The Preliminary Study of the EGG and HR Examinations . . . 431 Dariusz Komorowski, Stanislaw Pietraszek Electronic Records with Cardiovascular Monitoring System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439 Angelmar Constantino Roman, Hugo Bulegon, Silvio Bortoleto, Nelson Ebecken Stroke Slicer for CT-Based Automatic Detection of Acute Ischemia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447 Artur Przelaskowski, Grzegorz Ostrek, Katarzyna Sklinda, Jerzy Walecki, Rafal J´ o´zwiak Control of Bio-prosthetic Hand via Sequential Recognition of EMG Signals Using Rough Sets Theory . . . . . . . . . . . . . . . . . . . 455 Marek Kurzynski, Andrzej Zolnierek, Andrzej Wolczowski Hierarchic Approach in the Analysis of Tomographic Eye Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463 Robert Koprowski, Zygmunt Wrobel
Layers Recognition in Tomographic Eye Image Based on Random Contour Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471 Robert Koprowski, Zygmunt Wrobel Recognition of Neoplastic Changes in Digital Images of Exfoliated Nuclei of Urinary Bladder – A New Approach to Classification Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479 Annamonika Dulewicz, Adam J´ o´zwik, Pawel Jaszczak, Boguslaw D. Pi¸etka Dynamic Contour Detection of Heart Chambers in Ultrasound Images for Cardiac Diagnostics . . . . . . . . . . . . . . . . . . 489 Pawel Hoser Capillary Blood Vessel Tracking Using Polar Coordinates Based Model Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499 Mariusz Paradowski, Halina Kwasnicka, Krzysztof Borysewicz Part V: Miscellaneous Applications Application of Rough Sets in Combined Handwritten Words Classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509 Jerzy Sas, Andrzej Zolnierek An Adaptive Spell Checker Based on PS3M: Improving the Clusters of Replacement Words . . . . . . . . . . . . . . . . . . . . . . . . . . 519 Renato Cordeiro de Amorim Visual Design Aided by Specialized Agents . . . . . . . . . . . . . . . . . . 527 ´ Ewa Grabska, Gra˙zyna Slusarczyk Head Rotation Estimation Algorithm for Hand-Free Computer Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535 Rafal Kozik Detection of the Area Covered by Neural Stem Cells in Cultures Using Textural Segmentation and Morphological Watershed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543 Marcin Iwanowski, Anna Korzynska Decision Support System for Assessment of Enterprise Competence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559 Jan Andreasik The Polish Coins Denomination Counting by Using Oriented Circular Hough Transform . . . . . . . . . . . . . . . . . . . . . . . . . 569 Piotr Porwik, Krzysztof Wrobel, Rafal Doroz
Recognizing Anomalies/Intrusions in Heterogeneous Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577 Michal Chora´s, L ukasz Saganowski, Rafal Renk, Rafal Kozik, Witold Holubowicz Vehicle Detection Algorithm for FPGA Based Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585 Wieslaw Pamula Iris Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593 Ryszard S. Chora´s A Soft Computing System for Modelling the Manufacture of Steel Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601 Andres Bustillo, Javier Sedano, Leticia Curiel, Jos´e R. Villar, Emilio Corchado Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
Part I
Image Processing and Computer Vision
Augmenting Mobile Robot Geometric Map with Photometric Information
Piotr Skrzypczyński
Poznań University of Technology, Institute of Control and Information Engineering, ul. Piotrowo 3A, PL-60-965 Poznań, Poland
[email protected]
Summary. In this paper we discuss methods to increase the discriminative properties of the laser-based geometric features used in SLAM by employing monocular vision data. Vertical edges extracted from images make it possible to estimate the length of partially observed line segments. Salient visual features are represented as SIFT descriptors. These photometric features augment the 2D line segments extracted from the laser data and form a new feature type.
1 Introduction
An important task for mobile robots exploring unknown environments is Simultaneous Localization and Mapping (SLAM) – building an environment map from sensory data while concurrently computing the robot pose estimate. In real, cluttered environments robust data association is still a challenge in SLAM. So far, most SLAM systems employ 2D laser scanners. For successful data association in SLAM the robot should recognize its surroundings unambiguously, which in many environments is impossible relying solely on geometry and the 2D laser data, due to environment symmetries and the lack of distinguishable geometric features. Recently, vision has received much attention in SLAM research because of its ability to capture a rich description of the environment, including both geometric and photometric information. Cameras enable the robot to recognize a large variety of visually salient photometric features even in environments where distinctive geometric features are unavailable. However, passive visual sensing, particularly with a single (monocular) camera, has many limitations related to occlusions, shadows, specular reflections, etc. [3]. Some researchers exploit the idea of combining laser and vision data in SLAM. Within the context of EKF-based SLAM a multi-sensor system consisting of a 2D laser scanner and a camera has been employed in [2]. The authors of [7] use a combination of the feature-based and appearance-based approaches to SLAM, employing both the laser and monocular vision data to solve the robot first-location problem. The work by Newman and Ho [6] shows how monocular vision data can be used for loop closing in SLAM.
The main idea of this work is to use the laser data to estimate the location of the geometric features (line segments) and use the vision data to make these features more distinguishable in the data association process. We use the results of our previous research on mapping and SLAM with range sensors [8, 9] to extract the line segments from the laser scanner data. Then, from image data we add to these line segments reliable geometric constraints on their length. These constraints and the assumption that segments are parts of planar vertical surfaces allow to convert the segments into semiplanes and embed into them descriptors of salient photometric features. Scale Invariant Feature Transform (SIFT) descriptors have been chosen to efficiently store the information on photometric features. A segment which is constrained by credible edges confirmed by image data, and is augmented with SIFT descriptors of the salient features detected within the area of a semiplane defined by this segment becomes a Perceptually Rich Segment – a new feature type we introduce.
2 Improving Segments with Vision Data
2.1 Extracting Geometric Features from Raw Sensory Data
Reliable extraction of line segments from the noisy clusters of laser scanner points is accomplished by using the method described in [9]. The extracted segment F is represented with regard to the robot frame R by the feature vector LRF , according to the SPmodel framework [2]. Initially, the uncertainty of the segment length is not determined, due to the low credibility of endpoints extracted solely from the laser scan. The line segments obtained from laser data are further structured into polylines. This allows a straightforward detection of corners (either convex or concave) which are points of intersections of two consecutive segments. The covariance matrix of a corner is computed by propagating uncertainty [2] from parameters of the two crossing segments. The detected corners and the segment endpoints are then considered as candidates for vertical edges trimming the segments, if they are confirmed by the photometric information. To detect vertical lines, which are ubiquitous visual features in many indoor environments, the vertical edges in the image are enhanced by using a Sobel filter approximating the image gradient in the horizontal direction. Then, hysteresis thresholding is applied to obtain a binary image. The resulting edges are thinned, and the horizontal position of each edge pixel having the value of ’1’ (the background is set to ’0’) is corrected upon the camera calibration results. Because we look only for vertical lines in the binary images, the line fitting is reduced to an one-dimensional problem, which can be solved by using an one-dimensional version of the Hough transform. Each vertical edge extracted from an image is represented initially by an angle related to the camera coordinate system:
φ_v = arctan((x_f − x_c) / f),     (1)
where x_f is the x-coordinate of the extracted edge in the image, x_c is the x-coordinate of the camera's center of distortion, and f is the focal length (all values in pixels). Uncertainty of the vision angle σ_φv is computed by propagating the uncertainty of edge detection and camera calibration parameters obtained with a standard calibration technique [4]. Although the position of the edge is computed with sub-pixel resolution, we conservatively set the standard deviation of this measurement to the size of a pixel on the CCD matrix.
Fig. 1. Confirmed vertical edges (A,B), and photometric edges (C)
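As a concrete illustration of (1), the sketch below converts a detected edge column into a vision angle and propagates the one-pixel measurement uncertainty to the first order; the calibration values used here are placeholders, not values from this work.

import numpy as np

def vision_angle(x_f, x_c, f):
    """Bearing of a vertical edge, Eq. (1); all arguments in pixels."""
    return np.arctan2(x_f - x_c, f)

def vision_angle_sigma(x_f, x_c, f, sigma_x=1.0):
    """First-order propagation of the (conservative) one-pixel
    edge-position uncertainty through Eq. (1)."""
    d = (x_f - x_c) / f
    dphi_dx = 1.0 / (f * (1.0 + d * d))   # derivative of arctan((x_f - x_c)/f) w.r.t. x_f
    return abs(dphi_dx) * sigma_x

# hypothetical calibration values, for illustration only
x_c, f = 320.0, 600.0
phi_v = vision_angle(410.0, x_c, f)
sigma_phi = vision_angle_sigma(410.0, x_c, f)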
2.2 Integrating Vertical Edges with Segments
A vertical edge extracted from the camera image is represented by its vision angle φ_v with regard to (w.r.t.) the camera coordinate system. In order to establish correspondence between the features obtained from both sensors the vision angles are transformed to the scanner coordinates. The i-th vision angle φ_vi is converted to the angle φ_vi^L relative to the laser scanner coordinate system by using the coordinate transformation between the camera and the scanner. This transformation was obtained by using a calibration technique proposed in [2]. Because both sensors involved are parallel to the floor plane, only the 2D translation t_LC = [x_LC y_LC]^T and rotation φ_LC have been computed. Therefore, photometric features can be represented in the reference frame of the laser scanner. Geometric information on possible vertical edges is extracted from the segment-based local map. These edges are represented as angles in the scanner coordinate system. The observation angle of the j-th edge extracted from the local segment-based map is given as

φ_lj = arctan(x_j / y_j),     (2)

where [x_j y_j]^T are the edge coordinates w.r.t. the scanner reference frame. A squared Mahalanobis distance test is performed on the observation angles of all pairs of features from both sensors to find the geometric feature corresponding to the photometric observation. This test takes into account the
variances σ²_φv and σ²_φl computed by propagating the uncertainty from the particular sensor model [4]. If multiple geometric features satisfy the test, the one that is closest to the sensor is chosen to avoid false pairings resulting from occlusions (Fig. 1A – false match marked with arrow). A geometric vertical edge whose observation angle φ_lj corresponds to the vision angle φ_vi^L representing a photometric edge is considered as confirmed by image data. Its position uncertainty is reduced by fusing the independent laser and vision data with a standard EKF algorithm. A confirmed vertical edge can be used to terminate (constrain) the 2D line segments that are associated to this vertical edge in the local map (Fig. 1B – constrained segments are marked with arrows). Two boolean variables e_l and e_r are added to the feature vector L_F^R that indicate whether the segment is constrained by confirmed vertical edges (left and right, respectively). Segments constrained by confirmed vertical edges at both ends are converted to semiplanes. Some photometric edges do not have corresponding features in the laser-based map. Although such edges do not terminate the segments, they are useful in SLAM, and should be included in the map [1]. Unfortunately, information provided by monocular vision is insufficient to determine the range to the detected vertical edges from a single image. Thus, we determine from an image only the bearing φ_v, while the range to the vertical edge is then estimated using the laser data. We are interested only in photometric edges that are reliable enough to be used in SLAM: they have to be longer than 160 pixels and located on a planar vertical surface defined by some laser-based segment. The location of a vertical edge is estimated as the intersection point of a virtual line defined by the center of the coordinate system and the i-th vision angle φ_vi^L, and the supporting line of the first segment crossed by this virtual line (Fig. 1C). The location uncertainty of a photometric edge is computed by propagating the covariance of the vision angle σ²_φv and the uncertainty (covariance matrix) C_F of the segment involved in the intersection.
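The following sketch illustrates the association step described above under simple assumptions (independent angle variances and a chi-square gate); the gate value, data layout and function names are illustrative and not part of the original system.

import numpy as np

CHI2_1DOF_95 = 3.84  # assumed 95% gate for one degree of freedom

def associate_edge(phi_vision, var_vision, map_edges):
    """Pair a photometric edge (bearing in the scanner frame) with a geometric
    edge candidate passing the squared Mahalanobis distance test.
    map_edges: list of dicts with keys 'phi', 'var' and 'range' (distance to sensor)."""
    accepted = []
    for j, edge in enumerate(map_edges):
        innovation = phi_vision - edge['phi']
        d2 = innovation ** 2 / (var_vision + edge['var'])
        if d2 < CHI2_1DOF_95:
            accepted.append((edge['range'], j))
    if not accepted:
        return None
    # prefer the candidate closest to the sensor to avoid pairings caused by occlusions
    return min(accepted)[1]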
3 Augmenting Segments with Salient Features
3.1 Perceptually Rich Segments
Once the laser-based 2D segments have been converted into semiplanes representing planar vertical surfaces, they can be used as frames for rich photometric information, which however, defies geometric interpretation. Such photometric information can give the segments more distinctiveness, and enable robust place recognition in SLAM. In order to efficiently embed the photometric information into the segments we need a method for extracting distinctive features from images that can be used to perform reliable matching between different views of a scene. The detected image features should be robust to variations in viewpoint, translation, scale, and illumination. A feature extraction method satisfying these conditions is the Scale Invariant Feature Transform.
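In practice such features can be obtained with an off-the-shelf detector; the fragment below is a minimal sketch using OpenCV's SIFT implementation (assuming a build where cv2.SIFT_create is available) and a placeholder image file name.

import cv2

image = cv2.imread("corridor_view.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)
# each descriptor is a 128-dimensional vector; keypoints carry position, scale and orientation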
The SIFT algorithm [5] detects interest keypoints, which are local extrema of the Difference-of-Gaussian images through location and scale space. Then, a SIFT descriptor is created by computing the gradient magnitude and orientation at each keypoint in a region around this point location. Dominant orientations are determined by accumulating the sampled information into orientation histograms over 4 × 4 local image regions. The descriptor is a 4 × 4 array of histograms, each with 8 orientation bins, which results in a 128-dimensional feature vector [5]. Figure 2 shows examples of SIFT descriptors detected on images taken in a typical office-like environment. The descriptors are represented by arrows, with the orientation of each arrow indicating the orientation of a descriptor and the length proportional to scale. Note that SIFT keypoints are detected mainly at visually salient regions of images, such as the posters and door plates (Fig. 2A and C) or fire extinguishers (Fig. 2B). The SIFT keypoints found in an image are projected onto the vertical semiplanes extracted from the corresponding laser scan by using projective geometry and the camera-scanner calibration data. This is done under the assumption that the photometric features represented by SIFT vectors are located on approximately flat vertical surfaces. Because the sensors provide no reliable information about the height of the vertical semiplanes, we set this height to a predefined value. The keypoints located too close to the semiplane borders are not taken into account, because they are very likely produced by long edges (e.g. between a wall and the floor) or shadows (e.g. in a corner). A data structure consisting of a 2D segment limited in length by photometry-confirmed vertical edges and a set of SIFT descriptors located inside the rectangular area created by this segment and the edges is considered a new feature type: Perceptually Rich Segment (PRS).
Fig. 2. Examples of the SIFT features detected in different locations
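A possible in-memory representation of such a structure is sketched below as a plain Python data class; the field names and layout are an assumption made for illustration, not the authors' implementation.

from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class PerceptuallyRichSegment:
    """2D segment terminated by photometry-confirmed vertical edges,
    augmented with SIFT descriptors lying on its vertical semiplane."""
    p_start: Tuple[float, float]          # segment endpoints in the map frame
    p_end: Tuple[float, float]
    left_edge_confirmed: bool = False     # the boolean flags e_l and e_r
    right_edge_confirmed: bool = False
    descriptors: np.ndarray = field(default_factory=lambda: np.empty((0, 128)))
    keypoint_uv: List[Tuple[float, float]] = field(default_factory=list)  # positions on the semiplane

    @property
    def is_semiplane(self) -> bool:
        return self.left_edge_confirmed and self.right_edge_confirmed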
The local maps resulting from augmenting the laser segments with the SIFTs shown in Fig. 2A and 2B are depicted in Fig. 2E and 2F, respectively. These local representations consist of "plain" segments (light grey) and PRS features (dark grey). Note that the embedded SIFT keypoints clearly represent visually salient features in these scenes, particularly the poster on the doors, the two landmarks attached to the wall and the fire extinguishers.
3.2 Matching of Segments Based on Salient Features
Although the PRS features are much more distinctive than ordinary 2D segments, an efficient SIFT matching procedure is required to employ the photometric information for feature matching in SLAM. In [5] images are matched by individually comparing each SIFT feature descriptor from the considered image to a database of descriptors (from other images) and finding candidate matching descriptors based on nearest neighbor criteria. The nearest neighbor is defined as the descriptor s_j from the database with minimum Euclidean distance d_ij to the considered descriptor vector s_i:

d_ij = √( Σ_{n=1}^{128} (s_i[n] − s_j[n])² ),     (3)
where n is the dimension index. In the case under study there is no database; SIFT descriptors from two PRS features are compared to judge if they represent a matching pair of local scenes. A PRS extracted from the current pair of observations (laser scan and camera image) is treated as the query feature, and its descriptors are compared to the descriptors from other PRS features, already stored in the map of the SLAM system. For each SIFT descriptor s_i in the query PRS the distances d_ij to all the s_j (j = 1 . . . m) descriptors in the candidate PRS are computed and sorted in an ascending order d_i1 . . . d_im. Then, the distance of the closest neighbor d_i1 is compared to that of the second-closest neighbor d_i2. If the difference between these distances is smaller than a given threshold, the vectors are considered as matching. For each candidate PRS the number of matching descriptors is computed. The candidate feature that scores the highest number of matchings is considered most similar to the query PRS. If we use only the Euclidean distance for descriptor matching, false matchings occur frequently, mostly due to large changes in scale and background clutter (Fig. 3A and C – false matchings marked with arrows). In order to increase matching reliability we use similarity of SIFT vector orientation as an additional matching criterion. If the difference in orientation of two compared vectors is bigger than 20°, then the matching result is considered negative regardless of the distance d_ij. Although this additional criterion makes it impossible to match SIFT features extracted from two images that are rotated, in the robotic application under study it helps to eliminate false matchings (Fig. 3B and D), because the images acquired by the mobile robot
Fig. 3. SIFT matching – without (A,C) and with (B,D,E) the angle criteria
with the camera fixed to its body and parallel to the floor plane are free from rotation variations. Figure 3E depicts the PRS features with matching keypoints (black dots) resulting from the pair of views shown in Fig. 3B.
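A sketch of the matching rule described above (comparison of the nearest and second-nearest distances plus the 20° orientation gate) is given below; the distance-difference threshold is an assumed placeholder, the keypoint orientations are assumed to accompany the descriptors, and the comparison follows the text literally.

import numpy as np

def count_matches(query_desc, query_angles, cand_desc, cand_angles,
                  dist_margin=100.0, max_angle_diff=20.0):
    """Count query descriptors that match a candidate PRS."""
    matches = 0
    for s_i, a_i in zip(query_desc, query_angles):
        d = np.linalg.norm(cand_desc - s_i, axis=1)   # Euclidean distances, Eq. (3)
        order = np.argsort(d)
        if len(order) < 2:
            continue
        j1, j2 = order[0], order[1]
        closest, second = d[j1], d[j2]
        # wrapped difference of keypoint orientations in degrees
        angle_diff = abs((a_i - cand_angles[j1] + 180.0) % 360.0 - 180.0)
        if (second - closest) < dist_margin and angle_diff <= max_angle_diff:
            matches += 1
    return matches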
4 Experimental Results
An experiment in an unmodified corridor environment has been performed in order to investigate how the proposed PRS features improve the quality of the segment-based global map, and to verify the SIFT-matching method in a realistic scenario. A Labmate robot equipped with a Sick LMS 200 laser scanner and a fixed CCD camera providing 640 × 480 images has been used. In this experiment the robot was teleoperated, and 6 images have been taken by the on-board camera in the most feature-rich parts of the environment (Fig. 4C). The second and the sixth images show approximately the same part of the scene, but from a significantly different point of view. Results obtained with an EKF-based SLAM system using both the laser and camera data, and employing the PRS features, are shown in Fig. 4A. The map contains segment features (light grey) and PRS features (dark grey). Thick lines are the vertical edges verified by the photometric data and then used to terminate the segments. The vision-verified vertical edges contributed to the map consistency by correctly terminating the segments, thus enabling precise modelling of the environment geometry. Detected SIFTs are shown
Fig. 4. Fragment of the corridor mapped by a SLAM system using the PRS features
as black points. Those SIFTs that have been matched between two different PRS features are shown as dark grey points – the system correctly identified the matching pair of perceptually rich segments. The inset image (Fig. 4B) shows the two matching scene views with the matching SIFT keypoints.
5 Conclusions
Concepts and results concerning the use of photometric features to increase the discriminative properties of laser-based line segments used in indoor SLAM have been presented. The paper makes the following contributions: (i) effective integration of all the monocular vision data with the segment-based map; (ii) the concept of Perceptually Rich Segments, which are highly discriminative due to the embedded SIFT keypoints, yet precise and easy to maintain in the EKF-based SLAM map due to their geometric frames; (iii) a SIFT matching method that takes into account the application domain of the proposed system and enables reduction of false pairings.
References 1. Arras, K.O., Tomatis, N., Jensen, B.T., Siegwart, R.: Multisensor On-the-Fly Localization: Precision and Reliability for Applications. Robotics and Autonomous Systems 34, 131–143 (2001) 2. Castellanos, J.A., Tard´ os, J.D.: Mobile Robot Localization and Map Building. A Multisensor Fusion Approach. Kluwer, Boston (1999) 3. Davison, A., Reid, I., Molton, N., Stasse, O.: MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans. on Pattern Anal. and Machine Intell. 29(6), 1052– 1067 (2007) 4. Haralick, R.M.: Propagating Covariance in Computer Vision. Int. Journal Pattern Recog. and Artif. Intell. 10, 561–572 (1996) 5. Lowe, D.G.: Distinctive Image Features from Scale-Invariant Keypoints. Int. Journal of Computer Vision 60(2), 91–110 (2004) 6. Newman, P., Ho, K.: SLAM-Loop Closing with Visually Salient Features. In: Proc. IEEE Int. Conf. on Robot. and Autom., Barcelona, pp. 644–651 (2005) 7. Ortin, D., Neira, J., Montiel, J.M.M.: Relocation Using Laser and Vision. In: Proc. IEEE Int. Conf. on Robot. and Autom., New Orleans, pp. 1505–1510 (2004) 8. Skrzypczy´ nski, P.: Merging Probabilistic and Fuzzy Frameworks for Uncertain Spatial Knowledge Modelling. In: Kurzy´ nski, M., et al. (eds.) Computer Recognition Systems, pp. 435–442. Springer, Berlin (2005) 9. Skrzypczy´ nski, P.: Spatial Uncertainty Management for Simultaneous Localization and Mapping. In: Proc. IEEE Int. Conf. on Robot. and Autom., Rome, pp. 4050–4055 (2007)
Patchwork Neuro-fuzzy System with Hierarchical Domain Partition
Krzysztof Simiński
Institute of Informatics, Silesian University of Technology, ul. Akademicka 16, 44-100 Gliwice, Poland
[email protected]
Summary. The paper presents the patchwork hierarchical domain partition in the neuro-fuzzy system with parameterized consequences. The hierarchical domain partition has the advantages of grid partition and clustering. It avoids the curse of dimensionality and reduces the occurrence of areas with low membership to all regions. The paper depicts the iterative hybrid procedure of hierarchical split. The splitting procedure estimates the best way of creating of the new region: (1) based on finding and splitting the region with the highest contribution to the error of the system or (2) creation of patch region for the highest error area. The paper presents the results of experiments on real life and synthetic datasets. This approach can produce neuro-fuzzy inference systems with better generalisation ability and subsequently lower error rate.
1 Introduction
Neuro-fuzzy systems combine the fuzzy approach with the ability to extract and generalise knowledge from presented data. Many systems have been proposed, such as ANFIS [7], LOLIMOT [13], and ANNFIS and ANLIR – systems with parametrized consequences [6,9,10,8]. The crucial part of a neuro-fuzzy system is the fuzzy rule base. The premises of the rules split the input domain into regions. The common way of creating rules has two stages: first the input domain is split into regions (determining the rules' premises) and then the consequences of the rules are calculated. The methods of splitting can be organised into three essential classes: clustering, grid split and hierarchical split [8]. The advantage of the grid split is that it leaves no areas with very low membership to all regions. Its most important disadvantage is the exponential growth of the number of regions with the growth of the task's dimensionality – the curse of dimensionality. A further drawback of this partition method is the presence of redundant regions that contain no or very few objects; these regions have to be removed from the system. Clustering avoids the curse of dimensionality. The problems of this approach are the determination of the cluster number and the areas with very low membership to all regions (clusters). Unseen cases may fall into these areas, which may cause poor results elaborated by the system for these cases.
The hierarchical approach to domain split can be shortly depicted as system that splits the domain into two parts, decides whether further partition is required and if so – continues splitting. The hierarchical approach joins the advantages of clustering and grid split: reduces the low membership regions, lacks the curse of dimensionality and alleviates the determination of region number. Some attempts have been made to apply the hierarchical domain in neuro-fuzzy systems (LOLIMOT [13], binary space partitioning [22, 23, 1] – regions are always split into two equal subregions). In [20] HS-47 – the neurofuzzy inference system with parametrized consequences and hierarchical input domain partition is presented. This system is not limited to splitting regions into two equal subregions. This paper presents the further development of this system. Having split the input domain the areas of increased error may occur. For these areas the local patches may be proposed. The proposed system splits hierarchically the input domain but also applies the local patches to reduce the total system error. In comparison with HS-47 the presented system uses also a modified splitting procedure.
2 Fuzzy Inference System with Parameterized Consequences
The neuro-fuzzy system with parameterized consequences [6,9,10] is a system combining the Mamdani-Assilan [11] and Takagi-Sugeno-Kang [25, 24] approaches. The fuzzy sets in the consequences are isosceles triangles (as in the Mamdani-Assilan system), but they are not fixed – their location is calculated as a linear combination of attribute values, like the localisation of the singletons in the Takagi-Sugeno-Kang system. The system with parameterized consequences is a MISO system. The rule base contains fuzzy rules in the form of fuzzy implications

R^(i): X is A^(i) ⇒ Y is B^(i)(θ),     (1)

where X = [x_1, x_2, . . . , x_N]^T and Y are linguistic variables, A and B are fuzzy linguistic terms (values) and θ is the parameter vector of the consequence linguistic term. The linguistic variable A_i (A for the i-th attribute) is described with the Gaussian membership function

μ_{A_i}(x_i) = exp( −(x_i − c_i)² / (2 s_i²) ),     (2)

where c_i is the core location for the i-th attribute and s_i is this attribute's Gaussian bell deviation. Each region in the domain is represented by a linguistic variable A. The term B is represented by an isosceles triangle with the base width w; the altitude of the triangle in question is equal to the firing strength of the i-th rule:

F^(i)(X) = μ_{A^(i)}(X) = μ_{A_1^(i)}(x_1) T · · · T μ_{A_N^(i)}(x_N),     (3)

where T denotes the T-norm and N stands for the number of attributes. The localisation of the core of the triangle membership function is determined by a linear combination of the input attribute values:

y^(i) = θ_i^T · [1, X^T]^T = [a_0^(i), a_1^(i), . . . , a_N^(i)] · [1, x_1, . . . , x_N]^T.     (4)

The fuzzy output of the system can be written as

μ_B(y, X) = ⊕_{i=1}^{I} Ψ( μ_{A^(i)}(X), μ_{B^(i)}(y, X) ),     (5)

where ⊕ denotes the aggregation, Ψ – the fuzzy implication, i – the rule's index and I – the number of rules. The crisp output of the system is calculated using the MICOG method:

y = [ Σ_{i=1}^{I} g^(i)(X) y^(i)(X) ] / [ Σ_{i=1}^{I} g^(i)(X) ],     (6)

where y^(i)(X) stands for the location of the center of gravity of the consequent fuzzy set, F^(i) – the firing strength of the i-th rule, w^(i) – the width of the base of the isosceles triangle consequence function of the i-th rule. The function g depends on the fuzzy implication; in this system the Reichenbach one is used, so for the i-th rule the function g is

g^(i)(X) = (w^(i) / 2) F^(i)(X).     (7)
(7)
3 Hierarchical Domain Split with Patches The proposed system is a hybrid one that comprises the hierarchical input domain partition with creating patch rules for areas with highest error. The algorithm is presented in Fig. 1. The regions the input domain is split into are defined with the Eq. 2. The first action of the partitioning procedure is the creation of the initial region. The values of c = [x1 , x2 , . . . , xN ] and s = [s1 , s2 , . . . , sN ] are calculated as follows. The centre of the region is calculated as a mean attribute values ci = x ¯i of all data examples. Similarly the fuzzification parameter s is calculated as standard deviation of attributes. Having created the first region it is tuned. The tuning of the region aims at better evaluation of regions parameters and the elaboration of consequences. The parameters of the rules (the rules premises, the widths of the bases of the isosceles triangle sets in consequences) are tuned by means of the gradient method. The linear coefficients θ of the localisation of the triangle sets in consequences are calculated with the least square method.
14
K. Simi´ nski
procedure P = Partition (data) stop-criterion = false P =∅ {no regions} P.add (Create-Initial-Region) {first region created} P.tune {premise and consequent tuned} error = Calculate-Error (P, data) {root square mean error} while not stop-criterion P-split = P P-patch = P {attempt at splitting the worst region:} worst = Find-the-worst-region (P-split, error ) [subregion-1, subregion-2 ] = split (worst, error ) P-split.add (subregion-1 ) {both subregions added} P-split.add (subregion-2 ) P-split.tune {all rules tuned} split-error = calculate-error (P-split, data) {attempt at adding patch:} P-patch.add (create-patch(P, error )) {patch created and added} P-patch.tune {all rules tuned} patch-error = calculate-error (P-patch, data) if (patch-error < split-error ) {best method selected} P = P-patch; error = patch-error else P = P-split; error = split-error end if stop-criterion = Check-stop-condition end while return P {fuzzy rule base created} end procedure Fig. 1. Algorithm of patchwork hierarchical input domain partition
Next the global root square mean error (RSME) for training data is evaluated. Now the iterative splitting of regions starts. In each iteration there is an attempt at splitting the worst region and at applying the patch. Splitting always begins with search for worst region – the one with the highest root mean square error value ei for all N tuples calculated as N 1
2 g (i) (X n ) y (i) (X n ) − Yn , ei = N n=1 where Yn is the original, expected value for the nth tuple. This region has the highest contribution to the global error of the system. The found region is then split into two subregions. In the papers on the hierarchical split of domain in data mining there are described many methods using various criteria (eg. Gini index [4], entropy [18, 26], impurity, information gain [12, 15], twoing rule [4], histogram [2], variance of the decision
Patchwork Neuro-fuzzy System with Hierarchical Domain Partition
15
attribute [16,17]). These methods are used for obtaining the crisp split of the domain. The fuzzy splitting of the domain is not very widely discussed in papers. Often the region is split by half [1, 14]. In the proposed approach the region is split into two regions by fuzzy C-means clustering algorithm. The error for each tuple determined after the tuning is added as an additional attribute for clustering. The output parameter is also used in clustering. Thus the clustered domain can be written symbolically as X × E × Y, where X denotes the attribute space, E – the error space and Y – the output attribute space. After splitting the regions are tuned and the global error is computed. Not always the most optimal procedure is splitting of the worst error, sometimes it is better to create new patch region instead of splitting. The patch is also a region described with centre c and s. These values are calculated similarly as the initial region, but here each example has a weight – the normalised to the interval [0, 1] value of error for this example calculated in the previous iteration. Having added the patch, all regions are tuned and the global error is computed. Now there is only one thing left before ending the iteration. The errors for splitting and patch are compared and it is determined whether it is better to create new region by splitting one of the regions into two subregions or to add a patch region. Some attempts have been made to find a efficient and robust heuristics for deciding whether to split the worst region or to add the patch. Many approaches have been tested but none of them resulted in any reasonable heuristics. The stop criterion is set true if the RSME for test data starts to increase (the knowledge generalisation ability declines).
4 Experiments The experiments were performed using the synthetic (‘Hang’, ‘leukocyte concentration’) and real life (‘methane concentration’, ‘wheat prices’, ‘Cardiff meteorological data’, ‘Boston housing’, ‘Warsaw Stock Data’, ‘gas-furnace’) data sets. For brevity of the paper we do not provide the precise description of the datasets. The descriptions of the datasets can be found following papers: ‘Hang’ [19], ‘leukocyte concentration’ [8], ‘methane concentration’, ‘wheat prices’, ‘noisy polynomial’ and ‘Cardiff meteorological data’ [21], ‘Boston housing’ [20], ‘gas-furnace’ [3]. The ‘Warsaw Stock Data’ dataset is used for the first time in this paper. The dataset is prepared from the close value of Warsaw Stock Exchange Index for largest 20 companies (WIG-20). The raw data1 comprise the Index from 2000-01-01 till 2000-12-31. The tuples are prepared according to the template: [w(t) − w(t − 10), w(t) − w(t − 5), w(t) − w(t − 2), w(t) − w(t − 1), w(t − 1), w(t)]. The tuples are divided into train dataset (tuples: 1 – 100) and test dataset (tuples: 101 – 240). 1
http://stooq,com/q/d/?s=wig20&c=0&d1=20000101&d2=20001231
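A sketch of how the training tuples described above could be assembled from the closing-price series; loading the raw index values is assumed and left out.

import numpy as np

def build_tuples(w):
    """Build rows [w(t)-w(t-10), w(t)-w(t-5), w(t)-w(t-2), w(t)-w(t-1), w(t-1), w(t)]."""
    w = np.asarray(w, dtype=float)
    rows = []
    for t in range(10, len(w)):
        rows.append([w[t] - w[t - 10], w[t] - w[t - 5], w[t] - w[t - 2],
                     w[t] - w[t - 1], w[t - 1], w[t]])
    return np.array(rows)

# closing values of the WIG-20 index would be loaded here (assumed)
# tuples = build_tuples(closing_values)
# train, test = tuples[:100], tuples[100:240]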
16
K. Simi´ nski
Table 1. Comparison of knowledge generalisation ability for ANNBFIS [6], HS47 [20] and proposed system HS-65. Abbreviations used in the table: ‘#r’ – number of rules, ‘RSME’ – root square mean error. dataset methane concentration noisy polynomial wheat prices Cardiff meteo. data Hang Boston housing leukocyte concentration WIG-20
ANNBFIS HS-47 HS-65 #r RSME #r RSME #r RSME 2 4 4 2 34 4 35 3
0.311201 1.406299 0.662019 0.378094 0.002426 3.058712 0.000666 0.004353
1 10 1 2 37 3 27 2
0.301229 0.456760 0.598857 0.354027 0.000323 3.078690 0.000847 0.005479
1 5 1 2 31 5 31 4
0.301229 1.236798 0.598857 0.346821 0.001501 2.379220 0.000659 0.003463
Table 2. Comparison of data approximation ability for ‘gas furnace’ data set. If not cited otherwise the result have been taken from [6, 5]. Abbreviation ‘impl.’ stands for ‘implication’. author/system Tong Xu-Lu Pedrycz Box-Jenkins Sugeno-Yasukawa Chen Lin-Cunningham Sugeno-Tanaka Wang-Langari Zikidis-Vasilakos Kim-Park-Ji Kim-Park ANLIR (G¨ odel impl.) ANNBFIS ANNBFIS HS-47 ANLIR (Fodor impl.) HS-47 Czekalski [5] proposed system HS-65 proposed system HS-65
# rules RMSE 19 25 81 1 6 3 4 2 2 6 2 2 2 3 6 6 6 8 8 6 8
0.6848 0.5727 0.5656 0.4494 0.4348 0.2678 0.2664 0.2607 0.2569 0.2530 0.2345 0.2190 0.1892 0.1791 0.1537 0.1455 0.1353 0.1344 0.1280 0.1247 0.1044
Table 1 presents the results of knowledge generalisation elaborated by ANNBFIS, system with clustering used for input domain partition [6, 9, 10] and two systems with hierarchical domain partition: HS-47 [20] and proposed
17
system HS-56. The performance of the systems is determined by the root mean square error for the test sets. Table 2 gathers the results of data approximation for the ‘gas furnace’ data achieved by the researchers who used this benchmark data set.
5 Summary The hierarchical domain split in neuro-fuzzy systems comprises the advantages of grid partition and clustering. This domain partition method avoids the curse of dimensionality and reduces the occurrence of areas with low membership to all regions. The split of regions into two subregions in the proposed system is based on the fuzzy clustering, resulting in both splitting and fuzzyfication of the subregions. Both decisive and error values are taken into consideration in splitting the regions. The algorithm enabling adding the patch region for areas of increased error can produce neuro-fuzzy inference systems with better generalisation ability and subsequently lower error rate.
References 1. Roberto, M., Almeida, A.: Sistema h´ıbrido neuro-fuzzy-gen´etico para minera¸ca ˜o autom´ atica de dados. Master’s thesis, Pontif´ıca Universidade Cat´ olica do Rio de Janeiro (2004) 2. Basak, J., Krishnapuram, R.: Interpretable hierarchical clustering by constructing an unsupervised decision tree. IEEE Transactions on Knowledge and Data Engineering 17(1), 121–132 (2005) 3. Edward, G., Box, P., Jenkins, G.: Time Series Analysis, Forecasting and Control. Holden-Day, Incorporated (1976) 4. Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and Regression Trees. Wadsworth, Belmont (1984) 5. Czekalski, P.: Evolution-fuzzy rule based system with parameterized consequences. International Journal of Applied Mathematics and Computer Science 16(3), 373–385 (2006) 6. Czogala, E., L ¸eski, J.: Fuzzy and Neuro-Fuzzy Intelligent Systems. Series in Fuzziness and Soft Computing. Physica-Verlag, A Springer-Verlag Company (2000) 7. Jang, J.-S.R.: ANFIS: Adaptive-Network-Based Fuzzy Inference System. IEEE Transactions on Systems, Man, and Cybernetics 23, 665–684 (1993) 8. L ¸eski, J.: Systemy neuronowo-rozmyte. Wydawnictwa Naukowo-Techniczne, Warszawa (2008) 9. L ¸eski, J., Czogala, E.: A new artificial neural network based fuzzy inference system with moving consequents in if-then rules and selected applications. BUSEFAL 71, 72–81 (1997) 10. L ¸eski, J., Czogala, E.: A new artificial neural network based fuzzy inference system with moving consequents in if-then rules and selected applications. Fuzzy Sets and Systems 108(3), 289–297 (1999) 11. Mamdani, E.H., Assilian, S.: An experiment in linguistic synthesis with a fuzzy logic controller. International Journal of Man-Machine Studies 7(1), 1–13 (1975)
12. Murthy, S.K., Kasif, S., Salzberg, S.: A system for induction of oblique decision trees. Journal of Artificial Intelligence Research 2, 1–32 (1994)
13. Nelles, O., Isermann, R.: Basis function networks for interpolation of local linear models. In: Proceedings of the 35th IEEE Conference on Decision and Control, vol. 1, pp. 470–475 (1996)
14. Nelles, O., Fink, A., Babuška, R., Setnes, M.: Comparison of two construction algorithms for Takagi-Sugeno fuzzy models. International Journal of Applied Mathematics and Computer Science 10(4), 835–855 (2000)
15. Quinlan, J.R.: Induction of decision trees. Machine Learning 1, 81–106 (1986)
16. Quinlan, J.R.: Learning with continuous classes. In: Adams, Sterling (eds.) AI 1992, Singapore, pp. 343–348 (1992)
17. Quinlan, J.R.: Combining instance-based and model-based learning. In: Utgoff (ed.) ML 1993, San Mateo (1993)
18. Rastogi, R., Shim, K.: PUBLIC: A decision tree classifier that integrates building and pruning. Data Mining and Knowledge Discovery 4(4), 315–344 (2000)
19. Rutkowski, L., Cpałka, K.: Flexible neuro-fuzzy systems. IEEE Transactions on Neural Networks 14(3), 554–574 (2003)
20. Simiński, K.: Neuro-fuzzy system with hierarchical partition of input domain. Studia Informatica 29(4A(80)) (2008)
21. Simiński, K.: Two ways of domain partition in fuzzy inference system with parametrized consequences: Clustering and hierarchical split. In: OWD 2008, X International PhD Workshop, pp. 103–108 (2008)
22. de Souza, F.J., Vellasco, M.B.R., Pacheco, M.A.C.: Load forecasting with the hierarchical neuro-fuzzy binary space partitioning model. Int. J. Comput. Syst. Signal 3(2), 118–132 (2002)
23. de Souza, F.J., Vellasco, M.M.R., Pacheco, M.A.C.: Hierarchical neuro-fuzzy quadtree models. Fuzzy Sets and Systems 130(2), 189–205 (2002)
24. Sugeno, M., Kang, G.T.: Structure identification of fuzzy model. Fuzzy Sets Syst. 28(1), 15–33 (1988)
25. Takagi, T., Sugeno, M.: Fuzzy identification of systems and its application to modeling and control. IEEE Trans. Systems, Man and Cybernetics 15(1), 116–132 (1985)
26. Wang, Y., Witten, I.H.: Inducing model trees for continuous classes. In: Proc. of Poster Papers, 9th European Conference on Machine Learning, Prague, Czech Republic (April 1997)
Dynamic Surface Reconstruction Method from Unorganized Point Cloud
Karolina Nurzyńska
Institute of Informatics, The Silesian University of Technology, ul. Akademicka 16, 44-100 Gliwice, Poland
[email protected]
Summary. The aim of this paper is to introduce the 3D Profile Neighbourhood as an improvement factor in the dynamic surface reconstruction process. Applying this profile in the fitting process makes the surface reflect the shape enclosed in the unorganized point cloud more faithfully. This is achieved by exploiting the probability distribution of the point cloud values in the area of each vertex of the evolving irregular polygonal mesh representing the surface. The improvements of the dynamic surface reconstruction method are described and the results are presented.
1 Introduction
The utilization of the newest technology increases the amount of scientific data which needs to be processed. In the case of computed tomography (CT) the data is represented as a semi-organized point cloud and stores information about the shape and structure of the scanned object. In medical applications of this device, the data describes the intrinsic body structure of human beings. Generally, the CT data can be treated as an unorganized point cloud, where only the location of each point in 3D space is known. However, due to the division of the dataset into separate cross-sections, the point cloud contains a parameter which allows creating a semi-organized point cloud. The additional information which specifies the division of the point cloud into cross-sections can be utilized to improve the performance of the method; on the other hand, it is responsible for gaps in the point cloud structure and, from this point of view, it is a disadvantage of such a dataset specification. This paper addresses the problem of dynamic surface reconstruction from an unorganized point cloud, where the resulting surface is represented as an irregular polygonal mesh. The suggested solution develops the ideas of the dynamic surface reconstruction method described in previous work [16, 17, 18]. Therefore, first the basic idea is described and afterwards the improvements. The paper consists of the following parts: Section 2 contains the literature
overview, Section 3 describes the basic idea, whereas Section 4 gathers the improvements; Section 5 presents the experimental results and Section 6 draws the conclusions.
2 State of the Art
The problem of surface reconstruction from a point cloud resulting in an irregular polygonal mesh has been studied for several years. The first attempts concentrated on static reconstruction methods; later, dynamic surface reconstruction methods were introduced. Generally, the static reconstruction methods iterate over the points in the unorganized point cloud and reconstruct the mesh by exploiting them. They can be divided into volume-oriented and surface-oriented methods. In the case of volume-oriented methods, the point cloud is first enclosed in some volume, which is divided into smaller subvolumes, and the final surface reconstruction takes place in the subvolumes. The most famous example of this approach is the Marching Cubes method [13]. In contrast, surface-oriented methods tend to reconstruct the mesh directly from the unorganized point cloud: usually three starting points are taken to create the first facet and then new points are inserted. In this group the most famous is the Delaunay triangulation, based on the theory described in [6]. When the dataset is represented as a semi-organized point cloud, the contour stitching method [8, 19] is exploited. More recently this method was improved to assure reconstruction of all possible surface shapes [14, 15]. The dynamic surface reconstruction methods evolved from Active Contours introduced by Kass et al. [10] and developed by Cohen [1, 2]. These methods are based on a model which fits its shape to the unorganized point cloud during processing. The model is described by internal forces, which are responsible for keeping the contour points together, whereas the external forces drag the model to the points of interest in the image (in the 2D case). There have also been attempts to create Kass Active Contours for 2.5D or 3D [7, 11], yet none of them allows full specification of the surface behaviour in 3D space. Methods based on a 3D generalization of the Cohen forces give better results [9, 12, 20]. They prove to be good at finding edges in data, yet only when no assumption on the shape of the contour is imposed. When one would like to find a contour of a given shape in the data, the Active Shape Model described by Cootes et al. [3, 4, 5] should be taken into consideration. In the pre-processing stage this solution creates a statistical shape model from the training dataset, which is exploited in the fitting process to assure that the contour fits only data of similar shape.
3 Surface Reconstruction
The dynamic surface reconstruction method has been introduced in [16, 17, 18]. It consists of two parts. Firstly, the statistical surface shape model is
created from the training dataset. Secondly, the fitting process takes place, which deforms the initial surface shape to best fit the unorganized point cloud. The constraints describing the shape variability are taken into consideration in order to preserve the shape characteristics of the training dataset.
3.1 Pre-processing Stage
The aim of the pre-processing stage of the dynamic surface reconstruction methodology is to obtain the statistical surface shape model. The model is calculated by applying principal component analysis (PCA) to the normalized training dataset. The elements in the training dataset are described by a characteristic point set (ChPS). The order of the points in the set is significant; therefore each point in the dataset space reflects the changes of the surface shape within the training dataset in the area of this point. The ChPS is given as a vector

$X = (x_1\; y_1\; z_1\; \ldots\; x_k\; y_k\; z_k)^T,$  (1)

where $x_i$, $y_i$, and $z_i$ are the coordinates of the $i$-th point in space. In order to calculate the statistics of the surface shape change within the training dataset, additional information, such as translation $X$, $Y$, $Z$, rotation $\theta$ and scale $\sigma$, should be removed. Therefore, Procrustes analysis [22] is applied. Finally, the point distribution model (PDM) is calculated using PCA [21]. It describes the shape as a mean surface shape $X_{mean}$, calculated from the normalized surface shapes in the training dataset of $n$ elements according to the formula

$X_{mean} = \frac{1}{n} \sum_{i=1}^{n} X_i,$  (2)

and a matrix $P$ of linearly independent modes (eigenvectors) with a vector of eigenvalues $\lambda$. This information is sufficient to reconstruct any shape from the training dataset following the formula

$X_{new} = X_{mean} + bP,$  (3)

where $b$ is a weighting vector describing the influence of each mode on the resulting shape. It should fulfil the assumption that

$b_i = k\,\lambda_i, \quad k \in \langle -3, 3 \rangle.$  (4)

For instance, consider a training dataset of cuboids, where each element is described by a ChPS of eight points, one in each corner. This results in 24 eigenvectors which describe the shape variability. The larger the eigenvalue bound to an eigenvector, the more information about the shape change that eigenvector carries. Fig. 1 represents the changes of the shape when the mean shape (in this case it was a cube) is enriched with the values of one of the four eigenvectors with the highest eigenvalues.
Fig. 1. The influence of the eigenvectors on the shape change. Each column corresponds to a change of the k parameter in the range ⟨−3, 3⟩, whereas each row represents a separate eigenvalue.
Each column represents the changes of k from −3 to 3. It is worth noticing that the significant shape change for the first two eigenvectors corresponds to large eigenvalues, 0.0101538 (48.55%) and 0.0100213 (47.92%) respectively, whereas the next vectors introduce a change of only one vertex location of the cube, as their eigenvalues are smaller: 0.0001567 (0.75%) and 0.0001132 (0.54%).
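The pre-processing stage can be summarised by a short numerical sketch (a minimal illustration, not the authors' implementation; the training shapes are assumed to be already normalized by Procrustes analysis, and storing the modes as matrix columns is an implementation choice of this sketch).

```python
import numpy as np

def build_pdm(shapes):
    """Build a point distribution model from normalized training shapes.

    shapes: (n, 3k) array, each row a ChPS vector (x1 y1 z1 ... xk yk zk).
    Returns the mean shape (Eq. 2), the eigenvector matrix P (columns = modes)
    and the corresponding eigenvalues, sorted by decreasing eigenvalue.
    """
    x_mean = shapes.mean(axis=0)
    cov = np.cov(shapes - x_mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    return x_mean, eigvecs[:, order], eigvals[order]

def reconstruct(x_mean, P, eigvals, k_weights):
    """Reconstruct a shape from the model (Eq. 3), with the weights
    restricted to the range implied by Eq. (4)."""
    k = np.clip(np.asarray(k_weights, dtype=float), -3.0, 3.0)
    b = k * eigvals
    return x_mean + P @ b
```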
3.2 Fitting Process
The fitting process iteratively changes the initial surface shape so that it reflects the object surface enclosed in the unorganized point cloud. However, the statistical surface shape model limits the possible shape changes to those described by the statistical model. This assures that the reconstructed surface stays within the shape variability given in the training dataset while reflecting the shape in the unorganized point cloud as well as possible. The process first moves all vertices of the initial surface, described as an irregular polygonal mesh, in the direction normal to the surface at each point. This force definition is an adaptation of the Cohen forces to 3D space. Then the obtained surface shape is checked as to whether it still belongs to the statistical surface shape model, and when the constraints are not met it is altered to reflect the assumed shape. This process is performed until the termination conditions are met. The termination conditions are fulfilled when all vertices of the irregular polygonal
Fig. 2. The fitting procedure example. In the middle, the initialization. On the left, fitting with the statistical shape model using the constraints; on the right, without them.
mesh have points from the point cloud in their neighbourhood, or when the shape has not changed within two consecutive iterations. Figure 2 compares the fitting of a surface shape model of an elliptical shape to an unorganized point cloud representing a cube. The middle part of the image shows the initialization of the fitting process. On the left is the result of fitting the shape when the constraints are exploited, whereas on the right the surface resulting from fitting with the constraints disabled is depicted. It is worth pointing out that in the first case the resulting surface keeps its shape and therefore does not reflect the shape of the point cloud completely, while in the second case (without constraints) the surface fits the point cloud and changes its shape to reflect the shape of the point cloud.
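The fitting process described above can be condensed into the following sketch (a simplified illustration, not the authors' code; the helper routines for vertex normals and the neighbourhood test are assumed to exist, and the model projection reuses the weight range implied by Eq. (4)).

```python
import numpy as np

def fit_surface(vertices, point_cloud, x_mean, P, eigvals,
                step=1.0, max_iter=100, tol=1e-6):
    """Iteratively deform the mesh towards the point cloud while keeping the
    shape inside the statistical surface shape model of Section 3.1."""
    prev = vertices.copy()
    for _ in range(max_iter):
        # 1. Move every vertex along its surface normal (3D Cohen-type force).
        normals = estimate_vertex_normals(vertices)            # assumed helper
        vertices = vertices + step * normals
        # 2. Project the deformed shape back onto the model: clamp the mode
        #    weights so the shape variability of the training set is kept.
        b = P.T @ (vertices.ravel() - x_mean)
        b = np.clip(b, -3.0 * eigvals, 3.0 * eigvals)
        vertices = (x_mean + P @ b).reshape(vertices.shape)
        # 3. Termination: every vertex has cloud points nearby, or no change
        #    within two consecutive iterations.
        if all_vertices_have_neighbours(vertices, point_cloud):  # assumed helper
            break
        if np.linalg.norm(vertices - prev) < tol:
            break
        prev = vertices.copy()
    return vertices
```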
4 Surface Reconstruction Improvements – 3D Profile Neighbourhood
The described dynamic surface reconstruction method proved to give good results. However, in the case of real data the fitting procedure sometimes lacks accuracy. Therefore, the 3D Profile Neighbourhood (3DPN) was introduced. It generates an additional movement in a direction different from the one imposed by the Cohen forces and thus improves the fitting procedure. The 3DPN is created for each point in the ChPS. It is described as a vector which consists of grey-level values around the point in six or eight directions (see Fig. 3). The six-directional neighbourhood takes into consideration the values of points which lie on the lines parallel to the axes in a given region of interest, described by the maximal radius. The eight-directional neighbourhood is built from the values of points which lie on the diagonals of a cube whose centre is located at the point of interest; the size of this cube determines the number of points. In the pre-processing stage, the mean and the covariance of these neighbourhoods are calculated for each point. After applying the Cohen forces, to provide better accuracy, some locations in the neighbourhood of the new location of
Fig. 3. 3D Profile Neighbourhood. On the left the six-directional profile; on the right side the eight-directional profile.
the vertex (depending on the chosen algorithm) are selected and the profile is calculated for them. Subsequently, the Mahalanobis distances between the calculated profiles and the statistical one are obtained, and the vertex is moved to the location where the distance is minimal.
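A hedged sketch of this profile-matching step follows: candidate vertex locations proposed after the Cohen-force step are compared with the statistical profile of that ChPS point via the Mahalanobis distance, and the closest candidate is selected. The profile-sampling routine is assumed to exist and is not specified by the paper.

```python
import numpy as np

def mahalanobis(profile, mean_profile, cov_inv):
    """Squared Mahalanobis distance between a sampled and a statistical profile."""
    d = profile - mean_profile
    return float(d @ cov_inv @ d)

def best_candidate(candidates, volume, mean_profile, cov, sample_profile):
    """Choose the candidate location whose 3D Profile Neighbourhood is closest
    to the statistical profile of this vertex.

    candidates: list of 3D positions proposed after the Cohen-force step,
    sample_profile(volume, pos): assumed helper returning the six- or
    eight-directional grey-level profile around pos.
    """
    cov_inv = np.linalg.pinv(cov)        # pseudo-inverse for robustness
    distances = [mahalanobis(sample_profile(volume, p), mean_profile, cov_inv)
                 for p in candidates]
    return candidates[int(np.argmin(distances))]
```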
5 Results
The aim of this work was to improve the accuracy of the dynamic surface reconstruction process from an unorganized point cloud. Therefore, the fitting procedure was assisted by a step in which the 3D Profile Neighbourhood is exploited. In the research, the six-directional and eight-directional neighbourhoods are compared with the basic processing algorithm. Figure 4 depicts the comparison of the surface reconstruction process. On the left is the fitted object without any 3DPN; in the middle the surface reconstructed with the six-directional profile is depicted, whereas on the right the one with the eight-directional profile. The experiments show that utilizing the 3DPN makes the reconstruction more accurate, as the surface is closer to the unorganized point cloud it should reflect. Additionally, it is worth noticing that the eight-directional profile gives better results than the six-directional one. This is caused mainly by the profile definition: the eight-directional profile takes into consideration the changes of the neighbourhood in all possible directions around the examined vertex, whereas the six-directional profile does not cover all directions, hence the worse results. Moreover, it is worth adding that the best
Fig. 4. Comparison of the fitting procedure performance for the dynamic surface reconstruction method. On the left the basic algorithm, in the middle the six-directional neighbourhood, on the right the eight-directional neighbourhood.
results in the conducted research were obtained when both the profile radius and the radius of the profile search in the data were of 5-pixel length.
6 Conclusions
The paper describes research concerning dynamic surface reconstruction from an unorganized point cloud. The methodology with the improvements has been introduced. According to the conducted experiments, the methodology proved to work well for real data. The defined 3D Profile Neighbourhoods improved the accuracy of the reconstructed surface. Nevertheless, it would be worthwhile to concentrate further research on the problems of automatic data extraction, especially the creation of the characteristic dataset.
References
1. Cohen, L.D.: On Active Contour Models and Balloons. Computer Vision, Graphics and Image Processing: Image Understanding 53, 211–218 (1991)
2. Cohen, L.D., Cohen, I.: Finite-Element Methods for Active Contour Models and Balloons for 2-D and 3-D Images. IEEE Transactions on Pattern Analysis and Machine Intelligence 15, 1131–1147 (1993)
3. Cootes, T.F., Taylor, C.J., Cooper, D.H., Graham, J.: Active Shape Models – Their Training and Application. Computer Vision and Image Understanding, 38–59 (1995)
4. Cootes, T.F., Taylor, C.J.: Data Driven Refinement of Active Shape Model Search. In: Proc. 7th British Machine Vision Conference (1996)
5. Cootes, T.F., Edwards, G., Taylor, C.J.: Comparing Active Shape Models with Active Appearance Models, pp. 173–182. BMVA Press (1999)
6. Delaunay, B.: Sur la sphère vide. Otdelenie Matematicheskikh i Estestvennykh Nauk, vol. 7 (1934)
7. Duan, Y., Qin, H.: 2.5D Active Contour for Surface Reconstruction. In: Proc. of 8th International Workshop on Vision, Modeling and Visualization, pp. 431–439 (2003)
8. Ganapathy, S., Dennehy, T.G.: A New General Triangulation Method for Planar Contours. In: Proc. of SIGGRAPH, pp. 69–75 (1978)
9. Holtzman-Gazit, M., Kimmel, R., Peled, N., Goldsher, D.: Segmentation of Thin Structures in Volumetric Medical Images. IEEE Transactions on Image Processing, 354–363 (2006)
10. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active Contour Models. International Journal of Computer Vision 1 (1987)
11. Kim, D.K., Hwang, C.J.: Boundary Segmentation and Model Fitting Using Affine Active Surface Model. TENCON – Digital Signal Processing Applications, 26–29 (1996)
12. Li, B., Millington, S.A., Anderson, D.D., Acton, S.T.: Registration of Surfaces to 3D Images Using Rigid Body Surfaces. In: Fortieth Asilomar Conference on Signals, Systems and Computers, pp. 416–420 (2006)
13. Lorensen, W.E., Cline, H.E.: Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics. In: SIGGRAPH, pp. 163–169 (1987)
14. Nurzyńska, K.: Substitute Contours Method for Object Reconstruction from Parallel Contours. In: Proc. of 38th Annual Conference on Teaching and Learning through Gaming and Simulation, Nottingham, England, pp. 183–189 (2008)
15. Nurzyńska, K.: 3D Object Reconstruction from Parallel Cross-Sections. In: Proc. of International Conference on Computer Visualization and Graphics. Lecture Notes in Computer Science. Springer, Heidelberg (2008) (in press)
16. Nurzyńska, K.: Metoda tworzenia deformowalnych wielorozdzielczych powierzchni i ich interaktywna wizualizacja. PhD Thesis, The Silesian University of Technology, Gliwice, Poland (2008)
17. Nurzyńska, K.: Constrained Dynamical Surface Reconstruction from Unorganized Point Cloud. In: EUROCON 2009 (2009) (in review)
18. Nurzyńska, K.: Surface Reconstruction from Unorganized Point Cloud Methodology. In: Mirage 2009 (2009) (in review)
19. Parker, J.R., Attia, E.N.: Object Reconstruction from Slices for Vision. In: Proc. of International Computer Graphics, pp. 58–64 (1999)
20. Slabaugh, G., Unal, G.: Active Polyhedron: Surface Evolution Theory Applied to Deformable Models. In: Computer Vision and Pattern Recognition, pp. 84–91 (2005)
21. Smith, L.I.: A Tutorial on Principal Component Analysis (2002)
22. Stegmann, M.B., Gomez, D.: A Brief Introduction to Statistical Shape Analysis. Informatics and Mathematical Modelling, Technical University of Denmark, 15 (2002)
Fusion of External Context and Patterns – Learning from Video Streams
Ewaryst Rafajlowicz
Institute of Computer Engineering, Control and Robotics, Wroclaw University of Technology, Wyb. Wyspiańskiego 27, 50-370 Wroclaw, Poland
[email protected]
Summary. A mathematical model which extends the Bayesian problem of pattern recognition by fusion of external context variables and patterns is proposed and investigated. Then, its empirical version is discussed and a learning algorithm for an orthogonal neural net is proposed which takes context variables into account. The proposed algorithm has a recursive form, which is well suited to learning from a stream of patterns arising when features are extracted from a video sequence.
1 Introduction
Fusion of classifiers seems to be the dominating approach to information fusion in pattern recognition (see [8]). The second approach is based on fusion of features [9]. The approach proposed here is based on fusion of features and variables which are called an external context, since they influence our decisions but they are not features of a particular object to be recognized (see below for more detailed explanations). The role of context was found useful at a very early stage of the development of pattern recognition theory and practice (see [5], [10]). In [6], [7] and [1] it was proposed to consider a class of contexts which can be called external contexts. From the viewpoint of pattern recognition, external context variables influence our decisions, but they are not features of a pattern to be recognized. For example, a man running across a parking area is more likely to be in a hurry during the daytime, while at night one can suspect him to be a thief, although the features of the man are the same. Our aim in this paper is to propose a mathematical model of pattern recognition problems in which context variables are external and their fusion with patterns arises in a natural way. It extends the well-known Bayesian model in such a way that context variables enter the model in a different way than the features of the patterns to be recognized. As a consequence, they also appear in a different way in the formulas describing optimal classifiers. Let $X \in R^{d_X}$ denote the $d_X$-dimensional vector of features of a pattern to be recognized. $X$ was drawn at random from one of $L \geq 2$ classes, which are numbered $1, 2, \ldots, L$. Observing $X$, we simultaneously observe a vector
$Z \in R^{d_Z}$, which is interpreted as an external context. $Z$ is also a random vector, which can help us in a proper classification of $X$, but it is different in its origin from the feature vector $X$. For example, if we have to classify a kind of illness using typical features used by physicians, then we can treat additional information on the purity of the air as the external context, which helps in decisions. Note that environmental conditions are not features of a particular patient. Additionally, these conditions usually influence our health, but not conversely. Fusion of these two kinds of variables may lead to a proper diagnosis. Thus, we assume that the triple $(X, Z, i)$ describes pattern $X$, which appeared in context $Z$, and $i \in \{1, 2, \ldots, L\}$ is the number of the class from which $X$ was drawn. When a new pair $(X, Z)$ is observed, our task is to classify it to one of the $L$ classes. Before trying to construct a pattern recognition algorithm based on a learning sequence, it is expedient to consider a mathematical model of the problem sketched above. For simplicity of the exposition, we shall assume the existence of all necessary probability density functions (pdf), although our model can easily be extended to general probability measures. Denote by $f_i(x; \theta_i)$ the density of patterns from the $i$-th class, which depends on a vector $\theta_i \in R^{d_i}$ of parameters (means, variances etc.). We assume that there exist functions $\phi_i : R^{d_Z} \to R^{d_i}$, $i = 1, 2, \ldots, L$, which are links between the $\theta_i$'s and the context vector $Z$. A priori class probabilities $q_i$, $i = 1, 2, \ldots, L$, may also depend on the external context $Z$ via link functions $\psi_i : R^{d_Z} \to [0, 1]$, i.e., $q_i = \psi_i(z)$, $i = 1, 2, \ldots, L$, which are such that the following conditions hold: $\psi_i(z) \in [0, 1]$, $\sum_{i=1}^{L} \psi_i(z) = 1$. The formal scheme of generating patterns is as follows: 1) A random vector $Z_0$, say, is drawn from the pdf $f_Z(z)$, which describes the random mechanism of context formation. A priori class probabilities are calculated as $q_i^{(0)} = \psi_i(Z_0)$, $i = 1, 2, \ldots, L$, and the class number $i^{(0)}$, say, is drawn at random with these probabilities. 2) The values of the parameters of the pdf in class $i^{(0)}$ are established according to $\theta_i^{(0)} = \phi_i(Z_0)$, $i = 1, 2, \ldots, L$, and the vector of features $X_0$, say, is drawn at random from the distribution

$f_{i^{(0)}}(x; \theta_{i^{(0)}}^{(0)}) = f_{i^{(0)}}(x; \phi_{i^{(0)}}(Z_0)).$  (1)
It is clear that the above scheme includes the classical pattern recognition problem statement as a special case (ψi ’s and φi ’s are constant functions).
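The two-step generating scheme can be written down as a short simulation sketch (an illustration only; the link functions, the context density and the class-conditional sampler are assumed to be supplied by the user and are not part of the paper's formalism).

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_sample(f_z_sample, psi, phi, sample_class_density):
    """Draw one (X, Z, i) triple according to the scheme of Section 1.

    f_z_sample():  draws a context vector Z0 from f_Z (assumed callable),
    psi(z):        returns the vector of a priori probabilities psi_i(z),
    phi(i, z):     returns the class-i parameter vector theta_i = phi_i(z),
    sample_class_density(i, theta): draws X from f_i(x; theta_i).
    """
    z0 = f_z_sample()                         # step 1: draw the context
    q = np.asarray(psi(z0))
    i0 = rng.choice(len(q), p=q)              # draw the class number
    theta = phi(i0, z0)                       # step 2: set class parameters
    x0 = sample_class_density(i0, theta)      # draw the feature vector
    return x0, z0, i0
```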
2 Bayes Risk in the Presence of External Context
In this section we shall treat all the densities, link functions and a priori probabilities as known. This is necessary for deriving formulas for the Bayes risk in the presence of context. Later, we relax these assumptions. Denote by $f_i(x|z)$ the conditional density of features from the $i$-th class, given context $z$. Then,
$f_i(x|z) = f_i(x; \phi_i(z)), \quad i = 1, 2, \ldots, L.$  (2)
Let $f_i(x, z)$ be the joint pdf of $X$ and $Z$. From (2) we immediately obtain

$f_i(x, z) = f_i(x; \phi_i(z))\, f_Z(z), \quad i = 1, 2, \ldots, L.$  (3)
This formula indicates that the roles played by $X$ and $Z$ are formally different, since $f_Z(z)$ does not depend on $i$, which has consequences for the form of the optimal classifier and for the learning procedures. Let us denote by $\Psi(X, Z)$ a classifier which provides a class number from the set $\{1, 2, \ldots, L\}$ when pattern $X$ was observed in a context described by vector $Z$. Clearly, its decision can be erroneous and, as in the classical theory, we attach a loss $S(i, j)$, $i, j = 1, 2, \ldots, L$, when a pattern from the $i$-th class is classified to class $j = \Psi(X, Z)$. The most popular is the so-called 0-1 loss, for which $S(i, j) = 0$ if $i = j$ and $S(i, j) = 1$ if $i \neq j$. The loss $S(i, \Psi(X, Z))$ is called the local one, since it provides the loss for given $X$ and $Z$. In order to derive a statistically meaningful loss, we have to average it over all random factors which appear in our model, including $Z$. Such a loss is called the Bayes risk of a given classifier $\Psi$ and it is further denoted by $R(\Psi)$. Denote by $P(i|x, z)$ the a posteriori probability that pattern $x$, observed in context $z$, is a member of the $i$-th class. Then, the Bayes risk is as follows:

$R(\Psi) = \int_{R^{d_X}} \int_{R^{d_Z}} \sum_{i=1}^{L} S(i, \Psi(x, z))\, P(i|x, z)\, f(x, z)\, dx\, dz.$  (4)
Now, it remains to express $P(i|x, z)$ in terms of the a priori probabilities and the joint pdf of $X$ and $Z$, which has the form $f(x, z) = \sum_{i=1}^{L} \psi_i(z)\, f_i(x, z)$, where the $f_i(x, z)$'s are given by (3). A posteriori probabilities are given by:

$P(i|x, z) = \psi_i(z)\, f_i(x, z) / f(x, z), \quad i = 1, 2, \ldots, L.$  (5)
The optimal classifier $\Psi^*$, say, is the one which minimizes the Bayes risk, i.e.,

$R(\Psi^*) = \min_{\Psi} R(\Psi),$  (6)

where the minimization is carried out over the class of all functions $\Psi$ which are measurable in the Lebesgue sense. Comparing (4) and (6), one can easily notice structural similarities between our problem and the classical problem of pattern recognition (see [2]). We can use this similarity in order to characterize the minimizer of (6), while particular solutions will be different than in the classical case due to the special form of (3).

Corollary 2.1. Define the conditional risk $r(\psi, x, z) \stackrel{\mathrm{def}}{=} \sum_{i=1}^{L} S(i, \psi)\, P(i|x, z)$ and assume that the loss function $S$ is nonnegative. Then, the optimal decision rule which minimizes (6) can be obtained by solving the following optimization problem
$r(\Psi^*(x, z), x, z) = \min_{\psi \in R} r(\psi, x, z),$  (7)
where the minimization on the r.h.s. of (7) is carried out w.r.t. a real variable $\psi$, while $x$ and $z$ are treated as parameters. Let us note the first difference between our problem and the classical one.

Corollary 2.2. The optimal classifier $\Psi^*(x, z)$ does not depend on the particular form of $f_Z$. Its dependence on the observed value of the external context $z$ is realized only by the $\phi_i$'s and $\psi_i$'s.

For the proof note that $P(i|x, z)$ can be expressed as follows

$P(i|x, z) = \frac{\psi_i(z)\, f_i(x; \phi_i(z))}{\sum_{k=1}^{L} \psi_k(z)\, f_k(x; \phi_k(z))},$  (8)
since $f_Z$ is a common factor of the numerator and the denominator. Let us select $S$ as the 0-1 loss. Then, exactly in the same way as in the classical (context-free) case (see [2]), we arrive at the following decision rule: pattern $x$, observed in context $z$, is classified to class $i^*$ if

$P(i^*|x, z) = \max_{1 \leq i \leq L} P(i|x, z),$  (9)
possible ties being broken by randomization. From (9) and (8) we obtain:

Corollary 2.3. The optimal classifier classifies pattern $x$ in context $z$ to the class $i^*$ for which $\psi_{i^*}(z)\, f_{i^*}(x; \phi_{i^*}(z)) = \max_{1 \leq i \leq L} [\psi_i(z)\, f_i(x; \phi_i(z))]$.

Example 2.4. Consider the classical example of classifying patterns into two classes with normal pdf

$f_i(x|z) = (\sqrt{2\pi})^{-d_X} |\Sigma|^{-1/2} \exp[-(x - \phi_i(z))^T \Sigma^{-1} (x - \phi_i(z))/2],$

where the covariance matrix $\Sigma$ is the same for both classes, while the mean vectors $\phi_i(z)$, $i = 1, 2$, depend on the context vector $z$. Define the following functions

$w_0(z) \stackrel{\mathrm{def}}{=} \ln(\psi_1(z)/\psi_2(z)) - \phi_1^T(z)\, \Sigma^{-1} \phi_1(z)/2 + \phi_2^T(z)\, \Sigma^{-1} \phi_2(z)/2$

and $w^T(z) \stackrel{\mathrm{def}}{=} (\phi_1(z) - \phi_2(z))^T \Sigma^{-1}$. Then, according to Corollary 2.3, the optimal classifier classifies $x$ in context $z$ to class 1 if

$w_0(z) + w^T(z)\, x > 0$  (10)

and to class 2 otherwise. In (10) one can easily recognize a linear discriminating function w.r.t. $x$, but the hyperplane which separates the classes moves in a way dependent on the context $z$. The way of moving depends on $\psi_i(z)$ and $\phi_i(z)$, $i = 1, 2$. Let us note that the optimality of rule (10) is retained also when the covariance matrix $\Sigma$ depends on $z$, still being the same for both classes. One can also prove that if the covariance matrices in the two classes are different, then the optimal classifier is a quadratic form in $x$ with coefficients dependent on $z$.
The above example indicates that by fusion of context variables and patterns we gain additional flexibility, while retaining the optimality of the classifier. Furthermore, the optimal classifier is formally $d_X$-dimensional in $x$, while the fusion with the $z$ variables is realized through the dependence of the classifier's coefficients on them.
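Example 2.4 translates directly into a small computational sketch: for a given context z the weights w_0(z) and w(z) are evaluated and the pattern is classified by the sign of the linear discriminant (10). The link functions and the common covariance matrix are assumed to be given; this is an illustration, not a library routine.

```python
import numpy as np

def classify_two_gaussians(x, z, psi1, psi2, phi1, phi2, sigma_inv):
    """Optimal two-class rule of Example 2.4 (equal covariance matrices).

    psi1(z), psi2(z): context-dependent a priori probabilities,
    phi1(z), phi2(z): context-dependent class means,
    sigma_inv:        inverse of the common covariance matrix Sigma.
    Returns class 1 if w0(z) + w(z)^T x > 0, otherwise class 2.
    """
    m1, m2 = phi1(z), phi2(z)
    w0 = (np.log(psi1(z) / psi2(z))
          - 0.5 * m1 @ sigma_inv @ m1
          + 0.5 * m2 @ sigma_inv @ m2)
    w = sigma_inv @ (m1 - m2)      # equals (phi1 - phi2)^T Sigma^{-1} (Sigma symmetric)
    return 1 if w0 + w @ x > 0 else 2
```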
3 Learning from a Stream of Patterns
A learning algorithm for a classifier should be chosen in a way which depends on our knowledge of $f_i(x; \theta_i)$ and of the functions $\phi_i$, $\psi_i$. In this paper we assume that these functions are unknown and we are faced with a nonparametric estimation problem. In such a case it seems reasonable to directly estimate the conditional densities $f_j(x|z)$, $j = 1, 2, \ldots, L$, which are then plugged into a decision rule. Let us assume that we have a learning sequence of the form:

$\mathcal{L} = \{(X_n, Z_n, j(n)),\ n = 1, 2, \ldots\},$  (11)
where $X_n$ is a pattern which was observed in context $Z_n$, and its correct classification is $j(n)$. We assume that the triples $(X_n, Z_n, j(n))$ are stochastically independent. We treat (11) as a stream in the sense that it is potentially infinite and intensive, which requires relatively fast (hence also simple) algorithms for its processing. Additionally, we cannot afford to store the whole sequence in storage of reasonable capacity. Examples of streams that we have in mind include sequences of images provided by cameras, which are then converted on-line into patterns $X_n$ in a problem-dependent way, while the contexts $Z_n$ are provided either by the environment of the camera (e.g., temperature) or by the images themselves (e.g., illumination). For simplicity of the further exposition we shall assume that the context vectors $Z_n$ assume a finite number of states $\kappa_1, \kappa_2, \ldots, \kappa_M$ (or that they were approximately clustered to such a sequence). We divide $\mathcal{L}$ into $M$ disjoint subsequences $\mathcal{L}_m$:

$\mathcal{L}_m = \{(X_n, Z_n, j(n)) \in \mathcal{L} : Z_n = \kappa_m\}, \quad m = 1, 2, \ldots, M,$  (12)
which correspond to the same vector of external context $\kappa_m$.

Example 3.1. Consider a camera which serves as a smoke detector¹ in an empty room (hall); our aim is to design a system which takes a stream of images as input, extracts features from the images and decides whether smoke is present or not. A very simple method was used for extracting features: an image containing the background was subtracted from each image provided by the camera and the resulting image was converted to a binary one by comparing
¹ This example is presented mainly to illustrate ideas, since simple smoke alarm devices are widespread. On the other hand, a typical smoke detector contains a source of radiation and acts only when the smoke reaches it, while a camera can "see" smoke that is far away, e.g., in a large hall.
Fig. 1. Number of black pixels vs. frame number when smoke is or is not present: bright room (left panel) and dark room (right panel); learning sequence – boxes, testing sequence – circles.
the grey level of each pixel with a threshold. The total number of pixels which were marked as "black" was used as the only feature indicating the possible presence of smoke. It is well known that for classification into two classes based on only one feature, it usually suffices to select a threshold separating the classes. The analysis of Fig. 1 indicates, however, that in this example this is insufficient. The left panel of the figure shows the number of black pixels registered by the camera in a bright room on subsequent frames. Selecting the threshold $t_b = 2000$ from the learning sequence (lower curve), one can easily read from the upper testing curve that we commit about 3% of errors. If this threshold is applied to the observations shown in the right panel, then all the observations would be classified to the class "no smoke present", leading to about 40% of errors. The reason is that these observations were made by the camera in an almost dark room. Selecting another threshold, $t_d = 50$, for these observations we again commit only about 4% of errors. Summarizing, the fusion of a two-level context variable (bright or dark room) and only one feature leads to a threshold changing with the context, and we obtain a simple and still reliable classifier. In the rest of the paper we discuss the way of fusing context information and patterns in the learning phase. As a tool for estimating $f_j(x|z)$ we propose to use an orthogonal neural network with weights dependent on $z$, as suggested by Example 2.4. Let $v_1(x), v_2(x), \ldots$ be a sequence of functions which are orthonormal in a set $\mathcal{X} \subset R^{d_X}$, i.e., $\int_{\mathcal{X}} v_k(x)\, v_j(x)\, dx = 0$ for $k \neq j$ and $1$ otherwise. When selecting the set $\mathcal{X}$ one should take into account the supports of the $f_i(x; \theta_i)$'s. The sequence of $v_k$'s should be sufficiently rich for the purposes of approximation (e.g., complete in the class of all square integrable functions $L_2(\mathcal{X})$). For $f_i(x; \theta_i)$'s with bounded supports one can select trigonometric functions, while for $\mathcal{X} = R^{d_X}$ the tensor products of Hermite polynomials are good candidates.
To provide motivation for constructing our learning scheme, consider the following representations

$f_j(x|z) = \sum_{k=1}^{K(j)} a_{kj}(z)\, v_k(x),$  (13)
which are convergent in the $L_2(\mathcal{X})$ norm as $K(j) \to \infty$ for each fixed $z$, provided that the $v_k$'s are complete, and for $j = 1, 2, \ldots, L$, $k = 1, 2, \ldots, K(j)$

$a_{kj}(z) = \int_{\mathcal{X}} v_k(x)\, f_j(x|z)\, dx = E_{X(j)}\left[ v_k(X(j)) \mid Z = z \right],$  (14)
where $X(j)$ is a random vector with pdf $f_j(x|z)$. In practice, we shall use a large but finite $K(j)$ in (13). Denote $\alpha_{kjm} = a_{kj}(\kappa_m)$. From (14) it follows that it is natural to estimate $\alpha_{kjm}$ as follows. We start from initial guesses $\hat{\alpha}_{kjm}(\mathrm{old})$ of the $\alpha_{kjm}$'s, e.g., setting all of them to zero. We also initialize counters $I_{jm} = 0$, which accumulate how many times $\hat{\alpha}_{kjm}$, $k = 1, 2, \ldots, K(j)$, were updated. As new $X_n$ and $Z_n$ become available, we ask the expert to which class the pattern should be classified and denote the answer by $j(n)$. Then, by comparing $Z_n$ with the $\kappa_m$'s, we establish the number of the corresponding $\mathcal{L}_m$, which is further denoted by $m(n)$, and we update the estimates for $k = 1, 2, \ldots, K(j(n))$ as follows:

$\hat{\alpha}_{kj(n)m(n)}(\mathrm{new}) = (1 - \beta_n)\, \hat{\alpha}_{kj(n)m(n)}(\mathrm{old}) + \beta_n\, v_k(X_n),$  (15)

where $\beta_n \stackrel{\mathrm{def}}{=} 1/(\mathrm{card}(I_{j(n)m(n)}) + 1)$. Simultaneously, we update $I_{j(n)m(n)}$ by adding 1 to it. When the updating in (15) is completed, we replace $\hat{\alpha}_{kj(n)m(n)}(\mathrm{old})$ by $\hat{\alpha}_{kj(n)m(n)}(\mathrm{new})$. In (15) one can easily recognize the familiar arithmetic mean estimator, written in recursive form. Alternatively, one can estimate the $\alpha_{kjm}$'s from the stream of patterns by EWMA (exponentially weighted moving average), simply by selecting $\beta_n = \mathrm{const}$, or depending on $k$ and $j$ but not on $n$. After observing sufficiently many initial observations from the stream, we can estimate the class densities, conditioned on context, as follows:
$\hat{f}_j(x|\kappa_m) = \sum_{k=1}^{K(j)} \hat{\alpha}_{kjm}(\mathrm{old})\, v_k(x).$  (16)
We have used the "old" versions of the $\hat{\alpha}_{kjm}$'s in (16), since we can test the performance of the empirical classifier before updating them. Denote by $q_{jm}$ the a priori probability that a pattern comes from the $j$-th class in context $\kappa_m$, i.e., $q_{jm} = \psi_j(\kappa_m)$. Its estimate, denoted further by $\hat{q}_{jm}$, is the frequency of occurrence of patterns from the $j$-th class in the $m$-th context up to time $n$, i.e., $\hat{q}_{jm} = I_{jm}/n$. According to the plug-in rule, the proposed classifier classifies a new $X_n$, observed in context $Z_n$ (which is equal to $\kappa_{m(n)}$), to the class $\hat{i}(n)$ for which the maximum in the expression below is attained:
$\max_{1 \leq i \leq L} \hat{q}_{i m(n)}\, \hat{f}_i(X_n | \kappa_{m(n)}).$  (17)
Under suitable conditions imposed on the smoothness of the $f_j$'s and on the growth of the $K(j)$'s as the number of observations increases, one can prove the consistency of the $\hat{f}_j(x|\kappa_m)$'s (see [4], [2]), which in turn implies the asymptotic optimality of the decision rule (17) (see [3]).

Acknowledgements. This paper was supported by the Ministry of Science and Higher Education of Poland, under a grant for the years 2006–2009.
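The whole learning scheme of Section 3 — the recursive update (15), the density estimates (16) and the plug-in rule (17) — can be condensed into the following sketch. It is a simplified illustration, not the author's code: a scalar feature and a cosine system on [0, 1] are chosen here only as an example of an orthonormal basis, and all names are illustrative.

```python
import numpy as np

class ContextStreamClassifier:
    """Plug-in classifier learned from a stream (X_n, Z_n, j(n)) with a finite
    number M of context states; class densities are expanded in K orthonormal
    basis functions v_k (here: {1, sqrt(2) cos(pi k x)} on [0, 1])."""

    def __init__(self, n_classes, n_contexts, K):
        self.alpha = np.zeros((K, n_classes, n_contexts))   # coefficients alpha_kjm
        self.counts = np.zeros((n_classes, n_contexts))     # counters I_jm
        self.K = K

    def _v(self, x):
        k = np.arange(self.K)
        v = np.sqrt(2.0) * np.cos(np.pi * k * float(x))
        v[0] = 1.0                                           # v_0 == 1
        return v

    def update(self, x, m, j):
        """Recursive update (15) for pattern x in context state m, class j."""
        beta = 1.0 / (self.counts[j, m] + 1.0)
        self.alpha[:, j, m] = (1.0 - beta) * self.alpha[:, j, m] + beta * self._v(x)
        self.counts[j, m] += 1.0

    def classify(self, x, m):
        """Plug-in rule (17): maximise q_im * f_i(x | kappa_m), Eq. (16)."""
        n = max(self.counts.sum(), 1.0)
        q = self.counts[:, m] / n                            # estimated priors
        f = self.alpha[:, :, m].T @ self._v(x)               # density estimates
        return int(np.argmax(q * f))
```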
References
1. Ciskowski, P., Rafajlowicz, E.: Context-Dependent Neural Nets – Structures and Learning. IEEE Trans. Neural Networks 15, 1367–1377 (2004)
2. Devroye, L., Györfi, L., Lugosi, G.: Probabilistic Theory of Pattern Recognition. Springer, New York (1996)
3. Greblicki, W.: Asymptotically Optimal Pattern Recognition Procedures with Density Estimates. IEEE Trans. Information Theory IT-24, 250–251 (1978)
4. Greblicki, W.: Asymptotic Efficiency of Classifying Procedures Using the Hermite Series Estimate of Multivariate Probability Densities. IEEE Trans. Information Theory IT-17, 364–366 (1981)
5. Kurzyński, M.: On the identity of optimal strategies for multistage classifiers. Pattern Recognition Letters 10, 39–46 (1989)
6. Rafajlowicz, E.: Context-dependent neural nets – Problem statement and examples. In: Proc. 3rd Conf. Neural Networks Applications, Zakopane, Poland, May 1999, pp. 557–562 (1999)
7. Rafajlowicz, E.: Learning context dependent neural nets. In: Proc. 3rd Conf. Neural Networks Applications, Zakopane, Poland, May 1999, pp. 551–556 (1999)
8. Ruta, D., Gabrys, B.: An Overview of Classifier Fusion Methods. Computing and Information Systems 7, 1–10 (2000)
9. Sun, Q.S., et al.: A new method of feature fusion and its application in image recognition. Pattern Recognition 38, 2437–2448 (2005)
10. Turney, P.: The identification of context-sensitive features: A formal definition of context for concept learning. In: Proc. 13th Int. Conf. Machine Learning (ICML 1996), Bari, Italy, pp. 53–59 (1996)
Computer Visual-Auditory Diagnosis of Speech Non-fluency
Mariusz Dzieńkowski¹, Wiesława Kuniszyk-Jóźkowiak², Elżbieta Smolka², and Waldemar Suszyński²
¹ Department of Informatics Systems, Faculty of Management, Lublin University of Technology
[email protected]
² Department of Biocybernetics, Institute of Computer Science, Maria Curie-Skłodowska University, Lublin
[email protected]
Summary. The paper focuses on the visual-auditory method of analysing utterances of stuttering people. The method can be classified as an intermediate solution between traditional auditory and automatic methods. One of the authors prepared a special computer program, DiagLog, for carrying out the visual-auditory analysis; it can be used by logopaedists to make a diagnosis. The speech disfluencies are assessed by observing the spectrum and the envelope of fragments of recordings while simultaneously listening to them. A collection of 120 few-minute recordings of 15 stuttering people was used to verify the correctness of the method and to compare it with the traditional auditory technique. All the samples were analysed by means of the auditory and the visual-auditory method by two independent experts. The diagnosis using the additional visual aspect proved to be more effective in detecting speech non-fluencies, and in classifying and measuring them.
1 Introduction
The experience of many researchers shows that the auditory method used to assess stuttering is imperfect because of its subjectivity and the discrepancies between listeners' opinions. So far there have been no objective and sufficiently effective automatic ways of diagnosing this speech disorder [1-6]. Therefore the authors of this work suggest supporting the acoustic method by a simultaneous visual analysis of the spectrum and of the course of the signal amplitude in time (the speech envelope, oscillogram). The spectrum is used because, after the preliminary spectral analysis made by the hearing organ, the input speech signal is received by the brain in the form of a spectrogram (the analysis-by-synthesis theory of speech perception) [6,7]. While diagnosing stuttering one should analyse not only the non-fluent fragments of utterances, as the whole structure of the utterance is disturbed. Hence the analysis of the envelope of long statements containing both fluent and characteristic non-fluent fragments needs to be made. The analysis is carried out
by smoothing the acoustic pressure changes in time intervals. As a result of such a transformation the information content of the speech signal is reduced, and therefore a complex analysis of the amplitude-time structure of long utterances becomes possible [6,8-10]. With the aim of realizing the visual-auditory method, one of the authors developed the DiagLog computer program. It enables a complex system diagnosis and evaluation of the therapy results.
2 Computer Program for the Visual-Auditory Analysis
DiagLog takes data from WAVE files which use standard Pulse Code Modulation. The WAVE files are of high quality and the algorithm for recording and playing them has a relatively simple structure. The optimal parameters for recording human speech are: a sampling frequency of 22050 Hz, a 16-bit amplitude resolution and one channel (mono). The sampling frequency is determined by the Nyquist theorem as well as by the top range of frequencies produced by the human vocal tract (below 10000 Hz) [11]. The computer program is based on the Fourier transform, especially on the FFT algorithm [12,13]. The transform is frequently used in the analysis and processing of speech signals recorded in the form of digital files. It is a mathematical procedure which computes the harmonic, or frequency, content of a discrete signal [14]. Before the Fourier transform is computed, the input data sequence often undergoes windowing to minimise unfavourable spectral leakage; therefore the Hamming window is used in the program for this purpose [15]. The program gives the possibility to change the linear scale into the Bark scale. The Bark scale reflects the nonlinearity of processing in the human hearing system; the hearing band is divided here into 24 subbands [16]. The conversion of the linear frequency scale expressed in Hertz into a perceptual scale in Bark units is possible thanks to the formulas of Zwicker, Traunmüller, Schroeder, Fourcin and others. The Traunmüller formula is the best approximation for applications analysing speech signals [17]. DiagLog enables listening to the sound as well as the simultaneous visual observation of a two-dimensional coloured spectrum and the amplitude speech envelope. Thanks to the program the speech disfluencies can be identified more easily and classified in a more precise manner. The information about all the non-fluent fragments is kept in a register connected with the analysed sound file. The presentation of disfluencies with information about their types is given in a coloured oscillogram. The program shows a list of all non-fluent fragments or of selected types of disfluencies concerning the analysed utterance of the recorded person. The DiagLog program uses three charts for the visual analysis of sound files. The oscillogram is the simplest transformation of an acoustic signal into a digital form presenting the amplitude-time course.
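The Hz-to-Bark conversion mentioned above can be realised, for instance, with the commonly quoted form of the Traunmüller formula; the exact variant implemented in DiagLog is not specified in the paper, so the sketch below should be read as an approximation.

```python
def hz_to_bark(f_hz):
    """Approximate critical-band rate (Bark) for a frequency in Hz,
    using the Traunmüller formula z = 26.81*f/(1960+f) - 0.53."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def bark_to_hz(z):
    """Inverse of the above mapping."""
    return 1960.0 * (z + 0.53) / (26.28 - z)

# e.g. hz_to_bark(1000.0) is roughly 8.5 Bark
```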
Fig. 1. The main window of the DiagLog program presenting a fragment of an utterance of the stuttering person
Such a presentation of the sound contains a lot of information; however, it does not fully reflect the knowledge about the sound structure resulting from the complexity of speech [18]. The spectrogram and the envelope show the same fragment of the utterance. The chart of the amplitude speech envelope enables a preliminary assessment of the quality of the recording, where the level of noise is clearly visible. The view of the spectrum also enables measurement of single phonation times and of the pauses between them, as well as measurement of the total phonation time and pauses. Moreover, the area under the speech envelope of the whole utterance or of a selected fragment can be computed. The area under the envelope is only slightly influenced by possible noises and disturbances connected with speech [6,8-10]. The DiagLog program eliminates acoustic disturbances by setting a suitable cut-off level and taking it into consideration when calculating the area under the envelope course. The maximum level of such disturbances does not usually exceed 10%. The program allows controlling the smoothing of the envelope by setting a time window. The optimal size of the envelope time window used in the research is 40 ms; the program enables smooth adjustment with 0.1 ms precision.
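A possible realisation of the envelope smoothing and of the area-under-envelope measure with a noise cut-off is sketched below. The 40 ms window and the roughly 10% cut-off are the values quoted in the text; rectification followed by a moving average is an assumption of this sketch, and the actual DiagLog procedure may differ in details.

```python
import numpy as np

def speech_envelope(signal, fs, window_ms=40.0):
    """Amplitude envelope: rectified signal smoothed with a moving-average
    window of the given length (in milliseconds)."""
    win = max(1, int(round(fs * window_ms / 1000.0)))
    kernel = np.ones(win) / win
    return np.convolve(np.abs(signal), kernel, mode="same")

def area_under_envelope(envelope, fs, cutoff_fraction=0.10):
    """Area under the envelope above a cut-off level that removes low-level
    noise; the cut-off is a fraction of the maximum envelope value."""
    cutoff = cutoff_fraction * envelope.max()
    above = np.clip(envelope - cutoff, 0.0, None)
    return above.sum() / fs          # integrate over time (seconds)
```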
Fig. 2. The illustration of the principle for calculating the phonation times (t1...t5) and pauses (p1...p4) as well as the area under the speech envelope (upper darker areas)
3 Comparison of the Auditory and the Visual-Auditory Method of Diagnosis for Stuttering People
Diagnosing utterances of stuttering people involves a thorough analysis of their utterances in order to determine an indicator of the degree of severity of the speech disorder; the stuttering intensity functions as a typical parameter [6,19]. The research material was a collection of 150 few-minute recordings of 15 stuttering people. All the speech samples were analysed by means of the auditory and the visual-auditory method by two independent experts. The aim of the research was to compare the correctness of both methods. When the auditory method was applied, subsequent 10-second fragments of the recorded utterances were listened to twice, all speech non-fluencies were counted and classified, and the observed results were registered. Whereas the auditory analysis was made by means of the WaveStudio program, the visual-auditory method was based entirely on the DiagLog program. Using the DiagLog software, the recorded utterances were listened to in four-second fragments with simultaneous observation of the spectrograms. The person making the analysis selected, classified, marked the borders and also measured the duration of the encountered disfluencies. The final assessment of the recordings of utterances of stuttering people was made after counting all errors and all words in the particular recordings. The degree of severity of the speech disorder was determined by dividing the number of errors by the number of all words and multiplying by 100 in order to obtain a percentage representation of the result. To compare the effectiveness of the auditory and the visual-auditory assessment of the recorded utterances of stuttering patients, Student's t-test for related samples was used, because both analyses concerned the same group of people. For the purpose of the statistical analysis, data of various stuttering intensity from 15 people from 8 registrations was used, which resulted in 120 cases. The results of the auditory and the visual-auditory method of diagnosis differ significantly in statistical terms (t = −10.632; p = 0.0000), no matter what significance level is used. In the comparisons the average stuttering frequency was used as a
Fig. 3. Differences in the number of errors detected by two experts by means of the auditory and the visual-auditory method
measure. It was defined as the number of non-fluencies for all the people in all experimental situations divided by the number of words uttered by all the people in all experimental situations. Figure 3 presents the total numbers of non-fluencies detected by both experts during the auditory and the visual-auditory analysis. It turned out that using the auditory method the first expert missed 649 errors whereas the second one missed 427. Analysing the differences in the levels of stuttering frequency established by the auditory and the visual-auditory method is as important as the percentage of particular types of errors occurring in the speech of stuttering people. For this purpose the number of particular types of errors in every utterance was counted; however, content, syntax or grammatical mistakes were not taken into consideration. The research focused on 6 groups of disfluencies: intrusions, repetitions, prolongations, blockades and other types of errors. Assessment of the number of particular errors is not only time-consuming but also tiring and complicated. The results show differences in the numbers of occurrences of particular error types in the same utterance depending on whether it was analysed by means of the auditory or the visual-auditory method. Far more disfluencies were detected when the visual-auditory method was used. Taking into account the type of errors, intrusions and repetitions were dominant in the examined group. The biggest number of errors appeared in spontaneous speech. Classification of complex and long incorrect fragments posed the biggest problem during the research.
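The severity indicator and the comparison of the two assessment methods can be expressed compactly; the sketch below (with illustrative, hypothetical variable names) computes the percentage indicator and a paired Student's t-test over the per-case results.

```python
import numpy as np
from scipy import stats

def severity(num_errors, num_words):
    """Degree of severity: detected non-fluencies per 100 words."""
    return 100.0 * num_errors / num_words

# Hypothetical per-case indicators obtained with each method (120 cases):
# auditory = np.array([...]); visual_auditory = np.array([...])
# t, p = stats.ttest_rel(auditory, visual_auditory)   # related-samples t-test
# print(t, p)
```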
Fig. 4. Spectrogram of the fragment of the utterance given by AM (talking with the visual echo with a 100-millisecond delay)
Figures 4 and 5 present spectrograms generated by means of the DiagLog program which show examples of disfluencies omitted when the auditory method was used but detected by means of the visual-auditory method. With the help of the spectrograms it is possible to analyse recordings thoroughly and detect fragments containing errors imperceptible by means of the auditory method. Figure 4 shows an example of a fragment of an utterance with a blockade which was not detected during the auditory analysis. This type of error is visible in the spectrogram as a clearly marked pause, lasting over 1270 ms, preceding the actual word. Moreover, the pause is filled with a single deformed "g" consonant, an unsuccessful attempt at uttering the word "gościem" (English: "guest"). Because of the high intensity of disfluencies in the speech of stuttering people, only the most severe non-fluencies are detected. Figure 5 illustrates a situation in which two big, neighbouring errors cover the surrounding area, hence the other errors may not be noticeable. In this case a small blockade before the word "którego" (English: "whose") was left out in the auditory analysis, as the repeated stop consonant was simply imperceptible. The presented examples show that visualization of speech helps with identification and classification of all disfluencies. The visual-auditory method used for evaluation of stuttering by means of the DiagLog program enables measurement of the duration of non-fluent episodes. This parameter provides a lot of information vital for making a diagnosis, monitoring progress and also for assessment of the final results of the therapy. Earlier research shows that there is a high correlation between the duration of the therapy and the shortening of non-fluent episodes [20]. In the course of time it will thus be possible to observe the changes during the logopaedic therapy. The examined group of stuttering people showed a tendency towards the biggest numbers of repetitions and intrusions. The analyses of the duration distributions of all errors occurring during the spontaneous speech of all stuttering participants of the therapy show that most intrusions lasted from 200 to 300 ms.
Fig. 5. Spectrogram of the fragment of the utterance given by FZ (spontaneous speech)
4 Conclusion
The comparison of both methods of evaluating utterances of stuttering patients shows that the visual-auditory analysis is more precise and effective than the auditory one. Thanks to the DiagLog program both experts were able to detect more speech errors. Having the spectrum of the recording at their disposal, they were also able to classify all the disfluencies more easily and effectively. Their diagnosis can be supported by measuring the duration of single non-fluencies as well as by parameters obtained from analysing the course of the amplitude speech envelope. Earlier experiments which used the speech envelope show that the areas under the speech envelope determined for longer fragments of recordings are much lower in disfluent than in fluent utterances. Furthermore, the total phonation time is longer in fluent than in non-fluent fragments of speech. One can also expect these parameters to change in the course of the therapy, as the speech of the stuttering person becomes more fluent. Thus the amplitude speech envelope parameters can also play a significant role in assessing speech fluency [6,8-10]. In conclusion, the presented DiagLog program is a comprehensive computer tool enabling a more complete diagnosis of stuttering as well as evaluation of progress made during the therapy. The program facilitates diagnosing disturbed speech and can be successfully used by logopaedists. Still, it must be noted that the presented research focused mainly on one aspect, i.e., stuttering intensity. Further stages of the work will concentrate on supporting the visual-auditory diagnosis with parameters of the speech envelope, which has been only marginally used in the work so far.
References
1. Czyżewski, A., Kostek, B., Skarżyński, H.: Technika komputerowa w audiologii, foniatrii i logopedii. Exit, Warszawa (2002)
2. Suszyński, W., Kuniszyk-Jóźkowiak, W., Smolka, E.: Acoustic methods in diagnosing and therapy of speech disorders. Maintenance and Reliability 7, 19–27 (2000)
3. Kuniszyk-Jóźkowiak, W., Smolka, E., Raczek, B., Suszyński, W.: Metody badań zaburzeń mowy. In: Biosystemy: Biocybernetyka i inżynieria biomedyczna, EXIT, Warszawa, pp. 176–197 (in Polish)
4. Kuniszyk-Jóźkowiak, W., Suszyński, W., Smolka, E.: Metody akustyczne w diagnostyce i terapii niepłynności mowy. In: Biopomiary: Biocybernetyka i inżynieria biomedyczna 2000, t. II, EXIT, Warszawa (2000) (in Polish)
5. Kuniszyk-Jóźkowiak, W., Sztubacki, M.: Perceptive assessment of stutterers' speech. In: Proc. 24th Congr. Int. Ass. Logoped. Phoniatr., Amsterdam, pp. 721–723 (1998)
6. Kuniszyk-Jóźkowiak, W.: Akustyczna analiza i stymulacja płynności mówienia. Praca habilitacyjna, Lublin (1996)
7. Kuniszyk-Jóźkowiak, W.: Procesy wytwarzania i percepcji mowy u osób jąkających się. Logopedia 22, 83–93 (1995) (in Polish)
8. Kuniszyk-Jóźkowiak, W.: A comparison of speech envelopes of stutterers and nonstutterers. The Journal of the Acoustical Society of America 100(2), Pt 1, 1105–1110 (1996)
9. Kuniszyk-Jóźkowiak, W.: Distribution of phonation and pause durations in fluent speech and stutterers' speech. Archives of Acoustics 17(1), 7–17 (1992)
10. Kuniszyk-Jóźkowiak, W.: The statistical analysis of speech envelopes in stutterers and non-stutterers. Journal of Fluency Disorders 20, 11–23 (1995)
11. Dzieńkowski, M., Kuniszyk-Jóźkowiak, W., Smolka, E., Suszyński, W.: Cyfrowa analiza plików dźwiękowych. In: VI Lubelskie Akademickie Forum Informatyczne, PTI Lublin, pp. 71–78 (2002) (in Polish)
12. Press, W., Teukolsky, S., Vetterling, W., Flannery, B.: Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, Cambridge (1988–1992)
13. Rabiner, L.R., Schafer, R.W.: Digital Processing of Speech Signals. Bell Laboratories, Englewood Cliffs (1979)
14. Lyons, R.G.: Wprowadzenie do cyfrowego przetwarzania sygnałów. WKŁ, Warszawa (2000) (in Polish)
15. Tadeusiewicz, R.: Sygnał mowy. WKŁ, Warszawa (1988) (in Polish)
16. Zieliński, T.P.: Cyfrowe przetwarzanie sygnałów. WKŁ, Warszawa (2005) (in Polish)
17. Zwicker, E., Fastl, H.: Psychoacoustics – Facts and Models. Springer, Berlin (1990)
18. Suszyński, W., Kuniszyk-Jóźkowiak, W., Smolka, E., Dzieńkowski, M.: Wybrane metody komputerowej analizy sygnału w diagnozowaniu niepłynności mowy. In: Lubelskie Akademickie Forum Informatyczne, Lublin (2002) (in Polish)
19. Bloodstein, O.: A Handbook on Stuttering. Singular Publishing Group, Inc., San Diego (1995)
20. Dzieńkowski, M., Kuniszyk-Jóźkowiak, W., Smolka, E., Suszyński, W.: Komputerowa zindywidualizowana terapia niepłynnej mowy. In: XIII Konferencja Naukowa Biocybernetyka i Inżynieria Biomedyczna, tom I, Gdańsk, pp. 546–551 (in Polish)
Low-Cost Adaptive Edge-Based Single-Frame Superresolution

Zbigniew Świerczyński and Przemysław Rokita

Cybernetics Faculty, Military University of Technology, S. Kaliskiego 2, 00-908 Warsaw
[email protected]
[email protected]
Summary. In this paper we propose a simple but efficient method for increasing the resolution of digital images. Such algorithms are needed in many practical applications, for example digital zoom in camcorders or conversion of conventional TV content to the high-resolution HDTV format. In general, the main problem when converting an image to a higher resolution is the lack of high-frequency components in the resulting image. The result is the blurred appearance of images obtained using conventional algorithms such as the commonly used bilinear or bicubic interpolation. High-frequency components in the frequency domain correspond to image edges in the spatial domain. Building on this simple observation, we propose here to reconstruct the high-frequency components and the sharp appearance of the resulting images using edge information.
1 Introduction and Previous Work Single-frame superresolution and, in general, algorithms for increasing the resolution of digital images are based on interpolation. In general, the information about the interpolated function is given through functionals φ_j acting on the function f, i.e., Φ(f) = {φ_j(f), j = 0, 1, ..., n}. The interpolation task is to find, in a defined class, an estimate g of the given function f for which the functionals take the same values, φ_j(f) = φ_j(g) for j = 0, 1, ..., n, i.e., Φ(f) = Φ(g) [1]. Examples of interpolating functions are algebraic polynomials, trigonometric polynomials, and spline functions. Such operations are performed, for example, in all digital cameras, video cameras, and scanners. The effects of interpolation are magnification or reduction of a digital picture or a change of its aspect ratio. For example, when changing the width of an image from 600 to 800 pixels, the number of pixels in each line has to be increased by 200, and interpolation mechanisms are used to calculate the values of the new pixels. Algorithms of interpolation can be divided into non-adaptive and adaptive ones [10], [14]. The characteristic feature of the first group of algorithms is that they interpolate in a manner defined in advance, independently of the processed image. These algorithms are computationally uncomplicated
and easy to implement. The second group of algorithms is characterized by the ability to detect local, characteristic areas in the neighborhood of the processed pixel and to take this into account when calculating the value of the interpolated pixel. Those algorithms can give better quality results. The price of the better visual quality of images obtained this way is the increased mathematical complexity of the algorithms and the higher computational cost of the implementation. Conventional non-adaptive methods for increasing the resolution of digital images (e.g., nearest neighbor [5], [10], [14], bilinear [2], [5], [10], [14] or bicubic [2], [5], [10], [14]) lead to higher-resolution images characterized, depending on the applied method, by a significant loss of sharpness or by pixelated areas [14]. Such methods give poor visual results, decreasing at the same time the chance that the viewer will be able to recognize picture details - like, for example, the numbers on a registration plate. In our work we have focused on obtaining an algorithm for increasing the resolution of digital images that is able to preserve the sharpness of object edges while at the same time preserving soft transitions inside objects and in the background, avoiding a pixelated appearance. In our new method we take into account the information about edges found in the original image, and this information is used later on to guide the interpolation process. In our algorithm we combine elements of non-adaptive (iterative) and adaptive approaches to increase resolution. This way we are able to obtain good quality images (comparable to the effects of adaptive methods) at a low computational cost - as only a small percentage of the generated pixels (i.e. pixels lying on detected edge paths) needs the special adaptive treatment.
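For reference, the conventional non-adaptive baseline discussed above can be reproduced in a few lines of MATLAB. The snippet below is an illustrative sketch only; the file name, the grayscale assumption and the variable names are ours, not taken from the paper:

  img = double(imread('input.png'));      % grayscale input image (assumption)
  scale_value = 4;                        % resolution increase factor
  [h, w] = size(img);
  [Xq, Yq] = meshgrid(linspace(1, w, w*scale_value), linspace(1, h, h*scale_value));
  img_bicubic = interp2(img, Xq, Yq, 'cubic');   % blurred edges are expected here

Images produced this way exhibit exactly the blurred edges that the hybrid method described in the next section is designed to repair.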
2 Hybrid Method for Increasing Resolution of Digital Images Using Edge Information To improve the quality of the resultant image (especially its sharpness) the presented method makes use of edges detected in the original image. Our algorithm sharpens edges in the generated high-resolution image by overwriting the edges blurred in the standard interpolation process with values of pixels appropriately taken from neighboring areas, along the reconstructed edges. Our method is based on a special treatment of areas that correspond to edges in the original image. Information about edges extracted from the input image is used to define the pixels that should be analyzed and modified. The main step of the analysis consists in defining the tangent to the edge at the given pixel. After defining this direction, it is possible to begin local, adaptive processing of the given pixel's surroundings. The algorithm checks for the occurrence of one of four possible variations of the edge position in relation to the chosen pixel. The direction of the edge and the way the analyzed pixel adjoins it are taken into consideration in defining the type of the variation. The following variations may occur: horizontal edge, vertical edge, corner type 1 (Fig. 1a), corner type 2 (Fig. 1b).
Fig. 1. Corners: on the left side corner type 1 (a), on the right side corner type 2 (b)
The corners presented in Fig. 1 were obtained as the result of the bicubic interpolation of the corners marked by rectangles in Fig. 2 (the interpolated solid rectangle used in this example is black; its color in Fig. 2 was changed to gray for better presentation). Taking into consideration the described positioning of the edges, four different overwriting procedures are performed, depending on the local features of the image. The remaining two types of corners are not considered, since in their case the proper overwriting of pixels is obtained by applying the algorithms used for processing the horizontal and the vertical edge. In the case of corners of type 1 and 2, using such a simplified approach would lead to the undesirable effects shown in Fig. 3. The benefit of this method is that it combines iterative methods (whose low complexity speeds up image processing) and adaptive ones, which positively influence the quality of the obtained image. Detecting edges in the image is the first stage of its processing. Then the whole original image is processed with a non-adaptive method (e.g., bicubic interpolation). In the image obtained this way, the areas that could have been created by blurring the edges are analyzed - information about the localization of such areas is acquired by analyzing the original image (detecting edges). If these areas fulfill certain conditions, they are overwritten by the pixels from the neighboring areas.
Fig. 2. The original image from which the blurred corners presented in Figs. 1a and 1b were obtained
Fig. 3. Effect of overwriting pixels without taking the occurrence of a corner into account
It should be emphasized that the adaptive methods in this algorithm are used only in certain areas (the neighborhood of pixels belonging to the edges). Such an approach significantly speeds up the operation of the whole algorithm in comparison to processing the whole image with adaptive methods, and the obtained image is of the same good quality. The method of detecting edges has been specially adapted for this algorithm. In our earlier work on single-frame enhanced superresolution we were using standard edge detection methods and edges as narrow as possible [14]. The algorithm located a significant change of the neighboring pixels' values and marked the edge there, drawing it on one side of the detected boundary (Fig. 4a). In the case of the new algorithm proposed here, it turned out to be necessary to mark pixels on both sides of the boundary (Fig. 4b). The algorithm of edge detection applied earlier drew them outside the area limited by the edges [14]. Here we need to obtain an annotation of the edge by two pixel lines. The formula for detecting and marking edges for the purposes of our algorithm is based on a modified thresholded Laplacian and is computed as follows:

  edge(x,y) = abs(image(x,y)*4 - image(x-1,y) - image(x,y-1) - image(x+1,y) - image(x,y+1));
  if edge(x,y) > 255
      edge(x,y) = 255;
  end
Fig. 4. a) On the left side - image of edges drawn on one side of the boundary, b) On the right side - image of edges drawn on both sides of the boundary
Fig. 5. a) Original image, b) image of the edges, c) image of the edges split apart at the resolution scale increased x4
where: edge(x, y) - pixel with coordinates (x, y) of the image containing the generated edges, image(x, y) - pixel with coordinates (x, y) of the original image. What is more, having an edge defined by two lines, after sliding them apart in an image with increased resolution (Fig. 5), it is possible to find and examine the relations of pixels situated on the line perpendicular to the edge. This way we are able to enhance the sharpness of edges in the produced higher-resolution image. To limit the number of pixels to be processed by the second stage of the algorithm, an initial selection of edges is performed - i.e., if the value of a pixel exceeds the given threshold it is left unchanged, otherwise it is set to zero. Additionally, to improve the algorithm described below, the relations between pixels belonging to the blurred edges are checked. Pixels with coordinates (x*scale_value, y*scale_value) that are included in the area of the blurred edges are examined, and it is checked how many neighbors of such a pixel also belong to the area of the blurred edges. Gradients of eight directions of brightness changes within the range of the radius = (scale_value - 1) are checked in relation to those pixels. From this data the line perpendicular to the edge is found. Having the centre points and the neighbors lying on the line perpendicular to the edge, we can appropriately overwrite the values of neighboring pixels in the image of higher resolution in order to improve its sharpness. The algorithm of checking relations between pixels to define the direction is as follows:
for k=0:scale_value-1 img_superres(x+w,y+k)=auxiliary_img(x,y+k); end; end; end; end if new_edge(x,y+scale_value)>0 if abs(auxiliary_img(x,y) auxiliary_img(x,y+scale_value))>threshold %vertical edge for w=0:scale_value-1 for k=1:scale_value-1 img_superres(x+w,y+k)=auxiliary_img(x+w,y); end; end; end; end if new_edge(x+scale_value,y)>0 && new_edge(x,y+scale_value)>0 if abs(auxiliary_img(x,y) auxiliary_img(x+scale_value,y))>threshold && abs(auxiliary_img(x,y) auxiliary_img(x,y+scale_value)) >threshold % corner type 1 for w=1:scale_value-1 for k=1:scale_value-1 img_superres(x+w,y+k)=auxiliary_img(x,y); end; end; end; end if new_edge(x-scale_value,y)>0 && new_edge(x,y-scale_value)>0 if abs(auxiliary_img(x,y) auxiliary_img(x-scale_value,y))>threshold && abs(auxiliary_img(x,y) auxiliary_img(x,y-scale_value))>threshold % corner type 2 for w=1:scale_value-1 for k=1:scale_value-1 img_superres(x-w,y-k)= auxiliary_img(x-scale_value,y-scale_value); end; end; end; end; end
where:
height_img - height of the interpolated image,
width_img - width of the interpolated image,
new_edge - higher-resolution image containing the split-apart edge pixels of the original-resolution image (Fig. 5c),
auxiliary_img - higher-resolution auxiliary buffer from which pixel values are taken for overwriting,
img_superres - higher-resolution image containing the final result,
threshold - minimal brightness difference between pixels of the auxiliary image that correspond to edge pixels detected in the original image,
scale_value - resolution increase factor (it is also the radius of the pixel's neighborhood, expressed in pixels, which is analyzed and considered for further processing).
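The edge map itself can also be computed without a per-pixel loop. The following vectorized MATLAB sketch is our own illustration of the modified thresholded Laplacian described above; the file name and the threshold value are assumptions, not taken from the paper:

  img = double(imread('input.png'));                       % original grayscale image (assumption)
  lap = conv2(img, [0 -1 0; -1 4 -1; 0 -1 0], 'same');     % 4*center minus the four neighbors
  edge_map = min(abs(lap), 255);                           % clip to 255 as in the formula above
  edge_threshold = 64;                                     % assumed value for the initial selection
  edge_map(edge_map < edge_threshold) = 0;                 % keep only sufficiently strong edge pixels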
Fig. 6. Superresolution image (resolution increased four times): on the left - the original, in the middle - image obtained using the proposed new method, on the right side - image obtained using conventional bicubic interpolation
Results and Summary The results of exemplary experiments verifying the usefulness of the developed algorithm for processing various types of images are presented in this part of the paper. They were compared to the results of increasing resolution using the bicubic method. In the first case the resolution of the original image was 32x32 pixels and it was increased fourfold (Fig. 6). For better presentation of the achieved results, the same pixel size was maintained in the original and in the resultant images, which means the resultant image is displayed enlarged by the same factor by which the resolution was increased.
Fig. 7. Superresolution image - resolution increased four times: on the top - the original, on the left side - image obtained using proposed new method, on the right side - image obtained using conventional bicubic interpolation
Fig. 8. Superresolution image - resolution increased twice: on the left side - the original, in the middle - image obtained using proposed new method, on the right side - image obtained using conventional bicubic interpolation
In the second example (Fig. 7) the resolution of the original image was 60x60 pixels and it was enlarged four times (up to 240x240 pixels). In the third example (Fig. 8) the resolution of the original image was 115x115 pixels and it was enlarged two times (up to 230x230 pixels). Looking at the results of the experiments it can be noticed that this approach gives good visual results. Using our method, sharp edges can be reconstructed in the output high-resolution image, where high-frequency information is originally missing. The single-frame superresolution method we are proposing here is based on edges extracted from the input image. We use the directional information obtained from those edges to locally guide the interpolation process. This enables us to obtain better quality, sharper images. The proposed approach is based on local operators, giving a low computational cost and a simple implementation.
References
1. Jankowska, J., Jankowski, M.: A Review of Methods and Numerical Algorithms. Wydawnictwa Naukowo-Techniczne (Scientific-Technical Publishers), Warsaw (1988)
2. Wróbel, Z., Koprowski, R.: Processing of an Image in MATLAB Program. Silesian University Publisher, Katowice (2001)
3. Malina, W., Śmiatacz, M.: Methods of Digital Processing of Images. Akademicka Oficyna Wydawnicza EXIT (Academic Publisher EXIT), Warsaw
4. Press, W.H., Flannery, B.P., Teukolsky, S.A., Vetterling, W.T.: Numerical Recipes in C: The Art of Scientific Computing, 2nd edn. Cambridge University Press, Cambridge (1992), http://www.nrbook.com/a/bookcpdf.php
5. Image Processing Toolbox - user guide for use with MATLAB, version 3. The MathWorks Inc. (2001)
6. Tadeusiewicz, R., Korohoda, P.: Computer Analysis and Images Processing. Publisher of Telecommunication Progress Foundation, Cracow (1997)
7. Bose, N.K., Lertarattanapanich, S., Chappalli, M.B.: Superresolution with second generation wavelets. Signal Processing: Image Communication 19, 387–391 (2004)
8. Aniśko, C.: Edges detection, http://anisko.net/agent501/segmentacja/DetekcjaKrawedzi/
9. Candocia, F.M., Principe, J.C.: Superresolution of images based on local correlations. IEEE Transactions on Neural Networks 10(2), 372–380 (1999)
10. Chen, T.: A study of spatial color interpolation algorithms for single-detector digital cameras, http://www-ise.stanford.edu/~tingchen/
11. Freeman, W.T., Jones, T.R., Pasztor, E.C.: Example-based Super-resolution. IEEE Computer Graphics and Applications (March/April 2002)
12. Baker, S., Kanade, T.: Super-resolution optical flow. Technical Report CMU-RI-TR-99-36, Carnegie Mellon University (1999)
13. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 2nd edn. Prentice Hall, Englewood Cliffs (2002)
14. Świerczyński, Z., Rokita, P.: Increasing resolution of digital images using edge-based approach. Opto-Electronics Review 16(1), 76–84 (2008)
Eye and Nostril Localization for Automatic Calibration of Facial Action Recognition System

Jaromir Przybylo

AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Krakow, Poland
[email protected]

Summary. The work presented here focusses on automatic facial action recognition using image analysis algorithms and the application of facial gestures to machine control. There are many sources of variation in facial appearance which make recognition a challenging task. Therefore, machine adaptation to the human and the environment is — in our opinion — the key issue. The main contribution of this paper is an eye and nostril localization algorithm designed to initialize a facial expression recognition system or recalibrate its parameters during execution.
1 Introduction The primary means of human-computer interaction is — and probably will remain — the graphical user interface based on windows, pointing devices (such as the mouse) and data entry using a keyboard. There are also many specialized devices which extend the means of interaction in certain applications, for example tablets and 3D gloves used in CAD, gamepads, etc. Unfortunately, typical input devices are not designed for people with severe physical disabilities. People who are quadriplegic — as a result of cerebral palsy, traumatic brain injury, or stroke — often have great difficulties or even cannot use a mouse or keyboard. In such situations facial actions and small head movements can be viewed as an alternative way of communication with machines (computer, electric wheelchair, etc.). There are various approaches to building mouse alternatives for people with physical disabilities. Some of them use infrared emitters that are attached to the user's glasses, head band, or cap [6]. Others try to measure eyeball position changes and determine gaze direction [1] [15]. Also, there are attempts to leverage electroencephalograms for human-machine interaction [16] [4]. Typically these systems use specialized equipment such as electrodes, goggles and helmets, which are uncomfortable to wear and may also cause hygienic problems. On the other hand, optical observation gives the possibility of easy adaptation to serve the special needs of people with various disabilities and does not require uncomfortable accessories. Therefore, vision based systems have been developing rapidly over the past decade.
2 Vision Based Facial Action Recognition There are many approaches to vision-based facial action recognition. One of them is the "Camera Mouse" system [2], which provides computer access for people with severe disabilities. The user's movements are translated into movements of the mouse pointer on the screen. The system tracks facial features such as the tip of the user's nose or a finger. The visual tracking algorithm is based on a correlation technique and requires manual selection of the tracked features. The automatic framework to analyze facial action units proposed in [10] leverages the red-eye effect to robustly detect the pupils. Infrared light reflected from the retina is detected by an infrared-sensitive camera equipped with IR LEDs. Pupil positions are then used for automatic initialization of the recognition algorithm. The system depends upon robust pupil tracking, which fails when the subjects are wearing glasses (bright spots in the difference image due to specularity) or in the presence of strong direct sunlight. The above examples show typical problems in designing a robust facial action recognition system. Many frameworks (especially those based on feature extraction algorithms) require manual intervention during initialization. Also, currently available systems are often very restricted due to limited robustness and hard constraints imposed on the image acquisition conditions. The same observation has been made by Fasel and Luettin in their survey of automatic facial expression analysis methods [7]. Sources of variation in facial appearance are categorized [8] into two groups — intrinsic factors related to the physical nature of the face (identity, age, sex, facial expression) and extrinsic factors due to the interaction of light with the face and observer (illumination, viewing geometry, imaging process, occlusion, shadowing). It is the facial expression that the system must learn to understand and interpret. Other sources of variation have to be modelled so that their effects can be neglected.
3 Algorithm Outline Although many eye tracking algorithms exist, most of them are based on the same principle. First the eyes are detected and localized using motion analysis techniques [5] [14], combined with post-processing methods. For eye detection static image analysis is also used — image profiles [11] or active IR illumination based pupil detection [19]. Then the tracking algorithm is initialized with the extracted eye templates and, optionally, blink detection is performed. In some situations tracking may fail — especially when a significant head pose change occurs — and tracker re-initialization is needed. Also, the eye detection algorithm may fail due to illumination changes or image noise. Our algorithm is based on both static image analysis and motion detection performed simultaneously. The algorithm outline is given in Fig. 1. Each frame of the image sequence is transformed into the YCbCr color space. If the image quality is poor (high noise level), an additional preprocessing stage is executed (median filtering).
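A minimal MATLAB sketch of this per-frame preprocessing is shown below; it is an illustration only (the flag name is our assumption, and both functions require the Image Processing Toolbox), not the Simulink model used by the author:

  ycbcr = rgb2ycbcr(frame);                    % transform the frame into YCbCr color space
  Y  = ycbcr(:,:,1);  Cb = ycbcr(:,:,2);  Cr = ycbcr(:,:,3);
  high_noise = true;                           % assumed flag describing poor image quality
  if high_noise
    Y = medfilt2(Y, [3 3]);                    % median filtering of the luminance component
  end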
Fig. 1. Overview of the algorithm
Motion detection is performed on the luminance component by creating Motion Energy Images — MEI [3]. The MEI is defined as the union of a sequence of thresholded absolute difference images BW_diff (1). The threshold level has been selected experimentally. The MEI representation (Fig. 2b) shows where in the image the motion occurs. The τ parameter defines the temporal extent of a movement and is set to match the typical eye blink period. Summing all MEI representations gives information about how large the motion is. It can be used to detect head movements or significant scene lighting changes.

    MEI_τ(x, y, k) = ⋃_{i=0}^{τ−1} BW_diff(x, y, k − i)    (1)
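A minimal MATLAB sketch of eq. (1) is given below; the frame-stack interface and the variable names are our assumptions, not the paper's implementation:

  function MEI = motion_energy_image(frames, k, tau, thr)
    % frames: H x W x N stack of luminance images, with k > tau
    MEI = false(size(frames, 1), size(frames, 2));
    for i = 0:tau-1
      % BWdiff: thresholded absolute difference of consecutive frames
      BWdiff = abs(double(frames(:,:,k-i)) - double(frames(:,:,k-i-1))) > thr;
      MEI = MEI | BWdiff;                      % union over the last tau frames
    end
  end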
The aim of the object analysis stage is to remove from the binary MEI image the objects which do not fulfill the assumed criteria (object size, ratio of major to minor axis, object area divided by the area of its bounding box, ...). These criteria efficiently remove objects generated by image noise or small head movements. All remaining objects are considered as possible eye candidates. To select the proper pair of objects which are most likely located in the eye regions of the image, additional criteria are applied — the distance and the angle of the line passing through the eye candidates must lie within assumed limits. If more than two pairs of candidates are found, the one with the lowest weight is selected. The weights are proportional to the computed parameters (distance, angle) and to average values based on the morphological properties of the face. To detect a blink event, the remaining pair of eye candidates must exist in the same image region for N consecutive frames. At this stage of the algorithm both eye localization and blink detection are possible. Approximate eye
positions obtained are used to define regions of interest (ROIs) for accurate eye and nostril localization. The blink event triggers the feature (eye, nostril) localization algorithm to execute until a head movement or a significant scene lighting change is detected. In such a situation (for example when the user shakes the head) the feature detectors are suspended until the next blink. Feature detection is performed on the computed ROIs using both the luminance and chrominance components. These components are delayed by τ1 frames to match the MEI content. Two probability maps are created, each of them based on a different image representation. The probability map from the chroma is based on the observation that high Cb and low Cr values can be found around the eyes [9]. It is constructed by:

    P1_map = (1/3) · (I²_Cb + Ĩ²_Cr + I_Cb / I_Cr)    (2)
Since nostrils and eyes usually form dark blobs in the face image, the second probability map is constructed from the luminance component using a blob detection algorithm based on a scale-space representation [12] of the image signal. The entity used to construct the map is the square of the normalized Laplacian operator:

    P2_map = ∇²_norm L = σ · (L_xx + L_yy)    (3)

For eye detection the normalized sum of both maps is used — for nostrils the second map accordingly (Fig. 2c and d). To detect and localize the eyes, a binary map (4) is computed by thresholding the probability map with a level selected upon a statistical criterion (mean(eyeROI) + n · std(eyeROI)):

    BW_eye = 1 if (P1_map + P2_map) > eye_thres, 0 otherwise    (4)

After thresholding, several (or no) objects may exist on the binary maps. Eyes are detected when at least two object pairs have been found satisfying the following criteria: size greater than a threshold, and the distance and angle between the eye candidates within an assumed tolerance in relation to the values computed during the blink. Because in some situations eyebrows can be detected instead of eyes (similar response on the probability map), an additional verification step is executed. When two pairs of candidate eyes are found, the lowest one is selected (since the area below the eyes is usually uniform, without blobs). Nostril detection is performed on both the scale-space representation and the luma component. Possible nostril locations are computed using two different algorithms. The first algorithm finds several maxima on P2_map. The second method uses projection functions [18] to find probable nostril coordinates. Then the obtained positions are compared and the algorithm selects the coordinates found by both methods. Nostrils are detected when at least two object pairs have been found satisfying the following criterion — the angle of the line passing through the nostril candidates must be within an assumed tolerance in relation to the values computed during the blink.
Fig. 2. (a) Video 1a, (b) MEI, (c) eye prob. map (ROI), (d) nostril prob. map (ROI)
To select the most probable nostril locations from the remaining pair of objects, the candidates with the lowest sum of luma values at the nostril positions are taken.
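As an illustration only, eq. (2) and eq. (4) could be implemented along the following lines; the chroma scaling, the reading of Ĩ_Cr as the negated Cr plane and all function and variable names are our assumptions, not taken from the paper:

  function BWeye = eye_binary_map(Cb, Cr, P2map, n)
    % Cb, Cr: chroma planes of the eye ROI; P2map: blob map of the same ROI; n: threshold factor
    Cbn = double(Cb) / 255;  Crn = double(Cr) / 255;               % chroma scaled to [0, 1]
    P1map = (Cbn.^2 + (1 - Crn).^2 + Cbn ./ max(Crn, 1e-3)) / 3;   % eq. (2), guarded division
    P1map = P1map / max(P1map(:));
    s = P1map + P2map;                                             % normalized sum of both maps
    eyethres = mean(s(:)) + n * std(s(:));                         % statistical threshold on the ROI
    BWeye = s > eyethres;                                          % eq. (4)
  end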
4 Experimental Results The proposed algorithm has been modeled and tested on the MATLAB/Simulink platform running on a Windows-based PC (Intel Core2Duo, 2 GHz). It is impossible to evaluate the impact of all possible combinations of intrinsic and extrinsic factors, so the following video sequences have been selected for the tests (Fig. 2a, 3):
1. Good quality camera (Sony), image res. 352*288, uniform background:
   a) optimum lighting conditions (daylight, approx. 320 lux),
   b) optimum lighting conditions but slightly different pose,
   c) bad lighting conditions (daylight, approx. 40 lux),
   d) optimum lighting conditions, different facial action (eyebrow raised),
   e) complex lighting conditions — additional left-side illumination.
2. Low quality USB camera, image res. 320*240, complex background and lighting conditions (daylight, right-side illumination), person wears glasses, head movements between blinks exist.
3. Good quality camera (UNELL), image res. 320*240, complex background (moving objects), medium lighting conditions (overhead fluorescent lamps), other person, head movements.
4. Medium quality USB camera, image resolution 320*240, complex background, bad lighting conditions (overhead fluorescent lamps), other person, head movements, person wears glasses.
By selecting such video sequences we mainly evaluate the influence of lighting conditions, background complexity and the use of different cameras. Illumination changes, especially from a side spot light, often result in shadows which make localization a challenging task. Factors such as identity, individual facial expression and viewing geometry are also addressed. These are typical conditions occurring while a user is working with a computer. Tests have been performed using the same set of algorithm parameters applied to all videos. This expresses the requirement that an algorithm designed for system calibration should work well under various conditions without the need to adjust its parameters. However, tests for different sets of parameters have also been performed.
Fig. 3. Selected video sequences used for experiments
To evaluate blink detection accuracy on each of the video sequences, consecutive frames in which a blink event occurs have been manually annotated as a single "blink". Eye locations were manually annotated as well. The following statistics have been measured: the number of blinks detected and the eye position error measured on frames classified as blink. The overall blink-detection rate was good. There were cases when the number of detected blinks was greater than the number of annotated blinks. This situation occurs because a blink consists of two actions (closing and opening the eyes) marked as one blink but classified by the algorithm as separate blink events. From the standpoint of the main goal of the algorithm (automated calibration for the whole system) this is correct. Other blink detection methods can be initialized with the obtained results — e.g. the template-based detector described in [5]. Also, there were cases when blinks were detected on frames annotated as non-blink (sequences 2 and 4). That was mainly the result of eye or eyelid movements and can also be used for calibration. In all sequences except 4, the eyes have been located correctly. In the problematic sequence several blinks have been missed and during one blink event the eyes have been located incorrectly. This was the result of two factors — the user shook the head when blinking and the head was lightly tilted. It is possible to improve the blink-detection rate by adjusting the algorithm parameters. To evaluate the accuracy of the eye and nostril localization algorithm, position errors have been computed for frames starting with the blink event until a head movement or a significant scene lighting change is detected (MEI sum greater than a threshold). The following statistics have been measured: NF — number of valid frames in the sequence, TP — number of frames when feature candidates have been found, FD — number of false localizations (position error above the typical eye or nostril size), E1err, E2err, N1err, N2err — eye and nostril position errors [px]. Table 1 summarizes the results of feature localization. Eye localization results were good for sequences with optimum lighting conditions. For sequences with low and side light (1c, 2, 4), the localization effectiveness is lower. However, we notice that localization fails mainly when the eyes are closed and only the eyelids are visible. Also, in seq. 2 and 4 (where the errors are significant) the person wears glasses and low quality cameras are used. In video 4 several blinks were missed and there is one incorrect blink-eye localization. This affects eye localization because the results of blink detection initialize the algorithm (ROI). Experiments also show that eye localization in static images is more precise than by motion detection.
Table 1. Eye and nostril localization results

Video  NF   TP eye  FD eye1  FD eye2  E1err μ  E1err max  E2err μ  E2err max  TP nostril  FD nostr.1  FD nostr.2  N1err μ  N1err max  N2err μ  N2err max
1a     103  103     0        0        3        6          5        9          103         0           0           2        4          4        5
1b     67   67      0        1        3        5          5        14         67          0           0           3        3          4        4
1c     74   74      3        5        3        20         7        22         74          0           7           3        4          4        7
1d     47   47      0        0        3        5          5        7          47          0           0           1        2          3        3
1e     71   67      0        0        2        5          3        6          71          0           0           2        4          3        4
2      73   73      12       9        5        13         8        14         72          0           0           1        1          1        2
3      128  128     2        2        4        19         3        18         128         0           0           2        4          2        4
4      190  187     32       17       6        36         4        31         53          15          14          5        16         5        11

Nostril localization
was very good, except for sequence 4, where the nasolabial furrow has been detected instead of the nostrils. For this sequence the annotation of the nostril position is difficult even for a human.
5 Conclusions The work presented here is part of a larger project of building a vision based facial action recognition system for people with disabilities [17]. In our opinion the key issue is adaptation of the machine to the human rather than vice versa. A shift toward a human-centered interaction architecture, away from a machine-centered architecture, is also observed in the Human-Computer-Interaction (HCI) domain [13]. Some of the sources of variation in facial appearance (such as lighting conditions, viewing geometry and image noise) usually change while the user is working with the system. Therefore they have to be monitored constantly and their influence should be estimated. Other factors (gender, age, etc.) are constant until another person starts using the system. The calibration procedure performed during initialization can handle this case as well. Experimental results show that the proposed eye and nostril detection algorithm is efficient and can be used to calibrate a facial action recognition system. Blink detection can be performed as well. Using images taken by different cameras for the tests can help design algorithms which are more robust to various scenarios. To our knowledge, most blink detection algorithms [5] [11] [14] [19] are evaluated only on image sequences from one camera. Although the results are promising, there are several issues that should be addressed in future work. Because of the algorithm implementation, we could not perform tests on real-time video. Therefore, a real-time implementation is needed. Also, more tests (especially with different people) can help identify the algorithm's weaknesses.
Acknowledgement This work is supported by AGH University of Science and Technology, grant no. 10.10.120.783.
References
1. Augustyniak, P., Mikrut, Z.: Complete Scanpaths Analysis Toolbox. In: IEEE Engineering in Medicine and Biology Society, EMBS 2006, pp. 5137–5140 (2006)
2. Betke, M., Gips, J., Fleming, P.: The Camera Mouse: Visual Tracking of Body Features to Provide Computer Access for People with Severe Disabilities. IEEE Trans. on Neural Systems and Rehabilitation Eng. 10, 1–10 (2002)
3. Bobick, A.F., Davis, J.W.: Action Recognition Using Temporal Templates. In: MBR 1997 (1997)
4. Broniec, A.: Motion and motion intention representation in EEG signal and possibilities of application in man-machine interface. AGH University of Science and Technology, Master Thesis (2007)
5. Chau, M., Betke, M.: Real time eye tracking and blink detection with USB cameras. Boston University Computer Science (2005)
6. Evans, D.G., Drew, R., Blenkhorn, P.: Controlling mouse pointer position using an infrared head-operated joystick. IEEE Trans. on Rehabilitation Engineering 8, 107–117 (2000)
7. Fasel, B., Luettin, J.: Automatic Facial Expression Analysis: A Survey. Pattern Recognition 36, 259–275 (2003)
8. Gong, S., McKenna, S.J., Psarrou, A.: Dynamic Vision: From Images to Face Recognition. Imperial College Press (2000)
9. Hsu, R.-L., Abdel-Mottaleb, M., Jain, A.K.: Face Detection in Color Images. IEEE Trans. Pattern Anal. Mach. Intell. 24, 696–706 (2002)
10. Kapoor, A., Qi, Y., Picard, R.W.: Fully Automatic Upper Facial Action Recognition. AMFG, 195–202 (2003)
11. Lalonde, M., Byrns, D., Gagnon, L., Teasdale, N., Laurendeau, D.: Real-time eye blink detection with GPU-based SIFT tracking. CRV, 481–487 (2007)
12. Lindeberg, T.: Scale-space theory: A basic tool for analysing structures at different scales. J. of Applied Statistics, 224–270 (1994)
13. Lisetti, C.L., Schiano, D.J.: Automatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect. Pragmatics and Cognition 8(1), 185–235 (2002)
14. Lombardi, J., Betke, M.: A Self-initializing Eyebrow Tracker for Binary Switch Emulation (2002)
15. Ober, J., Hajda, J., Loska, J.: Applications of eye movement measuring system OBER2 to medicine and technology. In: SPIE's 11th Annual International Symposium on Aerospace/Defence Sensing, Simulation, and Controls, Infrared technology and applications, pp. 327–336 (1997)
16. Pregenzer, M., Pfurtscheller, G.: Frequency component selection for an EEG-based brain to computer interface. IEEE Transactions on Rehabilitation Engineering 7, 246–252 (1999)
17. Przybylo, J.: Automatic facial action recognition in face images and analysis of those within Human-Machine Interaction context. AGH University of Science and Technology, adviser: Piotr Augustyniak DSc EE (2008)
18. Zhou, Z.-H., Geng, X.: Projection functions for eye detection. Pattern Recognition 37, 1049–1056 (2004)
19. Zhu, Z.: Real-time eye detection and tracking under various light conditions, pp. 139–144. ACM Press, New York (2002)
Grade Differentiation Measure of Images

Maria Grzegorek

Institute of Computer Science, Polish Academy of Sciences, Ordona 21, 01-237 Warsaw
[email protected]
Summary. This paper describes an application of the newly developed grade differentiation measure between two datasets to image processing. The pixels of an image are transformed into a dataset of records describing the pixels. Each pixel is characterized by its gray level, gradient magnitude and a family of variables n1, ..., nk, where k has been arbitrarily chosen. The Grade Correspondence Cluster Analysis procedure implemented in the program GradeStat allows reordering the sequence of records and divides the pixels into similar groups/subimages. The GCCA procedure takes a significant amount of time in the case of large images. Comparison of the grade differentiation measures between variables allows decreasing the number of variables and thus decreasing the processing time.
1 Introduction The pixels of an image can form structures hidden in the image. Pixels grouped according to their similarity and the similarity of pixels belonging to their neighborhood should reveal some of these hidden structures. The way in which the image is transformed into a dataset is described in [4]. The pixels of a gray image are assigned to the rows of the data matrix. The columns of the data matrix hold the values of variables derived from the image. The gray level value of a pixel is obtained from the image immediately. The gradient magnitude value is calculated using the near neighborhood of the pixel. The next k variables n1, ..., nk result from comparisons of the gradient magnitudes in the neighborhood of the pixel with the family of thresholds t1, ..., tk. The data matrix is a common table of non-negative numbers. It is processed by the Grade Correspondence Cluster Analysis (GCCA) procedure. The grounds, methods and applications of GCCA are widely described in [2]. All algorithms are implemented in a clear and user-friendly form in the freeware program GradeStat [3]. GCCA belongs to a statistical methodology based on grade transformation of a multivariate dataset involving a copula, a certain continuous distribution on the unit square. The data matrix is properly transformed into a two-way probability table. The regularity of the table is measured by Spearman's rho (ρ∗). Another measure is
Kendall's tau (τ). The rows and columns of the table are reordered to make the table more regular and positive dependent, so that the chosen dependence measure becomes possibly maximal. The new succession of the sequence of pixels causes similar pixels to be located near each other, while those which differ are distant in the sequence. The clustering procedure divides the sequence of ordered pixels into a predefined number of subsets of neighboring, similar pixels. Pixels assigned to the same subset are visualized in a separate subimage. The results of an image decomposition and some properties of the data table visualized on stripcharts are presented in [6]. Another visualization tool of the program GradeStat - an overrepresentation map of the data table - is shown in [4]. The decomposition of the image and other applications of GradeStat are described in [7]. Different ways of constructing the variables are described in [8]. There is a question of how to choose the value of k. In [4] k was equal to 10. It turns out that k equal to 7 is good enough ([6], [7], [8]). In this paper a further decrease of k is discussed. The background of the comparison of two data tables transformed to copulas is sketched in the next Section, as well as the grade differentiation measure between two tables. Section 3 describes the performed experiments, in which variables are compared in pairs and the grade differentiation measures are determined. In the last Section conclusions are emphasized and future work is mentioned.
2 Grade Differentiation between Multivariate Datasets Each pixel of the image is connected with the values of a few variables. In previous papers the explored data matrices are composed of rows assigned to pixels and columns assigned to variables. The width of such a matrix is equal to the number of variables and its height is equal to the number of pixels, which is large enough. The developed application permits investigating matrices of variable values and makes comparison between these matrices possible. In [1] the authors propose grade measures between bivariate probability tables of the same size. A suitably transformed matrix is compared with a related binormal matrix. Permutation of the rows and columns of both matrices (separately) usually results in more regularly positive dependent tables (such a permutation is performed by GCA). The cumulative distribution functions of the respective variables form a set of concentration curves on the unit square. These curves form a resulting copula, whose grade measure of dependence (ρ∗ or τ) can evaluate the departure of the first matrix from the regular positive dependence attainable by the second matrix. A dependence measure between two datasets is used in [5] for cipher quality testing. The main question is whether it is possible to distinguish a zero-one sequence coming from the cipher from one coming from a pseudo-random generator. A predefined number of consecutive symbols from the first source fills one data matrix, and the same number of symbols from the generator forms the second data matrix. Then a copula is constructed representing the grade differentiation between the two bivariate distributions and a parameter ρ∗max is determined. The selection of a starting point in
the sequence from the cipher is repeated, and the authors compare 1000 pairs of matrices and 1000 parameter values. The results are presented in different ways, for instance charts of parameter values resulting from different ciphers and generators, or concentration curves of the cumulative distribution functions of the parameter from two different ciphers or from the cipher and the generator. In [9] the authors present the background and an example of the grade differentiation measure used to evaluate the differentiation between two data tables T1 and T2. The tables are matrices of non-negative multivariate data. Table Ti (i = 1, 2) is transformed into a bivariate probability table (the cells of the table are divided by the sum of the table). A copula Cop[Ti] is created from the probability table by subsequently transforming its marginal distributions to uniform. A copula is a continuous distribution on the unit square [0, 1]² with uniform marginal distributions. The unit square is divided by horizontal and vertical lines. The horizontal lines form horizontal stripes whose heights are proportional to the marginal probabilities of the rows, and the vertical lines form vertical stripes whose widths are proportional to the marginal probabilities of the columns. The rectangular cells of the probability table have constant values. The graph of the copula can be visualized by the program GradeStat as an overrepresentation map. The strength of dependence in the copula (as well as in the table) is measured by Kendall's tau or Spearman's rho. Reordering the rows and columns of the table allows obtaining the maximal possible value of the grade dependence measure τ = τmax. If the sums of the columns in both tables are equal, the widths of the respective column stripes in the copulas are equal. Pairs of columns (one from copula Cop[T1] and the respective column from copula Cop[T2]) are used to form concentration curves. The values of the cumulative distribution functions of a pair of columns constitute the coordinates of the concentration curve, which is distributed on [0, 1]. All pairs of columns form a concentration surface and a new resulting table. It has a uniform column marginal function, but the row marginal function has to be transformed to become uniform. The obtained copula Cop[T1 : T2] represents the grade differentiation between the two bivariate distributions resulting from tables T1 and T2; τ and ρ∗ measure the strength of dependence of the copula Cop[T1 : T2]. In case the sums of the columns are not equal, the concentration curves might be constructed with the aid of columns belonging to different variables. In spite of these difficulties, a solution of the problem is only a bit more complicated ([9]). A procedure approximating Cop[T1 : T2] is implemented in the program GradeStat. The comparison of multivariate datasets developed in [1], [2] and [9] meets the requirement of preserving spatial connections between neighboring pixels. The connections have been preserved indirectly, through the value of the gradient magnitude (defined in a 3×3 neighborhood) and the construction of the variables n1, ..., nk, which are defined on the 3×3 neighboring values of gradient magnitudes (resulting in a 5×5 influence neighborhood). However, it was not possible to directly adapt the neighborhood of pixels in the image domain to objects
in the dataset, because the GCCA procedure reorders the sequence of pixels. Similar pixels are put near each other on the basis of the grade similarity of the whole records assigned to the pixels. The grade differentiation measure between data tables offers an opportunity to profit from the pixel coordinates. The data table obtained from the image consists of 2 + k variables. The first, obvious variable is the gray level of the gray image. The second variable is the gradient magnitude obtained with a simple gradient operator (the Sobel operator). The next k variables n1, ..., nk are constructed with the use of a family of thresholds t1, ..., tk. The value of the variable ni for a pixel is equal to the number of neighboring pixels whose gradient magnitude differs from the pixel's magnitude by less than the value of the threshold ti. The threshold values are calculated on the basis of a parameter b established for the family. The threshold family may be derived from the rule ti = i · b · gmmax / 1000, where gmmax is the maximal value of the gradient magnitude in the image [8]. On the other hand, each of the variables forms a separate data matrix with the values of this variable. The sizes of the matrices are the same as the image. There are thus 2 + k data matrices: a gray level data matrix, a gradient magnitude data matrix and k data matrices of the variables family. There are (2 + k) · (2 + k − 1) pairs of data tables; for every pair of data tables a copula Cop[T1 : T2] is constructed and then its grade differentiation measure ρ∗ is calculated. In the applications of grade differentiation measures described in [5] and [9] the data tables are preliminarily reordered to attain the maximal ρ∗max value, and the preprocessed tables are compared to obtain the grade differentiation measure. In this specific application to image variables the data matrices should not be reordered, because the values in adjacent cells of the data table result from adjacent pixels of the image. Spatial constraints should be preserved.
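The construction of the per-pixel variables described above can be sketched in MATLAB as follows; the file name, the border handling and the parameter values are illustrative assumptions, not the authors' implementation:

  img = double(imread('input.png'));                    % gray level variable
  gx = conv2(img, [-1 0 1; -2 0 2; -1 0 1], 'same');    % Sobel derivatives
  gy = conv2(img, [-1 -2 -1; 0 0 0; 1 2 1], 'same');
  gm = sqrt(gx.^2 + gy.^2);                             % gradient magnitude variable
  k = 3;  b = 2;                                        % example values (b depends on the image)
  t = (1:k) * b * max(gm(:)) / 1000;                    % threshold family t_i = i*b*gm_max/1000
  n = zeros([size(img) k]);                             % neighborhood-count variables n_1..n_k
  for i = 1:k
    for dx = -1:1
      for dy = -1:1
        if dx ~= 0 || dy ~= 0
          shifted = circshift(gm, [dx dy]);             % simplified (circular) border handling
          n(:,:,i) = n(:,:,i) + (abs(shifted - gm) < t(i));
        end
      end
    end
  end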
3 Experiment Results Two test gray images are used to present the grade differentiation between pairs of data tables derived from an image: a fragment of an NMR image (Figure 1a) and a crossroad image (Figure 2a). The gray level image forms a data table filled with integer numbers from the range [0,255]. The second variable is the gradient magnitude, which results in a data table of real numbers from the range [0, gmmax], where gmmax is the maximal gradient magnitude in this image. Figures 1b and 2b show a visualization of the pixel gradient magnitudes in the images from Figures 1a and 2a. The next k data tables are derived from the family of variables ni. The values in these tables are integer numbers from the range [0,8]. A visualization of the variables family ni is demonstrated in Figures 1c-i for the first image and in Figures 2c-i for the second image. Grade differentiation measures are calculated for pairs of data tables resulting from the image. The left graphs in Figures 3 and 4 show the values ρ∗, connected with lines, for the gray level (label j) and the gradient magnitude (label m) (tables T1 in Cop[T1 : T2]) with respect to all variables. The labels of the successive variables, which are set as table T2, are put on the horizontal axes.
Fig. 1. a) NMR image, b) gradient magnitude, c-i) variables n1 − n7 , b = 0.2
Fig. 2. a) crossroad image, b) gradient magnitude, c-i) variables n1 − n7 , b = 2
Fig. 3. Grade differentiation measure values between gray level (j) and gradient magnitude (m) vs. all variables (left), grade differentiation measure values of variables n1 , ..., nk vs. all variables, b = 0.2, image from Figure 1a
Fig. 4. Grade differentiation measure values between gray level (j) and gradient magnitude (m) vs. all variables (left), grade differentiation measure values of variables n1 , ..., nk vs. all variables, b = 2, image from Figure 2a
The right graphs in those figures show the values ρ∗ for the variables n1, ..., nk (labels z1, ..., z7) vs. all variables. Of course the grade differentiation of a variable with respect to itself is equal to zero and the plot intersects the horizontal axis. The variables on the horizontal axes are ordered in the same succession that results from the variable reordering after the GCCA performed for the image decomposition (e.g. in [6]). The values ρ∗ derived from comparisons with the variables n3, ..., n7 are placed on or near straight lines.
Fig. 5. Mean processing time of image decomposition for 9 and 5 variables (on horizontal axis - number of image pixels, on vertical axis processing time in sec)
Fig. 6. Percents of pixels in each of 20 subimages (x axis) in 9 and 5 variables case, for image from Figure 1a, b = 0.2 (left) and for image from Figure 2a, b = 2 (right)
Fig. 7. Subimages no. 5 and 8. First and third column - 9 and 5 variables case, second and fourth column - the same subimages after segmentation and removing pixels in regions greater than 10 pixels, image from Figure 1a, b = 2
It seems possible to replace the group of variables n3, ..., n7 by one variable only, with a limited loss of information. The number of variables n1, ..., nk then decreases from k = 7 to k = 3 and the total number of variables used in the image decomposition decreases from 9 to 5. The processing times of the image decomposition are compared in Figure 5. On the horizontal axis the numbers of pixels of the processed images are given for a few images (14976 pixels in the image shown in Figure 1a, 74576 in Figure 2a and 654030 pixels in the entire NMR brain image). The processing time is shortened by 35 to even 43 percent. In Figure 6 the numbers of pixels belonging to the subsequent subimages are shown for the case of 9 and 5 variables used in the image decomposition. In Figure 7 subimages no. 5 and 8 are shown, with all pixels and with pixels in segments equal to or greater than 10 pixels, both for the 5 and 9 variables case.
4 Conclusions The performed investigation allows restricting the number of variables n1, ..., nk to k = 3 and the total number of variables used in the image decomposition
from 9 to 5 only. This results in a significant shortening of the processing time. Moreover, the distribution of pixels in the subimages is different. Pixels earlier accumulated in a few subimages are moved to a greater number of subimages, which may result in a greater number of regions revealed in the subimages. The influence of the smaller number of variables on the image decomposition will be investigated.
References
1. Szczesny, W., Kowalczyk, T.: On Regularity of Multivariate Datasets. In: Kłopotek, M., Wierzchoń, S.T., Michalewicz, M. (eds.) Intelligent Information Systems 2002, Proceedings of the IIS 2002, Sopot, Poland, June 3-6, 2002. Advances in Soft Computing, pp. 237–246. Physica-Verlag, Heidelberg (2002)
2. Kowalczyk, T., Pleszczyńska, E., Ruland, F. (eds.): Grade Models and Methods for Data Analysis, With Applications for the Analysis of Data Populations. Studies in Fuzziness and Soft Computing, vol. 151. Springer, Heidelberg (2004)
3. http://gradestat.ipipan.waw.pl/
4. Grzegorek, M.: Image Decomposition by Grade Analysis - an Illustration. In: Kurzyński, M., Puchała, E., Woźniak, M., Żołnierek, A. (eds.) Computer Recognition Systems. Advances in Soft Computing, pp. 387–394. Springer, Heidelberg (2005)
5. Szczesny, W., Kowalczyk, T., Srebrny, M., Such, P.: Testing ciphers quality as random numbers generators: results of experiments exploring grade statistical methods. Technical report, ICS PAN, Warsaw, Poland (2006) (in Polish)
6. Grzegorek, M.: Homogeneity of pixel's neighborhoods in gray level images investigated by the Grade Correspondence Analysis. In: Kurzyński, M., Puchała, E., Woźniak, M., Żołnierek, A. (eds.) Computer Recognition Systems 2. Advances in Soft Computing, pp. 76–83. Springer, Heidelberg (2007)
7. Pleszczyńska, E.: Application of grade methods to medical data: new examples. Biocybernetics and Biomedical Engineering 27(3), 77–93 (2007)
8. Grzegorek, M.: Variables Applied in a NMR Image Decomposition with the Aid of GCCA. J. of Medical Informatics and Technologies 12, 183–187 (2008)
9. Szczesny, W., Kowalczyk, T.: Grade differentiation between multivariate datasets (in preparation)
3D Mesh Approximation Using Vector Quantization

Michal Romaszewski and Przemyslaw Glomb

Institute of Theoretical and Applied Informatics of PAS, Baltycka 5, 44-100 Gliwice, Poland
Tel.: +48 32 231 73 19
{michal,przemg}@iitis.pl
Summary. We analyze the application of Vector Quantization (VQ) to the approximation of local 3D mesh fragments. Our objective is to investigate the distortion resulting from the representation of a mesh fragment with a set of symbols. We view this set of symbols as helpful for 3D object retrieval and comparison purposes; however, here we focus solely on the representation errors. We propose a mesh quantization scheme using the Linde-Buzo-Gray (LBG) algorithm with appropriate mesh preprocessing and investigate the impact of the codebook creation parameters on the quantizer distortion.
1 Introduction With the growing importance and usage of 3D graphics emerged the problem of storing and processing information about the geometry of an object in three-dimensional space. Finding similarly shaped fragments, identifying objects or searching for characteristic surface features is still a nontrivial task. Since 3D polygon meshes are widely used to store data about polyhedral objects, this paper concentrates on mesh processing operations. To minimise computation time it is beneficial to find a representation of a mesh fragment that maintains the characteristics of the original and is smaller and easier to analyse – these tasks were researched in many publications related to mesh compression, e.g. [3], [4], and to search and object recognition, e.g. [5], [6]. Operations on large sets of meshes can be optimised by creating and assigning a finite set of mesh approximations to each individual mesh fragment (so the information about the mesh is stored as a list of approximations, without the need to store all the vertices and edges). Examples of such cases are mesh comparison and search operations, when only the ability to identify the fragment is needed. In this paper we use the Linde-Buzo-Gray [1] Vector Quantization algorithm (based on the k-means clustering technique [2]) to find an approximation of mesh fragments. The goal is to research the distortion of 3D mesh fragment quantization and to decide if the approximation is good enough to be used in a 3D mesh searching algorithm. The second goal is to determine how
the quantization distortion depends on codebook parameters (codebook size, number of learning samples) for different vector sizes and to find first-guess values of these parameters. Quantization (of 3D data) is a popular technique, especially with the objective of mesh compression [3]; this results from the nature of the data (floating point representation of vertex coordinates). The application of Vector Quantization, which quantizes several coordinates or vertices at a time, is rarer; to the best of the authors' knowledge, it has not been investigated much. Some distant similarities can be seen in the method of [10] and in the work of [4] (which presents a summary of clustering techniques for 3D mesh compression; clustering has common elements with some VQ approaches). Chapter 2 (Method) presents the initial mesh processing and the LBG vector quantization algorithm. Mesh fragments are normalized and converted to 2D vectors to reduce processing complexity. Then LBG VQ is used to find approximations of mesh fragments. Chapter 3 (Experiment) describes the experimental setup: the VQ distortion estimation as well as the explanation of the experiment parameters. The results, i.e. the distortion estimates for multiple codebook and vector sizes, are presented in chapter 4 (Results). Chapter 5 (Conclusion) summarizes the results and gives ideas for future research.
2 Method

The idea of the method is to select mesh fragments as subsets of an original mesh and then use a vector quantization algorithm to find their approximations. We denote the set of vertices (in our case a subset of the original mesh vertices) by V and the dimension of the quantized vector by k ∈ N (we use the subset of values for which also √k ∈ N). Each vertex v ∈ R³. Our preprocessing function has the form f_p : V → R^k and the quantization function is f_vq : R^k → N. The first step is the selection of a mesh fragment (also called a 'sample'). Given the set of mesh vertices W and some initial vertex v₀ ∈ W, the fragment is created by v₀ and all vertices v reachable by walking the graph of mesh vertices and edges, with a Euclidean distance d(v₀, v) < r. The cut radius r is assumed to be large enough for the fragments to cover characteristic features of a mesh and to contain at least a minimal, fixed number of vertices |s| > ρ (based on previous experiments with meshes of different topologies it was assumed that ρ ∼ 200). Each sample V ⊂ W was subject to the preprocessing described below.

a) Translation and Rotation

The sample is translated to the point (0, 0, 0). Next, by finding a surface normal vector n̄ as the mean of the sample's edge cross products and rotating the sample so that n̄ ∥ ā, ā = [0, 0, 1], it is ensured that mesh fragments of the same shape will be similar regardless of their orientation in 3D space.
b) Scaling and Normalization
The sample vertices v = [v_x, v_y, v_z] are scaled so that they can be projected and normalized in the next step, to simplify processing and distortion evaluation. The scaling and normalization operation is written as

  v → v′,   v′ = diag(τ_s, τ_s, τ_s τ_n) · [v_x, v_y, v_z]^T,   (1)

where the coefficients for scaling τ_s and normalization τ_n are defined as

  τ_s = √k / max_{v∈V}(|v_x|, |v_y|),   τ_n = 1 / max_{v∈V}(|v_z|).   (2)

c) Interpolated Projection

The set of points V processed by the previous operations is now projected onto the Oxy plane. First the plane is divided into a regular grid of √k × √k bins (recall that √k ∈ N). Every v ∈ V is assigned to a bin based on its v_x and v_y values, and v_z becomes the value associated with the bin. If more than one vertex falls into a bin, the mean of their v_z is used; if a bin is empty, the value is interpolated by computing the arithmetic mean of the four nearest existing values. The values associated with the bins are ordered, row by row, to form a vector e ∈ R^k, which is the result of the preprocessing of V.
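A minimal NumPy sketch of this interpolated projection step is given below, assuming the vertices have already been translated, rotated and scaled as in a) and b); the function name and the nearest-neighbour interpolation details are illustrative, not taken from the paper.

import numpy as np

def project_to_grid(vertices, k):
    """Project preprocessed vertices onto a sqrt(k) x sqrt(k) grid (a sketch).

    vertices: (N, 3) array after translation, rotation and scaling, so that
              x, y roughly span [-sqrt(k), sqrt(k)] and |z| <= 1.
    Returns a flat vector of length k (bin values ordered row by row).
    """
    g = int(round(np.sqrt(k)))              # grid side, sqrt(k) assumed integer
    sums = np.zeros((g, g))
    counts = np.zeros((g, g))

    # accumulate z values of vertices falling into each bin (bin width is 2)
    for x, y, z in vertices:
        i = min(int((y + g) / 2.0), g - 1)  # map y in [-g, g] to row index
        j = min(int((x + g) / 2.0), g - 1)  # map x in [-g, g] to column index
        sums[i, j] += z
        counts[i, j] += 1

    grid = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

    # fill empty bins with the mean of the four nearest non-empty bins
    empty = np.argwhere(np.isnan(grid))
    filled = np.argwhere(~np.isnan(grid))
    for i, j in empty:
        d = np.abs(filled[:, 0] - i) + np.abs(filled[:, 1] - j)
        nearest = filled[np.argsort(d)[:4]]
        grid[i, j] = grid[nearest[:, 0], nearest[:, 1]].mean()

    return grid.flatten()                   # the vector e in R^k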
2.1 Vector Quantization
Vector quantization is a clustering technique commonly used in compression, image recognition and stream encoding [7], [8]. It is a general approach to map a space of vector-valued data to a finite set of distinct symbols, in a way that minimizes the distortion associated with this mapping. In this paper the implementation proposed by Linde, Buzo and Gray (LBG) [1] is used. A vector quantizer Q of dimension k maps k-dimensional vectors of the vector space R^k into a finite set of n numbers:

  Q_k : R^k → N ∩ [0, n).   (3)

Each resulting number i has an assigned vector z_i ∈ R^k, which is called a code vector or a codeword, and the set of all the codewords is called a codebook. The number of vectors n is called the codebook size. Associated with each codeword is a nearest-neighbourhood region called the Voronoi region

  V_i = { x ∈ R^k : ‖x − z_i‖ ≤ ‖x − z_j‖, ∀ j ≠ i }.   (4)
The set of Voronoi regions partitions the entire space R^k such that:

  ⋃_{i=0}^{n} V_i = R^k,   ⋂_{i=0}^{n} V_i = ∅.   (5)
In 3D mesh processing a Voronoi region groups the sample vectors that are closest to a given codebook vector (the region's centroid). A codebook (set of codewords) is prepared before the quantization by processing a number of learning samples. Its creation doesn't affect quantization time. A simplified codebook preparation algorithm can be written as follows:

  Begin
    Select n initial codevectors
    Do
      For each sample vector
        Find the nearest codevector
      Move codevectors to the centers of their nearest vector clusters
    Until codevector change < λ or maximum number of iterations reached
  End
Where λ is the desired precision.
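A minimal NumPy sketch of this codebook preparation loop and of the quantization step is shown below. The random initialisation, the convergence test on the total codevector movement and all names are illustrative choices, not the authors' implementation.

import numpy as np

def train_codebook(samples, n, lam=0.005, max_iter=20, seed=0):
    """LBG-style codebook training (a k-means sketch of the pseudocode above).

    samples: (m, k) array of preprocessed sample vectors.
    n: codebook size; lam: stopping precision; max_iter: iteration limit.
    """
    rng = np.random.default_rng(seed)
    codebook = samples[rng.choice(len(samples), n, replace=False)].copy()

    for _ in range(max_iter):
        # assign every sample to its nearest codevector (its Voronoi region)
        dists = np.linalg.norm(samples[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)

        # move each codevector to the centroid of its region
        new_codebook = codebook.copy()
        for i in range(n):
            members = samples[labels == i]
            if len(members) > 0:
                new_codebook[i] = members.mean(axis=0)

        if np.linalg.norm(new_codebook - codebook) < lam:   # converged
            codebook = new_codebook
            break
        codebook = new_codebook
    return codebook

def quantize(v, codebook):
    """Map a vector to the index of its nearest codeword and the distortion (6)."""
    d = np.linalg.norm(codebook - v, axis=1)
    return int(d.argmin()), float(d.min())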
3 Experiments

Two meshes used for the experiments, "Sabines" and "Bacchus", were scans of historical objects taken with a Minolta VI-9j scanner, each containing ∼50 000 vertices. Following the methodology outlined in [9], two independent sample sets were prepared from each of the meshes: a training set (for quantizer learning) and a testing set. The latter was used for distortion estimation. The quantizer codebook was created using the training set. Initial codevectors were selected randomly and then optimised until the difference between their respective Voronoi regions' centers in consecutive iterations was lower than the assumed precision λ, or until the number of iterations exceeded the maximum value. The precision and the maximum number of iterations (20) were based on empirical investigation from previous quantization experiments; it was assumed that λ = 0.005. The second set was used to estimate the quantization distortion. For every sample vector, the codevector with the lowest distance to it was found (see section 2.1). The Euclidean distance between an input vector v = [v_1, v_2, ..., v_k] and a codevector z = [z_1, z_2, ..., z_k] was used as the distortion measure

  d(v, z) = √( Σ_{i=1}^{k} (v_i − z_i)² ).   (6)
Fig. 1. ”Bacchus” and ”Sabines” - meshes used in experiments
The range of values after preprocessing is constrained, so the maximum distortion is ε_max = 2√k, where k is the dimension of an input vector. The distortion is presented as a percentage of the maximum distortion for a mesh fragment:

  ε_final = (ε / ε_max) × 100.   (7)
To test how the quantizer distortion depends on the codebook creation parameters, in each of the experiments the codebook size n was proportional to the sample set size m (ranging from 0.02m to 0.9m). We researched cases for vector sizes k = {100, 144, 196, 256, 324, 400, 484}. Each experiment was repeated 5 times and the presented result was obtained by averaging the distortion values.
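For illustration, a small helper building on the quantizer sketch above (names are illustrative) could estimate the average relative distortion of eq. (7) over a test set:

import numpy as np

def mean_distortion_percent(test_vectors, codebook):
    """Average relative distortion over a test set, in percent (eq. (7))."""
    k = test_vectors.shape[1]
    eps_max = 2.0 * np.sqrt(k)                              # maximum possible distortion
    d = [quantize(v, codebook)[1] for v in test_vectors]    # quantize() from the sketch above
    return 100.0 * np.mean(d) / eps_max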
4 Results

In this chapter we present the results of the 3D mesh vector quantization experiments. Samples cut from the meshes were converted to vectors of different sizes (from 64 to 484). Multiple results were obtained for the codebook size n varying from 0.03m to 0.3m. For every experiment the codebook creation set and the testing set were similar in size. The results were different for large and small vectors. Figure 2 shows the small vector case (k = 100). The distortion drops with the extension of the codebook (increase of n) as well as with the extension of the sample set (increase of m). It can be observed that enlarging the sample set results in a lower distortion for a proportional codebook size. Training sets with a small k/m ratio (k/m < 0.05) have a marginally steadier distortion drop rate when extending the codebook. When the number of codebook vectors reaches ∼0.1m, the performance gain drops rapidly. The existence of a major change in the distortion value when δ_out/δ_t ∼ 0.1, where δ_out is the size of an output set and δ_t represents the size of a training set, was predicted in [9]. It is possible for the distortion value to rise in cases with a large n/m ratio. It can be observed for small and moderately sized vectors with n/m ∼ 0.3 and with a reduced sample set size m.
Fig. 2. Distortion results for k = 100, n ∼ 0.02–0.3m

Figure 3 shows a case where a large n/m ratio results in a distortion gain. Minimal distortion can be observed for k/m ∼ 0.2 and n/m ∼ 0.15–0.2. When the sample set is not large enough, a rising number of codewords n results in a small number of sample vectors in every Voronoi region of the codebook vectors. This results in an inappropriate approximation of a sample (the sample size is too large). Figure 4 shows the results for a large vector (k = 484). The distortion value increases with the size k of the codevectors (and sample vectors).
Fig. 3. Distortion results for k = 144, n ∼ 0.15–0.2m
Fig. 4. Distortion results for k = 484, n ∼ 0.02–0.3m

A larger training set (k/m < 0.05) results in a significantly steadier distortion drop associated with codebook extension for n/m > 0.1, as well as lower distortion values than in the small sample size cases. For k/m ∼ 0.1 and more, no significant distortion drop related to codebook extension was observed, so no results for these cases are presented.
5 Conclusion

The presented results show that Vector Quantization is a promising approach for mesh approximation and representation. The proposed method of pre-quantization mesh processing simplifies the operation and reduces computation time. Extending the codebook size results in a linear rise of quantization complexity, but the sample set size is only important in the initial, learning phase and doesn't affect the time of quantization. For finding an optimal quantizer it may be beneficial to initially create a large training set (k/m < 0.05) and a codebook of size n = m/10. Such a codebook can be used as a first guess and for comparison with further results.
Acknowledgments Historical objects scanned and used in this article were made available by Museum of Gliwice. Quantization experiments were done with the High Performance Supercomputer Cluster at IITiS. This work was supported by the Ministry of Science and Higher Education of Polish Government, the research projects NN516186233 ‘Visual techniques for the multimodal hierarchic 3D representations of cultural heritage artifacts’ and 3T11C04530 ‘Application of contourlet transform for coding and transmission stereo-images’.
References

1. Linde, Y., Buzo, A., Gray, R.M.: An Algorithm for Vector Quantizer Design. IEEE Trans. Commun. COM-28, 84–95 (1980)
2. Hartigan, J.A., Wong, M.A.: A K-Means Clustering Algorithm. Applied Statistics 28, 100–108 (1979)
3. Alliez, P., Gotsman, C.: Recent Advances in Compression of 3D Meshes. In: Proceedings of the Symposium on Multiresolution in Geometric Modeling, pp. 3–26. Springer, Heidelberg (2003)
4. Gotsman, C., Gumhold, S., Kobbelt, L.: Simplification and Compression of 3D Meshes. In: Proceedings of the European Summer School on Principles of Multiresolution in Geometric Modelling, pp. 319–361. Springer, Heidelberg (2002)
5. Bustos, B., Keim, D.A., Saupe, D., Schreck, T., Vranić, D.V.: Feature-Based Similarity Search in 3D Object Databases. ACM Computing Surveys 37(4), 345–387 (2005)
6. Osada, R., Funkhouser, T., Chazelle, B., Dobkin, D.: Matching 3D Models with Shape Distributions. Shape Modeling International, 154 (2001)
7. Yu, P., Venetsanopoulos, A.N.: Improved Hierarchical Vector Quantization for Image Compression. In: Proceedings of the 36th Midwest Symposium on Circuits and Systems, Detroit, MI, USA, vol. 1, pp. 92–95 (1993)
8. Makur, A.: Vector Quantization of Images Using Input-Dependent Weighted Square Error Distortion. IEEE Transactions on Circuits and Systems for Video Technology 3(6), 435–439 (1993)
9. Jain, A.K., Duin, R.P.W., Mao, J.: Statistical Pattern Recognition: A Review. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(1), 4–37 (2000)
10. Lee, E., Ko, H.: Vertex Data Compression For Triangular Meshes. In: Proceedings of Pacific Graphics, pp. 225–234 (2000)
Vehicles Recognition Using Fuzzy Descriptors of Image Segments

Bartlomiej Placzek

Faculty of Transport, Silesian University of Technology, ul. Krasinskiego 8, 40-019 Katowice, Poland
[email protected]
Summary. In this paper a vision-based vehicle recognition method is presented. The proposed method uses a fuzzy description of image segments for automatic recognition of vehicles recorded in image data. The description takes into account selected geometrical properties and shape coefficients determined for segments of a reference image (vehicle model). The proposed method was implemented using a reasoning system with fuzzy rules. A vehicle recognition algorithm was developed based on fuzzy rules describing the shape and arrangement of the image segments that correspond to visible parts of a vehicle. An extension of the algorithm with sets of fuzzy rules defined for different reference images (and various vehicle shapes) enables vehicle classification in traffic scenes. The devised method is suitable for application in video sensors for road traffic control and surveillance systems.
1 Introduction

Vehicle recognition is an important task for vision-based sensors. Accomplishment of this task enables advanced traffic control algorithms to be applied. E.g. recognition of public transport vehicles (buses, trams) allows priority to be introduced in traffic signal control. Recognition of personal cars, vans, trucks etc. enables the determination of an optimal assignment of green time for a particular crossroad approach. It is obvious that traffic control algorithms using additional information on vehicle classes provide lower delays and a higher fluidity level of the traffic in comparison to less sophisticated control strategies. For real-world traffic scenes vehicle recognition is a complex problem. Difficulties are encountered especially for crossroads, where queues are formed and vehicles under scrutiny are occluded by dynamic or stationary scene components. Vehicle recognition systems also have to cope with the influence of object shadows and ambient light changes. In this paper a method is introduced that allows for vehicle recognition on the basis of simple image segment analysis. Matching of segments detected in the image with model segments is performed using a fuzzy reasoning system. In the proposed method particular image segments are analysed that correspond to visible parts of a vehicle. This approach enables recognition of
partially occluded vehicles. The applied method of segment description includes several shape parameters and enables a considerable reduction of the amount of data processed by the reasoning system. The rest of this paper is organized as follows. Section 2 includes a brief survey of vision-based vehicle recognition methods and presents the general design of the proposed method. Section 3 includes definitions of segment parameters and the merging operation. Section 4 describes the fuzzy rules designed for vehicle recognition. In section 5 implementation details of the fuzzy reasoning system are discussed. Section 6 deals with pre-processing of the input data, including segment extraction and merging as well as the application of vehicle models. Experiment results are shown in section 7 and finally, conclusions are drawn.
2 Related Works and Proposed Method

Several vision-based vehicle recognition methods have been developed so far. All of these methods use various forms of vehicle shape models. Some of the models describe the shape of the entire vehicle body (3-D models); other models consider selected shape properties or the arrangement of local features in the vehicle image. Usually operations of vehicle detection and tracking are preliminary steps in a vehicle recognition algorithm. In [5] a method is introduced that uses only vehicle dimensions to classify vehicles into two categories: cars and non-cars (vans, pickups, trucks, buses, etc.). Vehicle models are defined as rectangular image regions with certain dynamic behavior. Correspondences between regions and vehicles are recognized on the basis of tracking results. A vehicle recognition approach that uses parameterised 3-D models is described in [4]. The system uses a generic polyhedral model based on typical shape to classify vehicles in a traffic video sequence. Using 3-D models, partially occluded vehicles can be correctly detected [6]. However, algorithms of this type have higher computational complexity. Feature-based vehicle recognition algorithms [7] include edge detection techniques, corner detection, texture analysis, Haar-like features, etc. On the basis of these detection tools, vehicle recognition methods have been developed [10]. In [8] a vehicle classifier is proposed based on local-feature configuration. It was applied to distinguish four classes: sedan, wagon, mini-van and hatchback. Another approach combines elements of 3-D models with feature-based methods. Local 3D curve-groups (probes) are used, which when projected into video frames are features for recognizing vehicles in a video sequence [3]. This classifier was applied to three classes of vehicles: sedans, minivans and SUVs. The method introduced in this paper was designed to recognise vehicles and categorize them into classes that are relevant from the point of view of the traffic control objectives. Taking into account the impact on crossroad capacity
Fig. 1. Scheme of the vehicles recognition method
and priority level, five fundamental vehicle classes have to be distinguished: personal car, van, truck, bus and tractor-trailer. In the proposed approach vehicle recognition is performed by a reasoning procedure using fuzzy rules [9]. The rules describe the shape of a vehicle model and allow for evaluation of the level of similarity between image objects and the assumed model. The scheme of the method is presented in fig. 1. The recognition is based on matching the description of the model segments with segments extracted from an input image. The segment description takes into account its geometrical properties, shape coefficients and location. This particular data format was applied to both model segments (reference image) as well as segments extracted from the input image. The processing procedure of the input image includes segmentation and merging of the extracted segments (see sec. 6). This method does not require tracking data to recognise a vehicle. The complexity reduction of the recognition algorithm is achieved due to processing of a simplified image description. Fuzzy rules are induced on the basis of segment properties determined for the vehicle model. The rule induction process consists in fuzzification of the parameters describing the model segments MS. Antecedents of the rules take into account the description of segments MS; consequents indicate the degree of similarity between the input image and the model. The segment merging, rule induction and reasoning procedures are performed using the devised representation method of image segments (segment description based on selected geometric parameters). Operations are not executed directly on image data, which is crucial in reducing the computational complexity of the recognition algorithm. These procedures will be described in detail in the following sections.
3 Segments Description and Merging Operation

The introduced shape description method for objects registered in an image is based on selected geometrical parameters of the image segments. The description method applied to segments of the reference image (model) as well as to extracted segments of the input image remains the same.
The description of the i-th image segment is defined by the formula:

  S_i = (A_i, x_i, y_i, x_i^{min}, x_i^{max}, y_i^{min}, y_i^{max}),   (1)

where: A_i - area of the segment, c_i = (x_i, y_i) - centre of mass determined for the segment, w(S_i) = x_i^{max} − x_i^{min} - width of the segment, h(S_i) = y_i^{max} − y_i^{min} - height of the segment. Computations of the segment parameters defined above are performed in the coordinate system x-y (fig. 2), whose orientation is determined by the model orientation in the camera coordinate system. The principles of the segment merging operation are illustrated in fig. 2. Parameters of the merged segments are depicted in fig. 2a); fig. 2b) presents the result of the segment merging. Formally, the merging operation is denoted by the relation:

  S_k = S_i M S_j,   (2)

which means that segment S_k is created by merging S_i with S_j, and the parameters of its description are computed according to the following formulas:

  A_k = A_i + A_j,
  x_k = (A_i x_i + A_j x_j) / (A_i + A_j),   y_k = (A_i y_i + A_j y_j) / (A_i + A_j),
  x_k^{min} = min(x_i^{min}, x_j^{min}),   y_k^{min} = min(y_i^{min}, y_j^{min}),
  x_k^{max} = max(x_i^{max}, x_j^{max}),   y_k^{max} = max(y_i^{max}, y_j^{max}).   (3)
The merging operation has the following properties, which are important for its implementation:

  S_i M S_j = S_j M S_i,   (S_i M S_j) M S_k = S_i M (S_j M S_k).   (4)

The merging operation is used for pre-processing of the segments extracted from the input image. It is necessary for correct vehicle recognition by the fuzzy rules that are introduced in the following section.
Fig. 2. Segments merging operation
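A short sketch of this descriptor-level merging is given below; keeping the descriptors as plain dictionaries is an illustrative choice, not the author's data structure.

def merge(si, sj):
    """Merge two segment descriptors (A, x, y, xmin, xmax, ymin, ymax), eq. (3)."""
    a = si["A"] + sj["A"]
    return {
        "A": a,
        "x": (si["A"] * si["x"] + sj["A"] * sj["x"]) / a,   # area-weighted centre of mass
        "y": (si["A"] * si["y"] + sj["A"] * sj["y"]) / a,
        "xmin": min(si["xmin"], sj["xmin"]),
        "xmax": max(si["xmax"], sj["xmax"]),
        "ymin": min(si["ymin"], sj["ymin"]),
        "ymax": max(si["ymax"], sj["ymax"]),
    }

Thanks to the commutativity and associativity stated in (4), a whole set of segments can be merged in any order, e.g. with functools.reduce(merge, segments).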
4 Fuzzy Rules for Vehicles Recognition

Fuzzy rules are produced for the recognition procedure on the basis of segments defined in the reference image (model). As the method was intended to categorize vehicles into classes, a separate model and rules are needed for each class. The set MS^c includes descriptions of all segments determined for the vehicle model of class c: MS^c = {S_j^c}, j = 1...n(c). Segments extracted from the input image are described by elements of the set ES: ES = {S_i}, i = 1...m. For every vehicle class a set of classification rules is created, including three types of rules connected with different aspects of the segment layout: shape rules, placement rules and arrangement rules. Shape rules (5) describe the shape of segments defined for the vehicle model of class c using shape coefficients. They allow for determining the similarity of segments from MS^c and ES:

  if A_i is Ã_j^c and q(S_i) is q̃(S_j^c) then p(S_i) is j,   (5)

where: i = 1...m, j = 1...n(c) and q(S_i) = w(S_i)/h(S_i) is the shape coefficient. Placement rules (6) describe the mutual placement of the segments. Using these rules, each pair of segments from ES is processed. It is performed by comparing the relative location of their mass centres (defined by the [dx, dy] vector) with the mass centre arrangement of the model segments (j_1, j_2):

  if dx(S_{i1}, S_{i2}) is d̃x(S_{j1}^c, S_{j2}^c) and dy(S_{i1}, S_{i2}) is d̃y(S_{j1}^c, S_{j2}^c) and p(S_{i2}) is j_2 then d(S_{i1}, S_{j2}^c) is (j_1, j_2),   (6)

where: i_1, i_2 = 1...m, i_1 ≠ i_2, j_1 = 1, j_2 = 2...n(c), dx(S_i, S_j) = x_i − x_j, dy(S_i, S_j) = y_i − y_j. The spatial arrangement rules (7) take into account the placement of all model segments. Both the segment shape similarity and the segment locations are checked. It is performed using the results of the previous reasoning stages that exploit rules (5) and (6):

  if p(S_i) is 1 and d(S_i, S_2^c) is (1, 2) and d(S_i, S_3^c) is (1, 3) and ... and d(S_i, S_{n(c)}^c) is (1, n(c)) then class is c,   (7)

where: i = 1...m. The tilde symbol is used in the rule definitions (5) and (6) to indicate fuzzy sets having trapezoidal membership functions. The membership functions are determined for the specific values of the model segment parameters.
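The paper does not specify the corner points of these trapezoids, so the sketch below only illustrates how a shape rule (5) can be evaluated with trapezoidal sets built around the model values; the spread parameters and dictionary keys are assumptions.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def around(value, spread):
    """Corner points of a fuzzy set 'approximately value' (illustrative width)."""
    return (value - spread, value - spread / 2, value + spread / 2, value + spread)

def shape_rule(seg, model_seg, area_spread, q_spread):
    """Memberships used by shape rule (5) for one extracted and one model segment."""
    q = (seg["xmax"] - seg["xmin"]) / (seg["ymax"] - seg["ymin"])            # q(S_i)
    q_model = (model_seg["xmax"] - model_seg["xmin"]) / (model_seg["ymax"] - model_seg["ymin"])
    mu_area = trapezoid(seg["A"], *around(model_seg["A"], area_spread))
    mu_q = trapezoid(q, *around(q_model, q_spread))
    return mu_area, mu_q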
5 Reasoning System Implementation

A registered vehicle is recognised using a fuzzy reasoning system with the base of rules defined in the previous section. The input data of the reasoning system is fuzzified into type-0 fuzzy sets (singleton fuzzification). The system
uses the Mamdani reasoning method and an averaging operator (arithmetic mean) for fuzzy set aggregation [1]. Due to these assumptions, the membership functions for the consequents of rules (5)-(7) are computed according to formulas (8)-(10). In the first stage of the reasoning procedure a membership function is computed for each segment S_i that determines its similarity to particular segments S_j^c of the model:

  μ_{p(S_i)}(j) = (1/2) [ μ_{Ã_j^c}(A_i) + μ_{q̃(S_j^c)}(q(S_i)) ],   i = 1...m, j = 1...n(c).   (8)

In the second stage another membership function is evaluated to check if the segment arrangement in the image is consistent with the model definition:

  μ_{d(S_{i1}, S_{j2}^c)}(j_1, j_2) = max_{i_2} (1/3) [ μ_{d̃x(S_{j1}^c, S_{j2}^c)}(dx(S_{i1}, S_{i2})) + μ_{d̃y(S_{j1}^c, S_{j2}^c)}(dy(S_{i1}, S_{i2})) + μ_{p(S_{i2})}(j_2) ],   (9)

where: i_1 = 1...m, i_2 = 1...m, i_1 ≠ i_2, j_1 = 1, j_2 = 2...n(c). At last, the third membership function determines the similarity level between the object recorded in the input image and the vehicle shape model of class c:

  μ_{class}(c) = max_i { 1/(n(c)+1) [ μ_{p(S_i)}(1) + Σ_j μ_{d(S_i, S_j^c)}(1, j) ] },   i = 1...m.   (10)

The class number of the recognised vehicle is the product of the defuzzification of function (10). It is determined by the maximal membership method.
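A compact sketch of the reasoning stages (9) and (10) is given below. The table mu_p_tab is assumed to hold the results of eq. (8), and model_offsets is assumed to map each model segment index j to a pair of trapezoidal membership callables built from the model mass-centre offsets; all names and the data layout are illustrative.

def class_membership(segments, mu_p_tab, model_offsets, n_c):
    """Evaluate eqs. (9)-(10) for one vehicle class c (a sketch).

    segments      - extracted segment descriptors (dicts with 'x' and 'y')
    mu_p_tab      - eq. (8) results, mu_p_tab[i][j] for j = 1..n_c (index 0 unused)
    model_offsets - model_offsets[j] = (mu_dx, mu_dy) callables, for j = 2..n_c
    """
    m = len(segments)
    best_class = 0.0
    for i in range(m):                               # candidate for model segment 1
        acc = mu_p_tab[i][1]
        for j in range(2, n_c + 1):
            mu_dx, mu_dy = model_offsets[j]
            best_pair = 0.0
            for i2 in range(m):                      # eq. (9): maximum over i2
                if i2 == i:
                    continue
                dx = segments[i]["x"] - segments[i2]["x"]
                dy = segments[i]["y"] - segments[i2]["y"]
                val = (mu_dx(dx) + mu_dy(dy) + mu_p_tab[i2][j]) / 3.0
                best_pair = max(best_pair, val)
            acc += best_pair
        best_class = max(best_class, acc / (n_c + 1))   # eq. (10)
    return best_class

The recognised class is then the one with the largest class_membership value, which corresponds to the maximal-membership defuzzification mentioned above.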
6 Processing of Input Data

The input data of the vehicle recognition algorithm (fig. 1) consists of descriptors computed both for segments ES detected in the input image and for segments MS selected on the basis of the vehicle model. Segments ES in the input image can be extracted using various segmentation methods. In the presented approach background subtraction, edge detection and area filling algorithms were applied for this task [2]. Thus, segments ES correspond to regions of the input image that are bounded by edges and do not belong to the background. The processing procedure of the input image includes merging of the extracted segments ES into segments ES̄ that match better with the shape of the model MS. The segment merging is motivated by the assumed fidelity level of the model, which does not take into account minor parts of a vehicle like headlights or the number plate. The set ES of extracted segments is transformed into ES̄. This
operation uses a threshold value τ of similarity between the merged segments S̄_i and the model segments S_j:

  ES̄ = { S̄_i = M(PS_i) : ∃ S_j ∈ MS : μ_{p(S̄_i)}(j) ≥ τ },   (11)

where PS_i = {S_ik}, k = 1...z, is a set of extracted segments, PS_i ⊂ ES, and M(PS_i) denotes the merging operation on the set PS_i: M(PS_i) = S_i1 M S_i2 M ... M S_iz. The rule induction procedure requires the determination of the parameters of the reference image (model) segments. A reference image description (the set MS) is generated on the basis of a three-dimensional (3-D) vehicle shape model that describes the arrangement of the faces of a vehicle body [2]. This model is transformed into a 2-D model for defined locations of the vehicle and the camera. Thus, the 2-D model establishes a reference vehicle image of a given class that is located in a specific place within the scene. The faces of the 3-D model, projected onto the image plane, define the segments MS of the model. The vehicle model has to be defined for each vehicle class that is taken into consideration by the recognition procedure.
7 Experimental Results

The proposed vehicle recognition method was tested using a number of images selected from video sequences of traffic scenes, in which vehicles of different classes had been recorded. Furthermore, synthetic traffic images were utilised for vehicle queue modeling in the performance analysis of the method. The experiments on vehicle class recognition were performed using vehicle models for five classes (personal car, van, truck, bus and tractor-trailer). Segments determined in the vehicle models were matched with those extracted from input images. This task was executed by applying the fuzzy reasoning system described in sec. 5. Examples of the experiment results are presented in fig. 3. It includes input images and vehicle models marked with white lines. The displayed models correspond to the recognised class of a vehicle, identified by the maximal value of the function μ_class (eq. 10). For the particular cases depicted in fig. 3 the recognition results were obtained as follows: a) personal car 0.44; b) personal car 0.51; c) personal car 0.38; d) bus 0.37; e) truck 0.31, personal car 0.41; f) tractor-trailer 0.35. The above numbers denote the maximum of μ_class for each example.
Fig. 3. Examples of recognised vehicles
The overall result of the experiments indicates that for nearly 90% of the test images the proposed vehicle recognition method provides correct classification results. However, the results strongly depend on the correctness of the segment extraction in the input images.
8 Conclusion

A fuzzy description method of image segments was introduced. The method was implemented for automatic recognition of vehicles recorded in image data. This implementation was based on a fuzzy reasoning system with rules describing the properties of image segments. Experimental results confirmed that the proposed method is effective for vehicle recognition. It was demonstrated that the system can categorize vehicles into five classes. It should be noticed that the low complexity of the proposed image description makes the method suitable for application in video sensors for road traffic control and surveillance systems.
References

1. Bezdek, J., Keller, J., Krisnapuram, R., Pal, N.: Fuzzy Models and Algorithms for Pattern Recognition and Image Processing. Springer, New York (2005)
2. Placzek, B., Staniek, M.: Model Based Vehicle Extraction and Tracking for Road Traffic Control. In: Kurzynski, M., et al. (eds.) Advances in Soft Computing, Computer Recognition Systems, vol. 2. Springer, Heidelberg (2007)
3. Dongjin, H., Leotta, M., Cooper, D., Mundy, J.: Vehicle class recognition from video-based on 3D curve probes. In: 2nd Joint IEEE Int. Workshop on Visual Surveillance, pp. 285–292 (2005)
4. Ferryman, J., Worrall, A., Sullivan, G., Baker, K.: A Generic Deformable Model for Vehicle Recognition. In: BMVC, Birmingham, vol. 1, pp. 127–136 (1995)
5. Gupte, S., Masoud, O., Martin, R., Papanikolopoulos, N.: Detection and Classification of Vehicles. IEEE Trans. on ITS 3(1), 37–47 (2002)
6. Haag, M., Nagel, H.: Tracking of complex driving manoeuvres in traffic image sequences. Image and Vision Computing 16, 517–527 (1998)
7. Hongliang, B., Jianping, W., Changpin, L.: Motion and haar-like features based vehicle detection. In: 12th Int. IEEE Conf. Multi-Media Modelling, pp. 356–359 (2006)
8. Mohottala, S., Kagesawa, M., Ikeuchi, K.: Vehicle Class Recognition Using 3D CG. In: Proc. of 2003 ITS World Congress (2003)
9. Placzek, B., Staniek, M.: Fuzzy rules for model based vehicles classification. In: Central European Conf. on Inf. and Int. Systems, Varazdin, pp. 493–500 (2008)
10. Xiaoxu, M., Grimson, W.: Edge-based rich representation for vehicle classification. In: Tenth IEEE Int. Conf. on Computer Vision, vol. 2, pp. 1185–1192 (2005)
Cluster Analysis in Application to Quantitative Inspection of 3D Vascular Tree Images

Artur Klepaczko, Marek Kocinski, and Andrzej Materka

Technical University of Lodz, ul. Wolczanska 211/215, 90-924 Lodz
{aklepaczko, kocinski, materka}@p.lodz.pl
Summary. This paper provides — through the use of cluster analysis — objective confirmation of the relevance of texture description applied to vascular tree images. Moreover, it is shown that unsupervised selection of significant texture parameters in the datasets corresponding to noisy images becomes feasible if the search for relevant attributes is guided by the clustering stability–based optimization criterion.
1 Introduction

Medical diagnosis of the vascular system largely exploits image information obtained with, e.g., computed tomography, magnetic resonance (MR) tomography or confocal microscopy (CM). Abnormal deformations in the geometrical architecture of blood vessels may denote certain pathologies and thus the need for clinical or even surgical treatment. With the development of computational intelligence methods, image-based diagnostic procedures can overcome the restrictions of qualitative image analysis. It follows that a method for automatic recognition of the pathology type or its localization would provide significant support to a diagnostician. In a general setting, the vascular system can be viewed at three levels, depending on the vessel diameter. The width of the largest, first-order vessels reaches almost 3 cm. Diameters of medium-sized veins and arteries are equal to approx. 20–50 mm. At the bottom-most level there remain venules, arterioles and capillaries, whose size does not exceed 9 μm [2]. At each mentioned level, vascularity images involve specific processing techniques. The diverse approaches result from the relation between vessel size and image resolution (see Fig. 1). In the case of major veins and arteries, whose width surpasses the image raster size a few times, it is possible to perform vessel tracking. This in turn leads to obtaining geometrical (volume, shape, etc.) information about the vasculature. When medium vessel diameters (comparable to the raster size) are under consideration, texture analysis must be involved. Describing a vascular tree in terms of texture parameters allows its classification with respect to the number of branches, blood viscosity or input and output flow volume [13]. MR imaging of the smallest vessels allows quantitative analysis, e.g. through a perfusion study. This can be performed in a Dynamic Susceptibility Contrast examination [6].
Fig. 1. Relation between image resolution and vessel size (panels: vessel diameter exceeds raster size; medium vessel comparable to raster; raster size greater than vessel diameter)
This research undertakes the issue of texture analysis applied to medium-size vessel images. It has been shown that there exists a direct relationship between texture features and certain vascularity classes [13]. This observation holds for any of the above mentioned classification criteria (number of branches, blood viscosity, flow volume). However, the experimental results presented so far in the literature were derived with the use of supervised classification and feature selection techniques. It can be argued that such an approach can artificially bias the constructed classification rule toward predetermined assumptions inherent in a training data set. Thus, the experimental findings grounded on supervised learning methods should be further validated in an unsupervised manner. Since any classification task requires a statistically representative data sample, the number of instances per class must be accordingly large. Due to the insufficient amount of real MR or CM images containing vessel trees available to the authors, in the experimental part of this study synthetic tree images are investigated instead of real ones.
2 Vascular Tree Generation

As stated before, in this paper synthetic rather than real vascularity images are examined. This results from the difficulty in obtaining such images from MRI data and from the rareness of clinical inspection using a confocal microscope [10] or any other 3D imaging modality. However, it can be assumed that a classification strategy tested on synthesized vessel trees will also apply to real ones. This approach motivated other studies reported in the literature, e.g. [3,4,5]. The tree generation scheme employed in this research utilizes the algorithm proposed in [4]. Its detailed presentation would exceed the scope of this paper and thus only its main concepts are recalled below.

1. A vessel is represented by a tube of fixed radius along its entire length.
2. Tree growth is parameterized by the following input variables:
   • number of output branches,
   • blood viscosity,
   • input and output blood flow,
   • input and output blood pressure,
   • perfusion volume.
   By varying one of the above arguments with the others kept constant it is possible to obtain a series of vascularity classes differing with respect to the chosen parameter.
3. In a single step of the tree construction one new branch is added. This action starts with a random selection of a new vessel endpoint (within a predefined organ shape; in this study a sphere was used). Then the algorithm searches for its nearest branch. At the center point of that branch a bifurcation is made. The parameters of all branches are adjusted so that the following rules hold [5]:
   • matter preservation,
   • Poiseuille's law,
   • bifurcation law.
4. The output of the generation procedure is a vector description of a vascular tree. Such a description can be viewed as a list of 1) start and end points (expressed in 3D Cartesian coordinates), which define the length and direction of a vessel axis, and 2) the vessel radius.

Having a vector description of a vascular tree, it is possible to construct its 3D raster image. Real-valued data are converted into discretized coordinates using the following strategy. Each voxel in the scene is assigned an intensity value which is proportional to the vessel volume present in it. In the two extreme cases, when a voxel is either totally filled with a vessel or does not belong to the vascularity tree at all, the intensity becomes maximal or zero, respectively. To deal with the intermediate cases it is assumed that a voxel can be divided into a number of subvoxels (in this research 3 subvoxels are used in each direction). Then, if a voxel is occupied only partially, its intensity is adjusted to a value proportional to the number of subvoxels whose middle points belong to a vessel; a sketch of this sub-voxel rule is given below. This concept is visualized for the two-dimensional case in Fig. 2. An example of a raster tree image (Maximum Intensity Projection) generated using the above outlined procedure is depicted in Fig. 3. The last issue that should be taken into account during vascularity image generation is the noise introduced by any real-world image acquisition technique.
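A minimal sketch of that sub-voxel occupancy rule; the 3x3x3 sub-voxel grid matches the paper, while the inside() test, the 0-255 intensity range and the function names are assumptions.

import numpy as np

def voxel_intensity(voxel_corner, voxel_size, inside, sub=3, max_value=255):
    """Intensity of one voxel from sub-voxel occupancy (a sketch).

    inside(p) should return True when point p lies inside some vessel tube.
    """
    occupied = 0
    for i in range(sub):
        for j in range(sub):
            for k in range(sub):
                # middle point of sub-voxel (i, j, k)
                centre = voxel_corner + voxel_size * (np.array([i, j, k]) + 0.5) / sub
                if inside(centre):
                    occupied += 1
    return max_value * occupied / sub**3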
Fig. 2. Subpixel analysis in calculation of pixel intensities
Fig. 3. Example of a generated vascular tree raster image (MIP)
In the case of MRI, the noise signal can be modelled using the Rice distribution [6], given by the following probability density function:

  f(x | ν, σ) = (x / σ²) exp( −(x² + ν²) / (2σ²) ) I₀( xν / σ² ),   (1)

where x denotes a given voxel intensity, ν and σ are the distribution parameters, while I₀ is the modified Bessel function of the first kind and order 0.
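A common way to corrupt a noise-free magnitude image with Rice-distributed noise is to add Gaussian noise of standard deviation σ to two quadrature channels and take the magnitude; a short NumPy sketch (the function name is illustrative):

import numpy as np

def add_rician_noise(image, sigma, seed=0):
    """Corrupt a noise-free image so that each voxel follows the Rice pdf (1),
    with nu equal to the clean voxel value and sigma as given."""
    rng = np.random.default_rng(seed)
    real = image + rng.normal(0.0, sigma, image.shape)   # in-phase channel
    imag = rng.normal(0.0, sigma, image.shape)           # quadrature channel
    return np.sqrt(real**2 + imag**2)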
3 Vascularity Classification

Three basic steps are involved in each data classification task: a) feature generation, b) feature selection and c) inferring a rule that classifies feature vectors into separate categories. As described in the introduction, this study aims at developing a framework that will objectively substantiate the use of the texture model for vascular tree classification. Thus, the whole process of data exploration should be approached using unsupervised feature selection in step b) and cluster analysis in step c). Supervised analysis reveals that linear separation of the different vascularity classes is perfect in the case of images without noise (see Fig. 4a). Hence the choice of the clustering technique requires no special considerations and the simple k-means algorithm should suffice to properly cluster the data. However, selection of relevant discriminative texture parameters in an unsupervised manner remains a challenging task.
3.1 Texture Analysis
Submitting an image to a classification procedure requires formulating its appropriate description tractable by a computer–implemented algorithm. Image characteristics must reflect significant patterns of the visualized objects. Thus, it has been proposed to represent a vessel tree using texture parameters [12].
The literature describes several texture characterization models [11]. Choosing the proper one for a particular image type and application remains a difficult task. The common strategy is thus the calculation of many texture parameters derived from various concepts. Such an approach is based on the assumption that at least some of the computed features will reflect important regularities inherent in the image. Within this study three texture models are taken into consideration:

• Co-occurrence matrix (260 features),
• Run-length matrix (25 features), and
• Image gradient (5 features).
Texture parameter computation can be performed, e.g., using one of the freely available software packages [14,16,17]. Among these, the MaZda suite, developed with the authors' contribution, seems to offer the most versatile functionality [14], especially when 3D analysis is involved. It allows defining volumes of interest (VOI) for an image, calculating texture features within a VOI and classifying feature vectors using either supervised or unsupervised methods.
3.2 Unsupervised Feature Selection
The major problem in unsupervised feature selection is the lack of an unambiguous criterion for identifying attributes that allow accurate classification. To cope with this problem, most methods described in the literature employ various, though conceptually similar, measures of clustering quality, assuming that correctly classified data form high quality clusters (i.e. well separated and internally compact). However, as it has already been shown in [15], this concept happens to fail since true data vector classes often exhibit opposite properties. Remaining in the context of texture classification, this situation takes place if the analyzed images convey insufficient information about the visualized structures, either because of low image resolution or the presence of noise (see Fig. 4b). In order to avoid the above mentioned problem it is proposed to select features using, apart from a quality measure, yet another clustering evaluation criterion. It can be observed that relevant attributes allow creating similar cluster structures for two or more independent data sets. Statistical learning theory describes such a behavior of a clustering algorithm using the notion of stability [9]. Thus, in this research, it is proposed to optimize:

• the average silhouette value as a quality measure [1], and
• the Hamming distance between clusterings as a stability estimate [7].
While the first criterion can intuitively be described as a ratio of inter- to intra-class scatter, calculating the second one needs a more detailed explanation. Having two data sets S1 and S2, let π1, π2 denote their corresponding clustering results. S1 together with its cluster labels can be considered as a training sample and used to construct a classifier C (in the supervised
Fig. 4. Example scatter plot of selected texture parameters computed for noise-free (a) and noisy images (b)
manner). Then, C is invoked on the set S2 to produce the vector of class labels λ2. Eventually, the Hamming distance between π1 and π2 is defined as

  d_β^H(π1, π2) = 2 / (Q(Q − 1)) · [L_10 + L_01],   (2)
(3)
where Ξ denotes an investigated feature subset, ϕ(Ξ), β(Ξ) are clustering quality and stability measures calculated for a data set restricted to Ξ. It must be noted, that (2) actually estimates instability and thus β(Ξ) = 1 − dH β (Ξ).
4 Experiments For the need of the experiments two collections of vessel trees were generated. The first collection was created with the assumption of constant input flow while in the case of the second one it was the output flow that remained unchanged. Also in both collections the same value of blood viscosity (3.6 cp) was used for every tree object. Within a collection the generated trees
Cluster Analysis in Application to Quantitative Inspection
93
can be divided into five subgroups, each containing 32 instances, differing among each other with a pre-specified number of output branches: 3000, 3500, 4000, 4500, and 5000. Then, tree vector descriptions were converted into raster images. Every image occured in five variants corresponding to different noise level controlled by the Rice distribution parameter σ. The five values that were used in the study were 0 (no noise), 1, 3, 5 and 10 in a 256 levels image intensity range. Finally, in order to obtain numerical descritption of the trees, the images were submitted to texture analysis with spherical VOI covering all vessels. Every dataset was submitted to the unsupervised feature selection procedure twice. In the first run only the clustering quality measure guided the search, whereas in the second turn the criterion (3) was used. After feature selection, data vectors were clustered using the k –means algorithms and classification error was calculated. The obtained results are presented in Tab. 1. Table 1. Clustering error after feature selection [%] Evaluation criterion
0
ϕ Υ
0.00 0.00
ϕ Υ
0.00 0.00
Rice distribution parameter σ 1 3 5 Constant input flow 26.25 34.38 46.25 3.13 6.88 31.25 Constant output flow 0.63 10.00 18.50 0.00 0.63 4.38
10 53.13 45.00 26.25 26.25
5 Results Discussion Two main conclusions emerge from the analysis of the experimental results. First of all, the texture description of vessel tree images indeed encapsulates intrinsic vascularity patterns. Small errors of unsupervised classification achieved for the less noisy or noise-free images (independently from a criterion employed for feature selection procedure) objectively confirms the correspondence between texture features and tree growth parameters. Secondly, it has been demonstrated that using solely clustering quality measure as a feature selection criterion may occur insufficient to find good attribute subspace ensuring proper class separation. In the reported experiments this took place when images were corrupted by noise. Objectively different clusters are then neither compact nor isolated. Taking into account the notion of clustering stability helps resolving the problem. In that case, the classification error remains small until the impact of noise becomes dominant.
94
A. Klepaczko, M. Kocinski, and A. Materka
Acknowledgements This work was supported partially by the Polish Ministry of Science and Higher Education grant no. 1205/DFG/2007/02. The first author is a scholarship holder of the project entitled “Innovative education. . . ” supported by the European Social Fund.
References 1. Struyf, A., Hubert, M., Rousseeuw, P.J.: Computational Statistics & Data Analysis 26, 17–37 (1997) 2. Keener, J., Sneyd, J.: Mathematical Physiology. Springer, Berlin (1998) 3. Bezy-Wendling, J., Kretowski, M., Rolland, Y.: Biology and Med. 33, 77–89 (2003) 4. Karch, R., Neumann, F., Neumann, M., Szawlowski, P.: Annals Biomedical Engineering 31, 548–563 (2003) 5. Kretowski, M., Rolland, Y., Bezy-Wendling, J., Coatrieux, J.-L.: Comput. Methods Programs Biomed. 70, 129–136 (2003) 6. Tofts, P.: Quantitative MRI of the Brain: measuring changed caused by desease. John Wiley & Sons, Chichester (2003) 7. Lange, T., Braun, M.L., Roth, V., Buhmann, J.: Neural Computation 16, 1299– 1323 (2004) 8. Oh, I.S., Lee, J.S., Moon, B.R.: IEEE Trans. PAMI 26(11), 1424–1437 (2004) 9. von Luxburg, U., Ben-David, S.: Towards a statistical theory for clustering. In: PASCAL Workshop on Statistics and Optimization of Clustering (2005) 10. Dickie, R.: Microvascular Research, 20–26 (2006) 11. Materka, A.: What is the texture? In: Hajek, M., Dezortova, M., Materka, A., Lerski, R. (eds.) Texture Analysis for Magnetic Resonance Imaging, pp. 11–43. Med4 Publishing, Prague (2006) 12. Kocinski, M., Materka, A., Lundervold, A.: On The Effect of Vascular Tree Parameters on 3D Texture of Its Image. In: Proc. ISMRM-ESMRMB Conference, Berlin (2007) 13. Kocinski, M., Materka, A., Lundervold, A., Chekenya, M.: Classification of Vascular Tree Images on Numerical Descriptors in 3D. In: Tkacz, E., Komorowski, D., Kostka, P., Budzianowski, Z. (eds.) Proc. 9th International Conference SYMBIOSIS 2008, Kamien Slaski (2008) 14. Szczypinski, P., Strzelecki, M., Materka, A., Klepaczko, A.: Comput. Methods Programs Biomed. (2008), doi:10.1016/j.cmpb.2008.08.005 15. Klepaczko, A., Materka, A.: Clustering stability-based feature selection for unsupervised texture classification. Machine Graphics & Vision (in press, 2009) 16. http://www.keyres-technologies.com (2009) (visited: January 2009) 17. http://www.maths.bris.ac.uk/~ wavethresh/LS2W (2009) (visited: January 2009)
The Comparison of Normal Bayes and SVM Classifiers in the Context of Face Shape Recognition

Adam Schmidt

Poznan University of Technology, Institute of Control and Information Engineering, str. Piotrowo 3a, 60-965 Poznan, Poland
[email protected]

Summary. In this paper a face recognition system based on the shape information extracted with the Active Shape Model is presented. Three different classification approaches have been used: the Normal Bayes Classifier, the Type 1 Linear Support Vector Machine (LSVM) and the Type 2 LSVM with a soft margin. The influence of the shape extraction algorithm parameters on the classification efficiency has been investigated. The experiments were conducted on a set of 3300 images of 100 people, which ensures the statistical significance of the obtained results.
1 Introduction

In recent years face recognition has been among the most actively developed methods of biometric identification. This popularity is caused by two main reasons. Firstly, the face appearance is an important clue used by humans in visual identification. Secondly, automatic face recognition is a non-invasive method and can be used without the subject's knowledge or permission. Defining the proper feature set plays a crucial part in the development of an efficient face recognition system. The features used should be discriminating enough to appropriately model the characteristic description of a person. Moreover, it is important to ensure that the selected features can be extracted under varying conditions of image registration. The Active Shape Model (ASM) introduced by Cootes et al. [1] [2] is a powerful shape extraction method. It is especially well suited to the analysis of complex, flexible objects such as human faces. The success of the ASM encouraged many researchers to further improve this method. Zhao et al. [3] proposed the Weighted Active Shape Model, which utilized information on the stability of the contour points. After fitting to the image, contour points were projected to the shape space in a way that minimized the reconstruction error of the points with the smallest movement. Additionally, the authors introduced an image match measure of the whole shape rather than of its particular points. This approach facilitated choosing the final contour.
Zuo and de With [4] noticed that the Active Shapes needed precise initialization. To provide good initial conditions for their algorithm they matched a face template to the gradient image. They also resigned from using gradient profiles to fit contours to the image. Instead they used an N × N pixel neighborhood decomposed with the Haar wavelets. The system developed by Ge et al. [5] strived to improve the robustness of the Active Shapes to the face pose. As the first processing stage two face detectors were used: one detecting frontal faces and another detecting face profiles. Their responses were used to estimate the head rotation. The rotation angle was used to decide which of the ten previously trained Point Distribution Models should be used. Wan et al. [6] observed that different parts of the face contour model were differently disturbed by face pose changes and decided to split the face model into two submodels: one representing only the face outline and the second modelling the eyes, eyebrows, nose and mouth. To further increase the robustness of the system three independent models were created for the frontal and profile face views. A genetic algorithm with chromosomes describing both the submodel parameters and the similarity transformation was used to fit contours to the image. The fitness function was based on the match measures of both submodels and a third component describing the correctness of the submodels' relative position. The whole procedure was computationally expensive and the authors admit that the slight improvement in contour quality was paid for with a few times longer processing time. Cristinacce and Cootes [7] used Haar-like features and boosted classifiers to improve the accuracy of the ASM. They also proposed using boosted regression instead of classification to precisely locate contour points. This method outperformed the classic ASM w.r.t. both the accuracy and the processing time. However, those authors focused on improving the accuracy of the model, i.e. reducing the difference between manually and automatically marked contours, without giving any thought to possible applications of the method. The goal of this research was to find out if the shape of the face and its features (i.e. eyes, mouth, nose and eyebrows) extracted with the Active Shape Model (ASM) contains sufficient information for successful face recognition. Three different classification methods were used: the Normal Bayes Classifier (NBC) [8] [9], and the Type 1 (LSVM-C) and Type 2 (LSVM-ν) Linear Support Vector Machines [10] [11]. Moreover, the influence of ASM parameters such as the subspace dimension, the gradient profile length and the type of tangent projection on the efficiency of the recognition system has been investigated. This paper starts with a short review of the ASM. After that the setup of the experiment and the data used are described. The results are presented in section 5 and the last section contains the concluding remarks and future work.
2 Active Shape Model

The ASM is a shape extraction method statistically modelling the distribution of plausible shapes and the appearance of the contour points' neighbourhood. It consists of two submodels: the Point Distribution Model (PDM) and the Local Structure Model (LSM). The PDM is used to represent the variability of shapes in the training set. The exemplary shapes are aligned to a common coordinate frame (by using the Generalized Procrustes Analysis). After that they can be projected on the space tangent to the mean shape in order to reduce the nonlinearities. The tangent projection can be done either by elongating the shape vector (preserving its direction) or by projecting it in the direction parallel to the mean (minimal change). Then Principal Components Analysis is used to reduce the dimensionality of the model and to suppress the noise. The extracted shape is represented as a t-dimensional vector in the subspace defined by the selected Principal Components. The purpose of the LSM is to model the typical neighborhood of the contour points. It is achieved by sampling the gradient along profiles perpendicular to the contour and passing through the model points. For each point the mean profile and the covariance matrix are estimated according to the training set. The complexity of the model can be set by changing the length of the sampled profiles. This length is defined by the parameter k, which corresponds to the number of pixels sampled on each side of the contour. While fitting the shape to the image, particular contour points are moved to the positions which minimize the Mahalanobis distance between the sampled profiles and those in the LSM. The shape is extracted by consecutively moving the contour points with the LSM and regularizing the shape with the PDM until convergence. To improve the robustness of the method a Gaussian image pyramid is created. The shape is first fitted to the coarsest image. After that it is scaled to the next pyramid level and the fitting procedure is repeated. This multi-resolution framework enables large movements during the initial phases and more subtle shape adjustments in the final pyramid stages.
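The PDM regularization step can be sketched as a projection onto the selected principal components followed by clamping of the shape parameters; the ±3 standard deviation limit is a common ASM convention assumed here, not taken from the paper, and the names are illustrative.

import numpy as np

def regularize_with_pdm(shape, mean_shape, P, eigenvalues, beta=3.0):
    """Project a candidate shape into the PDM subspace and back (a sketch).

    shape, mean_shape - 2n-dimensional shape vectors in the aligned frame
    P                 - (2n, t) matrix of the first t principal components
    eigenvalues       - the corresponding t eigenvalues
    """
    b = P.T @ (shape - mean_shape)        # t-dimensional shape parameters
    limit = beta * np.sqrt(eigenvalues)
    b = np.clip(b, -limit, limit)         # keep the shape plausible
    return mean_shape + P @ b, b          # regularized shape and its parameters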
3 Experiment
The experiments were conducted on a set of 3300 high resolution (2048×1536) images of 100 people (33 per person). For each person, 22 images contained near-frontal views of the face with a neutral expression; the other 11 images featured faces with stronger pose deviations or non-neutral expressions (Figure 1). The images were divided into three sets: the Learning Set (LS) containing half of the near-frontal images, Testing Set A (TSA) - the other half of the near-frontal images, and Testing Set B (TSB) - all of the non-standard views.
Fig. 1. Examples of the images with shapes extracted with k=6, t=40 and no projection: a) TSA, b) TSB
To facilitate the creation of the PDM and LSM models, the near-frontal images of 30 people were manually annotated with 166 points forming the face contour model. Three PDMs (differing in the tangent projection method) and ten LSMs (with different lengths of the sampled profiles) were created. The shapes were extracted for each combination of the tangent projection method (no projection, elongation or parallel projection), the number of principal components used (t = {10, 15, . . . , 50}) and the gradient profile length (k = {1, 2, . . . , 10}). A 5-level Gaussian pyramid was used and the ASM model was initialized with manually annotated eye positions. For each parameter combination, shapes extracted from the LS set were used to train three classifiers: the NBC, the LSVM-C and the LSVM-ν. The SVMs were trained using 11-fold cross-validation to find the optimal C or ν parameters. These classifiers were then applied to the shapes from the TSA and TSB sets and the recognition rate (RR) was measured.
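A hypothetical scikit-learn rendition of this training stage is sketched below; QuadraticDiscriminantAnalysis stands in for the Normal Bayes Classifier (one Gaussian with a full covariance matrix per class), and the file names and parameter grids are assumptions, not taken from the paper.

```python
# Training the NBC, LSVM-C and LSVM-nu classifiers on extracted shape vectors.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.svm import SVC, NuSVC
from sklearn.model_selection import GridSearchCV

X_train, y_train = np.load("ls_shapes.npy"), np.load("ls_labels.npy")
X_test, y_test = np.load("tsa_shapes.npy"), np.load("tsa_labels.npy")

nbc = QuadraticDiscriminantAnalysis().fit(X_train, y_train)

# Type 1 (C) and Type 2 (nu) linear SVMs, tuned with 11-fold cross-validation.
lsvm_c = GridSearchCV(SVC(kernel="linear"), {"C": [0.1, 1, 10, 100]}, cv=11)
lsvm_nu = GridSearchCV(NuSVC(kernel="linear"), {"nu": [0.1, 0.3, 0.5]}, cv=11)
lsvm_c.fit(X_train, y_train)
lsvm_nu.fit(X_train, y_train)

for name, clf in [("NBC", nbc), ("LSVM-C", lsvm_c), ("LSVM-nu", lsvm_nu)]:
    print(name, "RR:", clf.score(X_test, y_test))
```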
4 Results
Figure 2 presents the RR as a function of the k and t parameters with no tangent projection used. The maximum RRs for the different classifier and tangent projection method pairs are gathered in Table 1. The maximum RR = 98.4% in set TSA was obtained with the LSVM-ν for k = 8, t = 20 and parallel tangent projection. The best result (RR = 66.4%) in set TSB was obtained with the LSVM-C for k = 10, t = 30 and projection by elongation. All classifier types gave similar, close to 100%, results in set TSA. The RRs in set TSB were considerably lower and the best results of the NBC were about 5% lower than the results of the SVM classifiers. Increasing the length of the sampled profiles up to k = 6 significantly improved the RR; further increases did not provide any noticeable efficiency boost. The dimensionality of the PDM had a strong influence on the RR of the NBC and LSVM-C classifiers: an initial increase (up to t = 20 for TSA and t = 30 for TSB) led to a significant increase of the RR. For t ∈ ⟨10, 20⟩ the LSVM-ν classifier performed better than the other two types.
Table 1. Maximum recognition rate for different classifiers and tangent projection methods
Tangent projection |          TSA            |          TSB
                   |  NBC   LSVM-C  LSVM-ν   |  NBC   LSVM-C  LSVM-ν
None               | 97.7%  97.7%   98.3%    | 60.1%  65.8%   66.6%
Elongation         | 97.1%  97.1%   98.1%    | 60.3%  66.4%   65.7%
Parallel           | 97.6%  98.1%   98.4%    | 61.6%  65.3%   65.7%
Fig. 2. Recognition rate of the NBC, SVM-C and SVM-ν classifiers, no tangent space projection. Left side: TSA, right side: TSB. (Surface plots of RR[%] as a function of t and k for each classifier.)
Incorporating any type of tangent space projection in the PDM did not produce any observable effect (Table 1).
5 Conclusions
The presented results showed that the shape of the face and its features carries enough information for efficient and reliable face recognition. The influence of the ASM parameters on the recognition rate has been assessed and the unimportance of the tangent space projection has been demonstrated. The high recognition rate obtained by the LSVM classifiers indicates that the classes form linearly separable clusters. The similar efficiency of the NBC classifier suggests that the decision surfaces can be described by hyperellipsoids. The significant drop of the RR in set TSB shows that the proposed system was susceptible to face pose variations and expression changes. In a real world application the learning sets should contain images taken under different registration conditions to improve the reliability of the system. The shape information could also be used to estimate the face pose and to identify the facial expression in order to facilitate correct image registration. Although the SVMs outperformed the NBC in set TSB, all classifiers had similar RRs in set TSA. This encourages the use of the NBC classifier, which is easier to train (in fact training simply amounts to estimating each class mean shape and covariance). The NBC is also easily scalable: introducing new classes or removing existing ones does not involve retraining the whole classifier. It is also possible to adaptively change the estimates of the mean and covariance to respond to changes in a subject's appearance. Future work will concentrate on improving the robustness of the system w.r.t. face pose and expression changes. Further plans involve the experimental installation of the system in real world conditions and testing its efficiency in an unconstrained environment.
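The scalability property noted above can be made concrete with a small sketch; the Welford-style running mean and covariance below is an assumed implementation, not taken from the paper: each class keeps only its sample count, mean and covariance, so new shape vectors (or whole new classes) can be folded in without retraining the other classes.

```python
# Incremental per-class statistics for a Normal Bayes style classifier.
import numpy as np

class ClassStats:
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros((dim, dim))       # running sum of deviation outer products

    def update(self, x):
        """Fold one new shape vector x into the class mean and covariance."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, x - self.mean)

    @property
    def covariance(self):
        return self.m2 / max(self.n - 1, 1)
```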
References 1. Cootes, T., Cooper, D., Taylor, C., Graham, J.: Active shape models - their training and application. Computer Vision and Image Understanding 61(1), 38–59 (1995) 2. Cootes, T., Taylor, C.: Statistical models of appearance for computer vision. Technical report, University of Manchester, Wolfson Image Analysis Unit, Imaging Science and Biomedical Engineering (2004) 3. Zhao, M., Li, S., Chen, C., Bu, J.: Shape Evaluation for Weighted Active Shape Models. In: Proc. of the Asian Conference on Computer Vision, pp. 1074–1079 (2004) 4. Zuo, F., de With, P.: Fast facial feature extraction using a deformable shape model with haar-wavelet based local texture attributes. In: Proc. of ICIP 2004, pp. 1425–1428 (2004) 5. Ge, X., Yang, J., Zheng, Z., Li, F.: Multi-view based face chin contour extraction. Engineering Applications of Artificial Intelligence 19, 545–555 (2006) 6. Wan, K.-W., Lam, K.-M., Ng, K.-C.: An accurate active shape model for facial feature extraction. Pattern Recognition Letters 26(15), 2409–2423 (2006) 7. Cristinacce, D., Cootes, T.: Boosted Regression Active Shape Models. In: Proc. British Machine Vision Conference 2007, vol. 2, pp. 880–889 (2007)
8. Theodoridis, S., Koutroumbas, K.: Pattern Recognition. Elsevier/Academic Press, Amsterdam (2003) 9. Duda, R., Hart, P., Stork, D.: Pattern Classification. John Wiley & Sons, Chichester (2001) 10. Vapnik, V.: The Nature of Statistical Learning Theory. Springer, Berlin (2000) 11. Burges, C.: A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery 2, 121–167 (1998)
Detection of Interest Points on 3D Data: Extending the Harris Operator Przemyslaw Glomb Institute of Theoretical and Applied Informatics of PAS Baltycka 5, 44-100 Gliwice, Poland Tel.: +48 32 231 73 19
[email protected]
Summary. We consider the problem of interest point detection, i.e. the location of points with a standing-out neighborhood, for 3D mesh data. Our research is motivated by the need for a general, robust characterization of the complexity of a mesh fragment, to be used in mesh segmentation and description methods. We analyze the reasoning behind the traditional Harris operator for 2D images [4] and propose several possible extensions to 3D data. We investigate their performance on several sets of data obtained with a laser digitizer.
1 Introduction
3D data, in the form of triangular meshes and point clouds, is becoming increasingly popular. It is the natural output of laser digitizers, which are the tool of choice for many important applications, e.g. cultural heritage archiving and civil engineering support. It is a flexible and powerful data representation for computer graphics, for rendering and visualization purposes. Its usage and processing, however, is not always straightforward, as it has a radically different structure from other popular representations (usually grid ordered, i.e. images). This motivates research into extending methods that have proven effective on those other domains to 3D data. Interest point detection is one such family of methods. Their aim is to single out the subset of points in the original data set which have, in general, a neighborhood of above average complexity. Originally developed for digital images, they have many applications for (among others) matching and recognition purposes. There have been numerous propositions for their formulation; they have recently been the subject of comparison and performance evaluation [9, 7, 10, 8]. In the reported conclusions, the Harris operator [4] is often quoted among the most effective methods. Specifically, [9] find the Harris operator equivalent or better than other detectors with respect to the repeatability and information content criteria; [7] did not favor any of the analyzed operators, but note Harris' performance, especially for cluttered and occluded scenes; [10] find in an experimental setting that it is more robust to noise than the other analyzed
method; [8] report it among the top three detectors, and the best performing for lighting change and camera focal length. While the Harris operator has been available for two decades now and, due to its simple formulation and generality, continues to find applications today, its principle has not been applied (to the best of the author's knowledge) to operators processing 3D mesh data. There have been a number of attempts to define an interest operator for mesh data. [11] propose a heuristic algorithm using geodesic distance to locate 'extrema' and 'saddle points', locations with popular, distinct mesh shapes; the detections are used for segmentation. A similar approach is adopted in [5]. The approach of locating regions with a special shape is popular; in [1] 'peaks', 'pits' and 'saddle regions' are identified based on mean and Gaussian curvature and used as an object descriptor for detection and retrieval purposes. A more complex method is proposed by [3], which uses a shape index derived from principal curvatures to select a subset of points; the regions selected have a high degree of either convexity or concavity. The description of a region is further refined to contain the centroid coordinates and a surface type index ('peak', 'ridge', 'valley', etc.); the results are used for object recognition. There are several reasons for extending the Harris operator to 3D meshes:
1. It has proved to be an effective, general and robust method in the image processing domain;
2. It has a simple, elegant formulation which does not require many parameters or training;
3. It does not require prior assumptions on the data structure, nor the definition of geometric features ('valleys', 'saddles', etc.);
4. It is able to produce a characterisation of local neighborhood variations, which can be the basis for a general feature index approach to 3D similarity search and indexing [2].
In this article we propose several possible extensions of the Harris operator. We consider the following propositions:
1. Using a Gaussian function constructed from the cloud of points;
2. Using the derivative of a fitted quadratic surface;
3. Using the Hausdorff distance in place of derivative computation;
4. Using an approximation of the integral of fitted surface differences.
The propositions are formulated and discussed in the next section. The third section contains an experimental investigation; several performance indicators are derived from experiments with 3D data obtained with a laser scanner to compare the propositions. The last section contains conclusions.
2 Extending the Harris Interest Operator for Mesh Data
This section presents the proposed extensions of the traditional (2D) Harris operator to 3D mesh data. First proposed in [4], it is defined for images, that is image
functions in the form f : ℝ² → ℝ. It is based on the observation of the difference function g (an MSE error) of the neighborhoods of two neighbouring image points a = [x, y] and a + Δa

\[ g(a, \Delta a) = \sum_b w(b)\,\bigl[ f(a + b + \Delta a) - f(a + b) \bigr]^2 \qquad (1) \]

where the range of b in conjunction with the window function¹ w : ℝ² → ⟨0, 1⟩ defines the neighborhood area. Using a Taylor expansion restricted to the first-order approximation we can write

\[ g(a, \Delta a) \approx \Delta a \begin{bmatrix} \sum_b w(b) f_x f_x & \sum_b w(b) f_x f_y \\ \sum_b w(b) f_y f_x & \sum_b w(b) f_y f_y \end{bmatrix} \Delta a^{T} = \Delta a\, D\, \Delta a^{T} \qquad (2) \]

where

\[ f_x = \frac{\partial f(a + b)}{\partial x}, \qquad f_y = \frac{\partial f(a + b)}{\partial y} \qquad (3) \]
are directional first-order derivatives of the image function (usually they are replaced with directional filters). The key observation is that the matrix D is built, independently of Δa, from the values of the derivatives around the point a. In this way it captures local image information related to the neighbourhood structure, in a form related to the computation of the autocorrelation function. [4] proposes to observe the eigenvalues of this matrix; two large values suggest a large variation of the local image structure in both directions. To avoid expensive eigenvalue computation, an approximation function is introduced

\[ h(a) = \det(D) - k\,\bigl(\operatorname{tr}(D)\bigr)^2 \qquad (4) \]

with the constant k ∈ ⟨0.4, 1.5⟩. Equation (4) produces the Harris operator value. For mesh data, our input is a set of vertices V ⊂ ℝ³. We know, however, that it is produced by digitizing the surface of a 3D object, so for a given v₀ ∈ V we may reasonably expect its neighborhood points to be samples of a continuous surface of intrinsic dimensionality 2. We define a neighborhood of v₀ as the subset V′ ⊂ V constructed of all points v ∈ V obtained by starting in v₀ and following the edges (from the mesh edge graph) while the Euclidean distance ‖v₀ − v‖ ≤ r, where r is a given parameter. We can preprocess the neighborhood set of points V′. Specifically, we can compute its centroid and translate it to [0, 0, 0], and perform a regression to establish the best fitting plane and rotate the set so that the plane normal points to [0, 0, 1]; this is to produce the maximum spread (range of coordinate
¹ A Gaussian window function is commonly used.
values) is in the Oxy plane. The set preprocessed this way will be denoted with V′′ and will be useful later. The key observation here is that we compute the Harris operator from a given point neighborhood indexed in two dimensions. The above preprocessing allows us to approximate that indexing on the point set V′′ by using the x and y coordinates. The remaining question is how to define the derivatives which are averaged to form the matrix D. We now present the four propositions for that.

Extension option 1: using a Gaussian function

We are given a set of m points A = {aᵢ}, aᵢ ∈ ℝ³. We assign a Gaussian to each point and sum them to get the function f(a)

\[ f : \mathbb{R}^3 \to \mathbb{R}, \qquad f : a \mapsto \sum_{i=1}^{m} e^{-\frac{|a - a_i|^2}{2\sigma^2}}. \qquad (5) \]
The function can be viewed as an extrapolation of the original sampled data set. The Gaussian function is a rather standard choice here; it has been successfully applied in similar approaches for potential functions, kernel based operators, and others. The choice of the σ parameter is an important decision, usually made on an empirical basis. A non-uniform choice of this parameter can be used to incorporate specific scanner errors (related to the beam angle). The computation of the function value can be optimized using the information in the mesh edges. We can easily compute the derivative of (5)

\[ f_i = \frac{\partial f(a)}{\partial i} = \sum_{j=1}^{m} \frac{-(a[i] - a_j[i])}{\sigma^2}\, e^{-\frac{|a - a_j|^2}{2\sigma^2}} \qquad (6) \]
where fᵢ stands for the derivative in the i-th direction (x, y or z), and a[i] for the i-th coordinate of a. With equation (6) and the preprocessing scheme we can compute the matrix D, and hence the operator value, as in the 2D version. We note that this proposition could be extended to 3D data without preprocessing, but we leave it in this form for easier comparison with the other propositions.

Extension option 2: using a fitted quadratic surface

Within the set of points V′′ we can fit a quadratic surface of the form [6]

\[ z(x, y) = c_{10}\, x + c_{01}\, y + \frac{c_{20}}{2} x^2 + c_{11}\, x y + \frac{c_{02}}{2} y^2. \qquad (7) \]
From this representation, the f_x and f_y derivatives are readily available, which makes further computation of the operator straightforward. This is done at
the cost of deriving the operator not from the actual point data, but from a simplified model; this can have both good and bad consequences, some of which will be discussed in the experiment section.

Extension option 3: using the Hausdorff distance

The third approach relies on the Hausdorff distance between two nonempty point sets A and B

\[ d_H(A, B) = \max\Bigl\{ \sup_{a \in A} \inf_{b \in B} |a - b|,\; \sup_{b \in B} \inf_{a \in A} |a - b| \Bigr\}. \qquad (8) \]
When computing the D matrix, we perform a weighted average of the derivative data for different points. This approach proposes to replace the computation of the derivative with the computation of the Hausdorff distance at each point. For a given v ∈ V′, the computation of f_x is done as follows: two points are located, one in the [1, 0, 0] direction and the other in the [−1, 0, 0] direction, and then their separate neighbourhoods are compared. We can easily compare two preprocessed neighborhoods of two points with the Hausdorff distance, as the result will depend only on their structure, not on the global geometry. A similar operation is done for f_y.

Extension option 4: using fitted surface differences

This approach is a combination of options (2) and (3). The computation is done as in (3), but instead of the Hausdorff distance, the difference between the fitted quadrics is used. Two point sets A and B are compared via their fitted quadratic surfaces z_A and z_B by computing an approximation of the integral

\[ d_Q(A, B) = \iint_{-r \le x, y \le r} \bigl( z_A(x, y) - z_B(x, y) \bigr)^2 \, dx\, dy \qquad (9) \]
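As a rough illustration of extension option 2, the sketch below fits the quadric of equation (7) to a preprocessed neighbourhood and evaluates the Harris response of equation (4); the uniform weighting of the derivative products (in place of an explicit window function w) and the layout of the `pts` array are assumptions made for brevity.

```python
# Harris response from a fitted quadratic surface (extension option 2).
import numpy as np

def harris_quadric(pts, k=0.4):
    """pts: (n, 3) neighbourhood already centred and rotated onto the Oxy plane."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # Least-squares fit of z = c10*x + c01*y + c20/2*x^2 + c11*x*y + c02/2*y^2.
    A = np.column_stack([x, y, 0.5 * x**2, x * y, 0.5 * y**2])
    c10, c01, c20, c11, c02 = np.linalg.lstsq(A, z, rcond=None)[0]
    # Analytic derivatives of the fitted surface at the neighbourhood points.
    fx = c10 + c20 * x + c11 * y
    fy = c01 + c11 * x + c02 * y
    # Averaged second-moment matrix D and the Harris response, equation (4).
    D = np.array([[np.mean(fx * fx), np.mean(fx * fy)],
                  [np.mean(fy * fx), np.mean(fy * fy)]])
    return np.linalg.det(D) - k * np.trace(D) ** 2
```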
3 Experiments
Here we describe the experiments investigating the performance of the proposed Harris extension options. We work on two data sets, scanned with a Minolta VI-9i scanner, each containing about 50,000 vertices. The sets ("Sabines" and "Circuit") are visualized in Figure 1 and selected properties of point neighborhoods with r = 5 are presented in Table 1. Both meshes, although different at first impression, have a common structure at the local level. "Circuit", in comparison with "Sabines", is smaller (hence more points for a given neighborhood) and has more abrupt changes; its regions are more difficult to approximate, either with a plane or with a quadric. The fitted quadric curvature distributions are similar.
Fig. 1. The two datasets used: the "Circuit" set and "Sabines" set

Table 1. Selected data for the neighborhoods for r = 5

mesh name  | point num. | plane ap. MSE  | quad. ap. MSE  | quad. curv. sd
"Sabines"  | 88 ± 31    | 0.728 ± 0.734  | 0.178 ± 0.392  | 0.02
"Circuit"  | 200 ± 66   | 1.073 ± 0.843  | 0.361 ± 0.346  | 0.023
Several options for the fragment radius r were analyzed. It turned out that it is practically constrained to the range ⟨3, 7⟩. Values smaller than 3 tend to produce neighbourhoods with too few points for quadric estimation, while values larger than 7 produce too diverse meshes, which are usually difficult to approximate with a quadric. The results presented here are for r = 5. The Harris parameter value 0.4 was used throughout the experiments. The σ value for the Gaussian function (option 1) was taken as 10, after an empirical investigation of the range ⟨0.01, 100⟩. The experiments consisted of observing different neighborhoods and the values assigned to them by the different versions of the Harris operator. This was done essentially on a qualitative basis; most of the quantitative approaches were designed for 2D images and are related to image specific phenomena. The only numeric approach adopted is error resilience, which is analyzed below. The software was prepared in C++. Generally, the Harris operator as defined in this paper 'favors' (assigns high values to) standing-out structures, most notably peaks, saddle points, and complicated combinations of peaks/valleys (see the examples in figure 2). The degree of curvature is correlated with the Harris operator value. The operator value may be biased (only negative values were observed); perhaps tuning the Harris parameter would improve this condition. The different options have their specific advantages and/or disadvantages:
1. is worst in terms of overall performance. While elegant, and promising to be easily extendable to 3D, it is too sensitive to the individual geometry within a given neighbourhood to be able to function reliably.
Fig. 2. Example surfaces. On the left, two surfaces with among the highest values; in the middle, two random middle-valued fragments; on the right, two random small-valued fragments. Circular markers have been superimposed to ease locating the central point.
2. is best within the limits given by the fitting operation. The only noticed problems are with a bad fit (rarely observed for these data sets) and with structures too complicated for a quadric (common with high r). However, even in these cases the performance is satisfactory (in terms of locating standing-out structures).
3. is worth further investigation. The specific nature of the Hausdorff metric makes it most effective where the neighborhood is not smooth but contains abrupt changes, singularities and missing elements, which commonly appear when scanning small details. The effect of the rotation in the preprocessing reduces its sensitivity to smooth peaks and ridges; analysis without this element will be a subject of further work.
4. is similar to the above; the results are less sensitive to individual point variations due to the averaging effect of the numerical integral approximation.
We also performed a noise stability test for options (1)–(3). A random error e was added to each mesh coordinate, and the relative operator difference (h − h_e)/h was observed (h is the no-error value, h_e is the result for the degraded mesh). The result is presented in figure 3. The error affects option (2) the most; this (paradoxically) is an asset, as it consistently reflects the noise degradation level. As a side note, in this framework the computation of the traditional Harris operator (equation 4) is no longer justified, as the eigenvalue computation produces a negligible increase in computation cost. This saves the non-straightforward and essentially heuristic choice of the parameter. The four methods vary widely in computational cost. For computing 500 Harris values, option (1) needs 4.949 seconds, option (2) 0.95 s (including fitting time), option (3) 178.928 s (mostly due to the Hausdorff distance computation; this could possibly be improved by a faster implementation) and option (4) 3.907 s. After all the analysis, option 2 is recommended as the best performing in almost all cases and the least computationally costly.
Fig. 3. Error sensitivity of the 3D Harris propositions (relative Harris operator error as a function of mesh noise level, for options 1–3)
4 Conclusions
We have proposed and experimentally evaluated several propositions for extending the 2D Harris operator to 3D mesh data. Among the proposed approaches, option (2) stands out as a versatile and effective method for interest point detection on mesh data. It is suitable for various mesh segmentation and indexing frameworks. This work was supported by the Polish Ministry of Science and Higher Education research project N N516 1862 33 'Visual techniques for the multimodal hierarchic 3D representations of cultural heritage artifacts'. The original of "Sabines" was made available by the Museum of Gliwice.
References 1. Akagunduz, E., Ulusoy, I.: 3D object representation using transform and scale invariant 3D features. In: Proc. of the IEEE 11th International Conference on Computer Vision, pp. 1–8 (2007) 2. Bustos, B., Keim, D.A., Saupe, D., Schreck, T., Vranić, D.V.: Feature-based similarity search in 3D object databases. ACM Computing Surveys 37(4), 345–387 (2005) 3. Chen, H., Bhanu, B.: 3D free-form object recognition in range images using local surface patches. Pattern Recognition Letters 28, 1252–1262 (2007) 4. Harris, C., Stephens, M.: A combined corner and edge detector. In: Proc. of the 4th Alvey Vision Conference, pp. 147–151 (1988) 5. Katz, S., Leifman, G., Tal, A.: Mesh segmentation using feature point and core extraction. The Visual Computer 21(8-10), 649–658 (2005) 6. Meek, D.S., Walton, D.J.: On surface normal and gaussian curvature approximations given data sampled from a smooth surface. Computer Aided Geometric Design 17, 521–543 (2000) 7. Mikolajczyk, K., Tuytelaars, T., Schmid, C., Zisserman, A., Matas, J., Schaffalitzky, F., Kadir, T., van Gool, L.: A comparison of affine region detectors. International Journal of Computer Vision 65(1/2), 43–72 (2005)
8. Moreels, P., Perona, P.: Evaluation of features detectors and descriptors based on 3D objects. International Journal of Computer Vision 73(3), 263–284 (2007) 9. Schmid, C., Mohr, R., Bauckhage, C.: Evaluation of interest point detectors. International Journal of Computer Vision 37(2), 151–172 (2000) 10. Vincent, E., Laganière, R.: Detecting and matching feature points. Journal of Visual Communication and Image Representation 16, 38–54 (2005) 11. Zhou, Y., Huang, Z.: Decomposing polygon meshes by means of critical points. In: Proc. of the 10th International Multimedia Modelling Conference, pp. 187–195 (2004)
New Edge Detection Algorithm in Color Image Using Perception Function
Wojciech S. Mokrzycki¹ and Marek A. Samko²
¹ Uniwersytet Warmińsko–Mazurski, Olsztyn, [email protected]
² Uniwersytet Warmińsko–Mazurski, Olsztyn, [email protected]
Summary. In this paper several edge detection methods for color images are described: those using monochromatic image detectors as well as those operating on specific color image features. The paper presents a new algorithm, using a perception function, that can be applied to edge detection in color images. The algorithm involves transforming the image into the perception function, which then allows applying the Canny/Deriche algorithm in the edge detection process.
1 Introduction
Contour detection in images is one of the main elements of image processing. It is a very important operation used in many applications (such as image segmentation systems, recognition systems or scene analysis systems), because edges convey important information about the objects in the image. Studies of the human vision system show that it first extracts edges and then recognizes and identifies objects on the basis of the detected edges. Edge detection in monochromatic images is well understood. Many detectors have been described for detecting edges in such images, including gradient algorithms, second derivative detectors and other, more sophisticated ones such as LoG, DoG or Canny's algorithm. Contour detection in color images is more complex: the image function is not one dimensional, as in monochromatic images, but rather three dimensional, which makes the contour detection process more complicated. In this article we briefly describe algorithms used to detect edges in color images. We also present our algorithm, based on a perception function, and the experiments we have done using this algorithm¹.
2 State of the Art
2.1 Use of Monochromatic Image Detectors
There are many algorithms used for contour detection in color images. Some of them are simple and similar to algorithms used for monochromatic
¹ Related research was financially supported by research grant no. 4311/B/T02/2007/33 of the Polish Ministry of Science and Higher Education.
images. Others are quite complex. We briefly describe the most popular methods of contour detection in color images. The simplest method is to use the well known algorithms developed for edge detection in monochromatic images. In a color image the value of the image function is not a scalar value f(x, y) : ℝ² → ℝ but rather a three-element vector representing the color of the pixel, f(x, y) : ℝ² → ℝ³. Monochromatic image detectors can still be used, however, after some necessary operations, namely a transformation of the three-dimensional color space into a one-dimensional space. As human vision studies show, most of the information about the image is carried by the luminance component: L(x, y) = 0.299R(x, y) + 0.587G(x, y) + 0.114B(x, y). On the basis of this knowledge the color image is converted into a luminance image, so that we obtain a one-dimensional image function. Such an image is then given as the input of the chosen monochromatic image detector. Another method is to use each of the three color components (R, G and B) as a separate, independent monochromatic image. Each of the three images created in this way is taken as the input of a monochromatic image edge detector. At the output of this detector we receive three edge maps (one for each color component). The maps are then composed into one resultant image according to the equation:

\[ m(x, y) = \max\bigl( m_R(x, y),\, m_G(x, y),\, m_B(x, y) \bigr) \qquad (1) \]
where m_R, m_G, m_B are the edge maps for the R, G and B components respectively.

Vector Gradient Operator

The Vector Gradient Operator makes use of the gradient operator idea, modified in such a way that it works not in a scalar space but in a three-dimensional space. In such a 3D space every pixel has three color components and can be considered as a vector c [6]:

\[ c = \begin{bmatrix} c_R \\ c_G \\ c_B \end{bmatrix} \qquad (2) \]

The vector gradient value is calculated as the maximum Euclidean distance between the vector of the central pixel (in a 3×3 pixel window) and the vectors of its eight neighbour pixels: VG = max{D₁, D₂, ..., D₈}, where Dᵢ = ‖c₀ − cᵢ‖ is the Euclidean distance between c₀ and cᵢ. The Vector Gradient Operator's main disadvantage is its sensitivity to impulse and Gaussian noise [6].

2.2 Vector Directional Gradient

In this algorithm the pixel color is represented as a three-element vector. Filter masks similar to the Prewitt masks are also used [6]:
\[ \Delta H = \frac{1}{3} \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix} = \begin{bmatrix} H_- & 0 & H_+ \end{bmatrix} \qquad (3) \]

\[ \Delta V = \frac{1}{3} \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} V_- \\ 0 \\ V_+ \end{bmatrix} \qquad (4) \]
Using these masks we can compute an estimate of the gradient (in 3D space) for the central pixel (x₀, y₀). The local value of the changes of the image function at pixel (x₀, y₀) in the horizontal and vertical directions is calculated from the following expressions [6]:

\[ \Delta H(x_0, y_0) = H_+(x_0, y_0) - H_-(x_0, y_0) \qquad (5) \]

\[ \Delta V(x_0, y_0) = V_+(x_0, y_0) - V_-(x_0, y_0) \qquad (6) \]

In the next step the value of the Vector Directional Gradient is calculated as:

\[ VDG = \sqrt{ \Delta V(x_0, y_0)^2 + \Delta H(x_0, y_0)^2 } \qquad (7) \]
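A hedged sketch of the VDG computation for a single pixel is given below; the interpretation of the masks as column/row colour averages and the reduction of the vector-valued differences with the Euclidean norm are assumptions made for illustration, not details given by the authors.

```python
# Vector Directional Gradient at one interior pixel of an H x W x 3 RGB image.
import numpy as np

def vdg(img, y0, x0):
    win = img[y0 - 1:y0 + 2, x0 - 1:x0 + 2, :].astype(float)  # 3x3 colour window
    h_minus = win[:, 0, :].mean(axis=0)   # mean colour vector, left column
    h_plus = win[:, 2, :].mean(axis=0)    # mean colour vector, right column
    v_minus = win[0, :, :].mean(axis=0)   # mean colour vector, top row
    v_plus = win[2, :, :].mean(axis=0)    # mean colour vector, bottom row
    dh = np.linalg.norm(h_plus - h_minus)
    dv = np.linalg.norm(v_plus - v_minus)
    return np.hypot(dh, dv)               # equation (7)
```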
The Vector Directional Gradient is less sensitive to image noise than the Vector Gradient Operator. Thanks to this, the detected edges are stronger. This property of the VDG algorithm is caused by the averaging of pixel colors (the use of masks) [6].

2.3 Cumani's Algorithm

This algorithm is based on Cumani's idea of computing zero-crossings of the second directional derivative of a color image. In this approach the color image is considered as a two-dimensional vector field f(x, y) with three components R, G and B. Cumani defined [2] the square local contrast S(P, n) of a point P(x, y) as the squared norm of the directional derivative of f in the direction of the unit vector n = (n₁, n₂) [3]. This contrast is given by:

\[ S(P, n) = \frac{\partial f}{\partial x} \cdot \frac{\partial f}{\partial x}\, n_1^2 + 2\, \frac{\partial f}{\partial x} \cdot \frac{\partial f}{\partial y}\, n_1 n_2 + \frac{\partial f}{\partial y} \cdot \frac{\partial f}{\partial y}\, n_2^2 = E\, n_1^2 + 2 F\, n_1 n_2 + H\, n_2^2 \qquad (8) \]

where

\[ E = \frac{\partial f}{\partial x} \cdot \frac{\partial f}{\partial x}, \qquad F = \frac{\partial f}{\partial x} \cdot \frac{\partial f}{\partial y}, \qquad H = \frac{\partial f}{\partial y} \cdot \frac{\partial f}{\partial y} \]

Let α± be the extreme values and n± the corresponding eigenvectors of the matrix

\[ A = \begin{bmatrix} E & F \\ F & H \end{bmatrix}. \]

Edge points in the image are detected by computing the zero-crossings of D_s(P, n), considering the sign of D_s along a curve tangent to n₊ at P [3]. D_s(P, n) is the first directional derivative at point P and can be written as:

\[ D_s(P, n) = \nabla \alpha_+ \cdot n_+ = E_x n_1^3 + (E_y + 2 F_x) n_1^2 n_2 + (H_x + 2 F_y) n_1 n_2^2 + H_y n_2^3 \qquad (9) \]
where the indices x and y denote the corresponding derivatives with respect to x and y. The main disadvantage of this algorithm is the difficulty in locating edges because of the ambiguity of the gradient direction. There are many other more or less complex algorithms used to detect contours in color images. In the next section we describe our concept of a new algorithm.
3 The Concept of the Algorithm
In this section we describe our concept of a contour detection algorithm using a perception function. It uses a standard monochromatic Canny edge detector after converting the image from the 3D color space to a 1D space. We have a color image in RGB space, f^RGB(x, y). We convert this image into a perception function according to the equation:

\[ f^p(x, y) = 0.3\, R(x, y) + 0.6\, G(x, y) + 0.1\, B(x, y) \qquad (10) \]

where R(x, y), G(x, y), B(x, y) are the values of the R, G and B color components of the pixel (x, y). Now we can use the perception function in the same way as a monochromatic 1D image. Next we can use any contour detector to obtain a contour map f^k. For this purpose we use the Canny/Deriche algorithm, but it could be any edge detector as well. Then we multiply the contour image f^k by each of the color components R, G and B of the input image f^RGB(x, y). Thanks to this we obtain the component contours: f^kR, f^kG and f^kB. The last step of this algorithm is to combine the component contours into one color contour image.
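An illustrative OpenCV sketch of this pipeline is given below; the input file name is assumed, and the Deriche smoothing is approximated by a Gaussian blur before the Canny step, which is a simplification of the Canny/Deriche detector used by the authors.

```python
# Perception-function based colour contour extraction, equation (10) onwards.
import cv2
import numpy as np

img = cv2.imread("lena.png")                       # BGR uint8 image (path assumed)
b, g, r = cv2.split(img.astype(np.float32))

# Perception function, equation (10).
fp = np.clip(0.3 * r + 0.6 * g + 0.1 * b, 0, 255).astype(np.uint8)

# Contour map f^k from the one-dimensional perception image.
edges = cv2.Canny(cv2.GaussianBlur(fp, (5, 5), 1.0), 50, 150)

# Component contours: multiply the contour map by each colour component
# and recombine them into one colour contour image.
mask = (edges > 0).astype(np.float32)
colour_contours = cv2.merge([b * mask, g * mask, r * mask]).astype(np.uint8)
cv2.imwrite("contours.png", colour_contours)
```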
Fig. 1. Lena–input and output images
Fig. 2. Castle–input and output images
Fig. 3. Wawel–input and output images
4 Experiments
We have tested our algorithm on three input images. In the tests we used various settings of the Canny/Deriche algorithm. The original images and the images obtained after using our algorithm are presented in Figures 1, 2 and 3. As we can see, color edges are visible between regions. The color of an edge depends on the colors of the regions that are split by this edge.
5 Conclusions
In this paper we have briefly described various methods of color image edge detection. Some of them use standard monochromatic edge detectors whereas
some operate on specific color image features. We have also described a new method of contour detection in color images based on a perception function. The results of the experiment showed that the contour maps produced with this method can be very similar to the maps obtained using other algorithms. Our method is based on converting the color image to a perception function; the contour detection process is then performed using the standard, well known Canny/Deriche algorithm. Thanks to this, our method is easy to understand and use.
References 1. Zhou, J., Peng, J., Ding, M.: Improved Codebook Edge Detection. Graphical Models and Image Processing 57(6), 533–536 (1995) 2. Cumani, A.: Edge Detection in Multispectral Images. Graphical Models and Image Processing 53, 40–51 (1991) 3. Koschan, A.: A Comparative Study On Color Edge Detection. In: Li, S., Teoh, E.-K., Mital, D., Wang, H. (eds.) ACCV 1995. LNCS, vol. 1035, pp. 574–578. Springer, Heidelberg (1995) 4. http://graphics.stanford.edu/~jowens/223b (accessed on: 23.06.2007) 5. http://robotics.stanford.edu/~ruzon (accessed on: 3.10.2006) 6. http://stuff.blackcomb.pl/PRZEDMIOTY/GRAFIKA_KOMPUTEROWA/DOKUMENTY/detekcja_krawedzi_obrazu.pdf (accessed on: 20.11.2008)
A Comparison Framework for Spectrogram Track Detection Algorithms Thomas A. Lampert and Simon E.M. O’Keefe Department of Computer Science, University of York, York, U.K. {tomal,sok}@cs.york.ac.uk
Summary. In this paper we present a method which will facilitate the comparison of results obtained using algorithms proposed for the problem of detecting tracks in spectrograms. There is no standard test database which is carefully tailored to test different aspects of an algorithm. This naturally hinders the ability to perform comparisons between a developing algorithm and those which exist in the literature. The method presented in this paper will allow a developer to present, in a graphical way, information regarding the data on which they test their algorithm while not disclosing proprietary information.
1 Introduction
Acoustic data received via passive sonar systems is conventionally transformed from the time domain into the frequency domain using the Fast Fourier Transform. This allows for the construction of a spectrogram image, in which time and frequency are the axes and intensity is representative of the power received at a particular time and frequency. It follows from this that, if a source which emits narrowband energy is present during some consecutive time frames, a track, or line, will be present within the spectrogram. The problem of detecting these tracks is an ongoing area of research with contributions from a variety of backgrounds ranging from signal processing [1] to image processing [2, 3, 4]. This research area is interesting and essential as it is a critical stage in the detection and classification of sources in passive sonar systems and the analysis of vibration data. Applications are wide and include identifying and tracking marine mammals via their calls [5, 6], identifying ships, torpedoes or submarines via the noise radiated by their mechanics [7, 8], distinguishing underwater events such as ice cracking [9] and earthquakes [10] from other types of source, meteor detection, and speech formant tracking [11]. The field of research is hindered by a lack of
This research has been supported by the Defence Science and Technology Laboratory (DSTL)1 and QinetiQ Ltd.2 , with special thanks to Duncan Williams1 for guiding the objectives and Jim Nicholson2 for guiding the objectives and also providing the synthetic data.
publicly available spectrogram datasets and, therefore, of the ability to compare results from different solutions. The majority of real world datasets used for this research contain proprietary data and therefore it is impossible to publish results and descriptions of the data. When synthetic spectrograms are used, there is a large variation between authors in the type of data which is tested upon. Also, between applications, there is a large variation in the appearance of the tracks which an algorithm is expected to detect. For example, a whale song is very different from a Doppler shifted tonal emitted by an engine. It is therefore unclear whether an algorithm which is developed for use in one application will be successful in another. It is the aim of this paper to address these issues by enabling authors to quantitatively compare an algorithm's performance without publishing actual data. The Spectrogram Complexity Measure (SCM) presented in this paper allows the visual representation of the complexity of detecting tracks within spectrogram images without publishing the actual nature of the data. This is achieved through two means: forming a measure of spectrogram track detection complexity, and forming a distribution plot which allows an author to denote which spectrograms are within the detection ability of an algorithm. This allows researchers to determine and publish the ranges in which their algorithm is successful and those in which it fails. Using this measure in publications will also allow, as closely as possible, comparisons with other methods, something which is lacking in this field. As with any classification/detection scenario, ground truth data is needed to form a training/testing phase. In this area the ground truth data is commonly in the form of a list of coordinate locations where track pixels are located, or a binary image. We propose to use this data in conjunction with the spectrogram images to calculate the proposed measures. This paper is laid out as follows: in Section 2 the metrics and measures are presented, along with a method to convert binary map ground truth data into coordinate lists. In Section 3 example spectrogram images and their scores are presented along with an example distribution using the proposed measures. Finally, in Section 4 we conclude this paper.
2 Method
In this section we present track and spectrogram complexity metrics and combine them to form a two-dimensional distribution. Prior to presenting the metrics we propose a simple algorithm to convert ground truth binary images into ordered lists of (x, y) coordinates depicting the track location in a spectrogram.
2.1 Ground Truth Conversion
An example of a ground truth binary map is presented in Fig. 1. Before the proposed metrics can be applied this data needs to be converted into
Fig. 1. Ground truth binary map, 1 (black) depicts signal locations and 0 (white) noise locations
an ordered list of (x, y) coordinates depicting track locations. The modified region-growing algorithm presented in Alg. 1 can be used to achieve this, where M and N are the height and width of the spectrogram (respectively) and minDistPoint(c, d) is a function which determines the index of the point in vector d which is closest to c. It is assumed that only the centre (strongest) frequency bin for each track is marked and therefore that one frequency bin (for each track) is marked in each time frame.
Algorithm 1. Ground truth conversion

procedure CONVERT TEMPLATE(template)
    initialise c = 0
    initialise l = 0
    for m = 1 : M do
        for n = 1 : N do
            if template_{n,m} = 1 then
                l_c = GROUP TRACK(template, n, m)
                template(l_c) = 0
                inc(c)
            end if
        end for
    end for
    return l
end procedure

procedure GROUP TRACK(template, n, j)
    track = {[n, j]}
    current point = [n, j]
    for m = j + 1 : M do
        d = template_{:,m} > 0
        x = minDistPoint(current point, d)
        track = track ∪ {[x, m]}
        current point = [x, m]
    end for
    return track
end procedure
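A possible NumPy translation of Alg. 1 is sketched below; the orientation of the `template` array (frequency bins along rows, time frames along columns) and the early exit for empty frames are assumptions.

```python
# Converting a binary ground-truth map into ordered lists of track coordinates.
import numpy as np

def min_dist_point(current, candidates):
    """Frequency bin in `candidates` closest to the current bin index."""
    return candidates[np.argmin(np.abs(candidates - current))]

def group_track(template, n, j):
    track = [(n, j)]
    current = n
    for m in range(j + 1, template.shape[1]):
        candidates = np.flatnonzero(template[:, m])
        if candidates.size == 0:
            break                          # no marked bin in this time frame
        current = min_dist_point(current, candidates)
        track.append((current, m))
    return track

def convert_template(template):
    tracks = []
    marked = template.copy()
    for m in range(marked.shape[1]):
        for n in range(marked.shape[0]):
            if marked[n, m] == 1:
                track = group_track(marked, n, m)
                for (row, col) in track:
                    marked[row, col] = 0   # remove the grouped track
                tracks.append(track)
    return tracks
```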
Fig. 2. Intended classification boundaries of the proposed distribution (SNR (dB) versus Shape Complexity; regions labelled Easy, Medium and Hard, with the corners marked 'Hard SNR / Easy Shape' and 'Easy SNR / Hard Shape')
2.2 Spectrogram Complexity Measure (SCM)
It is assumed that a straight vertical track is much easier to detect than a straight track with unknown gradient, and that both are, in turn, simpler to detect than sinusoidal or random type tracks. Also, independently of this, a set of high SNR tracks is much easier to detect than a set of low SNR tracks. Thus, we have the following factors which complicate the detection of tracks within a spectrogram image:
• Signal-to-Noise Ratio (SNR)
• Complexity of tracks
  – Gradient
  – Curvature
As above, these factors are naturally grouped into two subsets of conditions: SNR related complexity and track shape related complexity. This separation allows for the construction of a two-dimensional plot in which each axis represents one of these factors; the y-axis SNR and the x-axis track complexity. Such a construction is shown in Fig. 2. We reverse the SNR axis to reflect the increasing difficulty as SNR decreases; in this manner the hardest detection problems (e.g. sinusoidal tracks in low SNR spectrograms) are located in the top right and the easiest (e.g. straight, vertical tracks in high SNR spectrograms) at the bottom left of the distribution, and variations are captured between these extremes.

Signal-to-Noise Ratio Complexity

After applying Alg. 1 we have an ordered set, l, containing the pixel locations of each track in the spectrogram S. We can define an additional set, h, which
contains all the coordinate pairs possible in S, and therefore n = h \ l as the pixel locations not included in l. The average intensity values of these sets can then be calculated as P̄_l and P̄_n. Thus the SNR of S, in terms of decibels, is calculated using Eq. (1).

\[ SNR = 10 \log_{10} \frac{\bar{P}_l}{\bar{P}_n} \qquad (1) \]

This measure forms the y-axis of our plot in Fig. 2.

Track Complexity

Using the ordered list of coordinates depicting track locations derived using Alg. 1 we can calculate metrics to measure the detection complexity of each track within a spectrogram. As mentioned above, we propose that the specific features of a track which complicate its detection are its gradient and shape. The averaged absolute values of a track's first and second derivatives measure the slope and curvature respectively. The first derivative can be discretely approximated [12] using Eq. (2) and the second derivative using Eq. (3). Averaging these over all the tracks present in a spectrogram therefore forms a measure of the overall complexity of detecting the tracks contained.

\[ \frac{\delta l_c}{\delta y} \approx \frac{1}{N} \sum_{y=1}^{N-1} \bigl| l_c(y) - l_c(y+1) \bigr| \qquad (2) \]

\[ \frac{\delta^2 l_c}{\delta y^2} \approx \frac{1}{N} \sum_{y=2}^{N-1} \bigl| l_c(y-1) - 2 l_c(y) + l_c(y+1) \bigr| \qquad (3) \]

where N is the number of points forming the track and l_c(y) is the yth point on the track l_c. Therefore, the gradient measure will be 0 for a straight vertical track and positive for a sloped track, whilst the shape measure will have the value 0 for straight and straight-but-sloped tracks and a positive value for a track which exhibits curvature. Combining and averaging these measures over all the tracks in a spectrogram forms a spectrogram's track complexity measure, Eq. (4).

\[ TC = \frac{1}{C} \sum_{c=1}^{C} \left[ \frac{\delta l_c}{\delta y} + \frac{\delta^2 l_c}{\delta y^2} \right] \qquad (4) \]
where C is the number of tracks and l_c is the cth track present in a spectrogram. When these measures are used in conjunction with the distribution plot presented in Fig. 2 (with axes representing SNR and TC), they allow us to visually represent the complexity distribution of a set of
spectrograms. Using this it is now possible to perform comparisons of algorithms and of the test data on which they are tested without disclosing the data set itself. When this diagram is included within a publication it is possible to denote for which of the spectrograms an algorithm can successfully detect the tracks by highlighting the corresponding SCM points. This will allow an author to make quick judgements on whether a newly developed algorithm has comparable or superior performance.
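For concreteness, a hypothetical NumPy sketch of both measures is given below; the representation of each track as an ordered list of (row, column) pixel coordinates, with the row index taken as the track position l_c(y), is an assumption about the data layout.

```python
# Spectrogram Complexity Measure: SNR (equation 1) and TC (equations 2-4).
import numpy as np

def scm(spec, tracks):
    # Signal/noise split: mask the marked track pixels.
    mask = np.zeros(spec.shape, dtype=bool)
    for track in tracks:
        for row, col in track:
            mask[row, col] = True
    snr = 10.0 * np.log10(spec[mask].mean() / spec[~mask].mean())   # equation (1)

    # Track complexity, averaged over all tracks.
    tc = 0.0
    for track in tracks:
        l = np.array([row for row, _ in track], dtype=float)        # l_c(y)
        n = len(l)
        d1 = np.abs(np.diff(l)).sum() / n                           # equation (2)
        d2 = np.abs(l[:-2] - 2.0 * l[1:-1] + l[2:]).sum() / n       # equation (3)
        tc += d1 + d2
    tc /= len(tracks)                                               # equation (4)
    return snr, tc
```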
3 Results
Three spectrogram images of varying complexity are presented in Fig. 3. The complexity measurements of these spectrograms, according to the presented metrics, are: spectrogram 1 (top) SNR = 3.23 dB and TC = 0; spectrogram 2 (middle) SNR = 2.31 dB and TC = 0.3618; spectrogram 3 (bottom) SNR = 6.24 dB and TC = 0.8920 (for readability's sake the SNRs in the plot are not representative of these figures). The track complexity for the template presented in Fig. 1 is TC = 0.8046, lower than that of spectrogram 3 as it contains a section of straight sloped track. We also include an example of a SCM plot determined on a set of 2096 synthetic spectrograms, Fig. 4. The complexity measurements taken from the three spectrograms in Fig. 3 have been included in this plot as (from top to bottom) diamond, square and circular points. It can be seen in Fig. 3 that, of the three, spectrogram 1 contains the easiest track complexity to detect, straight vertical tracks. This is reflected in the measure and, therefore, the distribution plot (Fig. 4) - the point is located at the far left of the track complexity axis. However, the SNR of this spectrogram is low and therefore the point is located near the top in
Fig. 3. Examples of spectrograms which have varying complexity. The results of each measure are: top - 1st derivative = 0 and 2nd derivative = 0, middle - 1st derivative = 0.1206 and 2nd derivative = 0.2412, and bottom - 1st derivative = 0.2990 and 2nd derivative = 0.5930.
Fig. 4. A SCM distribution, an example of a distribution obtained by applying the proposed measures to a synthetic set of spectrogram images (axes: SNR (dB) versus Track Complexity)
the SNR axis. Spectrogram 2 has a medium track complexity as it exhibits an increase in frequency (a sloped track) and a similar SNR to spectrogram 1; therefore it is located in a similar area but further to the right on the track complexity axis. Finally, spectrogram 3 represents a more complicated sinusoidal track which exhibits high curvature and, therefore, its position on the track complexity axis is towards the far right. Its SNR is relatively high and on this axis it is located below the centre. As desired, the differences between these track shapes are reflected in the proposed measures and distribution plot.
4 Conclusion
In this paper we have presented a novel technique to allow the publication of results which have been derived from sensitive data. This allows for the comparison of results which have been determined using different sets of data and which exhibit different complexities. This method also allows authors to indicate whether an algorithm which is developed for a specific application will be successful in another. We have achieved this by devising a plot and metrics which express the complexity in terms of SNR and track shape. We have presented some example spectrograms and their complexities according to our metrics and have included these in a distribution plot derived from a sample data set of synthetic spectrograms with varying track appearances and SNRs. It has been shown that the measures successfully separate the differing complexities of each of the spectrograms and that this is reflected in the distribution plot. It is our hope that authors adopt this method to allow the comparison of algorithms developed in the area of spectrogram track
detection without disclosing sensitive data sets or in the absence of a publicly available data set. Nota bene, we have presented measures which determine the difficulty of detecting features contained within a spectrogram image; as such, these measures are independent of the FFT resolution used to derive the spectrogram.
References 1. Paris, S., Jauffret, C.: A new tracker for multiple frequency line. In: Proc. of the IEEE Conference for Aerospace, vol. 4, pp. 1771–1782. IEEE, Los Alamitos (2001) 2. Lampert, T.A., O’Keefe, S.E.M.: Active contour detection of linear patterns in spectrogram images. In: Proc. of the 19th International Conference on Pattern Recognition (ICPR 2008), Tampa, Florida, USA, pp. 1–4 (December 2008) 3. Abel, J.S., Lee, H.J., Lowell, A.P.: An image processing approach to frequency tracking. In: Proc. of the IEEE Int. Conference on Acoustics, Speech and Signal Processing, March 1992, vol. 2, pp. 561–564 (1992) 4. Martino, J.C.D., Tabbone, S.: An approach to detect lofar lines. Pattern Recognition Letters 17(1), 37–46 (1996) 5. Morrissey, R.P., Ward, J., DiMarzio, N., Jarvis, S., Moretti, D.J.: Passive acoustic detection and localisation of sperm whales (Physeter Macrocephalus) in the tongue of the ocean. Applied Acoustics 67, 1091–1105 (2006) 6. Mellinger, D.K., Nieukirk, S.L., Matsumoto, H., Heimlich, S.L., Dziak, R.P., Haxel, J., Fowler, M., Meinig, C., Miller, H.V.: Seasonal occurrence of north atlantic right whale (Eubalaena glacialis) vocalizations at two sites on the scotian shelf. Marine Mammal Science 23, 856–867 (2007) 7. Yang, S., Li, Z., Wang, X.: Ship recognition via its radiated sound: The fractal based approaches. Journal of the Acoustic Society of America 11(1), 172–177 (2002) 8. Chen, C.H., Lee, J.D., Lin, M.C.: Classification of underwater signals using neural networks. Tamkang J. of Science and Engineering 3(1), 31–48 (2000) 9. Ghosh, J., Turner, K., Beck, S., Deuser, L.: Integration of neural classifiers for passive sonar signals. Control and Dynamic Systems - Advances in Theory and Applications 77, 301–338 (1996) 10. Howell, B.P., Wood, S., Koksal, S.: Passive sonar recognition and analysis using hybrid neural networks. In: Proc. of OCEANS 2003, September 2003, vol. 4, pp. 1917–1924 (2003) 11. Shi, Y., Chang, E.: Spectrogram-based formant tracking via particle filters. In: Proc. of the IEEE Int. Conference on Acoustics, Speech and Signal Processing, April 2003, vol. 1, pp. I–168–I–171 (2003) 12. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Prentice-Hall, Inc., Upper Saddle River (2006)
Line Detection Methods for Spectrogram Images Thomas A. Lampert, Simon E.M. O’Keefe, and Nick E. Pears Department of Computer Science, University of York, York, U.K. {tomal, sok, nep}@cs.york.ac.uk
Summary. Accurate feature detection is key to higher level decisions regarding image content. Within the domain of spectrogram track detection and classification, the detection problem is compounded by low signal to noise ratios and high track appearance variation. Evaluation of standard feature detection methods present in the literature is essential to determine their strengths and weaknesses in this domain. With this knowledge, improved detection strategies can be developed. This paper presents a comparison of line detectors and a novel linear feature detector able to detect tracks of varying gradients. It is shown that the Equal Error Rates of existing methods are high, highlighting the need for research into novel detectors. Preliminary results obtained with a limited implementation of the novel method are presented which demonstrate an improvement over those evaluated.
1 Introduction
Acoustic data received via passive sonar systems is conventionally transformed from the time domain into the frequency domain using the Fast Fourier Transform. This allows for the construction of a spectrogram image, in which time and frequency are variables along orthogonal axes and intensity is representative of the power received at a particular time and frequency. It follows from this that, if a source which emits narrowband energy is present during some consecutive time frames, a track, or line, will be present within the spectrogram. The problem of detecting these tracks is an ongoing area of research with contributions from a variety of backgrounds ranging from signal processing [1] and image processing [2, 3, 4] to expert systems [5]. This problem is a critical stage in the detection and classification of sources in passive sonar systems and the analysis of vibration data. Applications are wide and include identifying and tracking marine mammals via their calls [6, 7], identifying ships, torpedoes or submarines via the noise
This research has been supported by the Defence Science and Technology Laboratory (DSTL)1 and QinetiQ Ltd.2 , with special thanks to Duncan Williams1 for guiding the objectives and Jim Nicholson2 for guiding the objectives and also providing the synthetic data.
radiated by their mechanics [8, 9], distinguishing underwater events such as ice cracking [10] and earthquakes [11] from other types of source, meteor detection, and speech formant tracking [12]. The key step in all of these applications and systems is the detection of the low-level linear features. Traditional detection methods such as the Hough transform and the Laplacian line detector [13] degrade in performance when applied to low SNR images such as those tested in this paper. It is therefore valuable to conduct an evaluation of the standard line detection methods to assess their performance and determine their weaknesses and strengths, which will give insight into the development of novel detection methods for this area. We also evaluate the performance of two novel feature detectors, the "bar" detector and a Principal Component Analysis (PCA) supervised learning detector [2]. The problem is compounded not only by the low Signal to Noise Ratio (SNR) in spectrogram images but also by the variability of the structure of the tracks. This can vary greatly, including vertical straight tracks, sloped straight tracks, sinusoidal type tracks and relatively random tracks. A good detection strategy, when applied to this area, should be able to detect all of these. A variety of standard line detectors have been proposed in the image analysis literature, e.g. the Hough transform, the Laplacian filter and convolution. There are also methods from statistical modelling such as Maximum Likelihood detection. Nayar et al. [14] describe a more recent, parametric detector, proposing that a feature manifold can be constructed using a model-derived training set (in this case a line model) which has been projected into a lower dimensional subspace through PCA. The closest point on this manifold is used to detect a feature's presence within a windowed test image. This paper is laid out as follows: in Section 2 we present the detection methods which have been evaluated with respect to spectrogram images and outline a novel bar detector. In Section 3 the results of these feature detectors applied to spectrogram images are presented and discussed. Finally, we present our conclusions in Section 4.
2 Method
The following feature detection methods found in the literature are applied to spectrogram track detection in this evaluation: the Hough transform applied to the original grey scale spectrogram image, the Hough transform applied to a Sobel edge detected image, Laplacian line detection [13], parametric feature detection [14], pixel value thresholding [13], the Maximum Likelihood Estimator (MLE) [15] and convolution of line detection masks [13]. Together with these we also test two novel methods: the bar method, presented below, and PCA based feature learning, which is described in [2]. Parametric feature detection [14] was found to be too computationally expensive to fully evaluate (on such a large data set), although initial experimental results proved
promising. Three examples of synthetic spectrogram images are presented in Fig. 1.

Fig. 1. Examples of synthetic spectrogram images exhibiting a variety of feature complexities at a SNR of 16 dB
2.1 Bar Detector
Here we describe a simple line detection method which is able to detect linear features at a variety of angles, widths and lengths within an image. It is proposed that this method will also correctly detect linear structure within 2D non-uniform grid data and can easily be extended to detect structure within 3D point clouds. The method allows for the determination of the detected line's angle. It can also be easily extended to detect a variety of shapes, curves, or even disjoint regions. Initially we outline the detection of a line with a fixed length, along with its angle, and subsequently we extend this to the determination of its length. We define a rotating bar which is pivoted at one end to a pixel, g = [x_g, y_g] where g ∈ S, in the first row of a time updating spectrogram image, S, where s = [x_s, y_s], and extends in the direction of the l previous observations, see Fig. 2. The values of the pixels, F = {s ∈ S : P_l(s, θ, l) ∧ P_w(s, θ, w)}, Eq. (1), which lie under the bar are summed, Eq. (2),

\[ P_l(s,\theta,l) \iff 0 \le [\cos\theta,\ \sin\theta]\,[s-g]^T < l, \qquad P_w(s,\theta,w) \iff \bigl|[-\sin\theta,\ \cos\theta]\,[s-g]^T\bigr| < \tfrac{w}{2}, \tag{1} \]
where θ is the angle of the bar with respect to the x axis (varied between −π/2 and π/2 radians), w is the width of the bar and l is its length. To reduce the computational load of determining P_w(s, θ, w) and P_l(s, θ, l) for every point in the spectrogram, s can be restricted to x_s = x_g − (l + 1), . . . , x_g + (l − 1) and y_s = y_g, . . . , y_g + (l − 1) (assuming the origin is in the bottom left of the spectrogram), and a set of masks can be derived prior to runtime to be convolved with the spectrogram.
Fig. 2. The bar operator with width w, length l and angle θ
\[ B(\theta,l,w) = \frac{1}{|F|} \sum_{f \in F} f. \tag{2} \]
The bar is rotated through 180 degrees, calculating the underlying summation at each Δθ. Normalising the output of B(θ, l, w), Eq. (3), forms a brightness invariant response, B̄(θ, l, w) [14], which is also normalised with respect to the background noise,

\[ \bar{B}(\theta,l,w) = \frac{1}{\sigma(B)}\left[B(\theta,l,w) - \mu(B)\right]. \tag{3} \]
Once the rotation has been completed, statistics regarding the variation of B(θ, l, w) can be calculated to enable the detection of the angle of any underlying lines which pass through the pivoted pixel g, for example the maximum response, Eq. (4), or the standard deviation, Eq. (5),

\[ R(l) = \arg\max_{\theta} \bar{B}(\theta,l,w), \tag{4} \]

\[ R(l) = \sqrt{\frac{1}{N} \sum_{\theta=-\frac{\pi}{2}}^{\frac{\pi}{2}} \left(\bar{B}(\theta,l,w) - \mu(\bar{B})\right)^2}, \tag{5} \]
where μ(B̄) is the mean response and N = π/Δθ. Assuming that the noise present in a local neighbourhood of a spectrogram image is random, the resulting responses will exhibit a low variation and therefore a low standard deviation. Conversely, if there is a line present the responses will exhibit a peak in one configuration along with a high standard deviation, as shown in Fig. 3. Repeating this process, pivoting on each pixel, g, in the first row of a spectrogram image allows for the detection of any lines which appear during time updates. Thus, we have outlined a method for the detection of the presence and angle of a fixed length line within a spectrogram.
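To make the procedure concrete, the following Python sketch evaluates the bar response for a single pivot pixel. The pixel sampling under the bar, the choice of Δθ and the use of the global spectrogram mean and standard deviation as the background statistics μ(B), σ(B) are simplifying assumptions of ours, not details fixed by the method above.

```python
import numpy as np

def bar_responses(S, col, l=21, d_theta=np.deg2rad(5)):
    """Rotating-bar sketch for a pivot pixel g in the first (most recent) row
    of spectrogram S (rows = time, columns = frequency).  For each angle the
    pixels under a one-pixel-wide bar of length l are averaged (Eq. 2) and
    normalised (Eq. 3); Eqs. (4) and (5) then summarise the rotation."""
    thetas = np.arange(-np.pi / 2, np.pi / 2, d_theta)
    rows, cols = S.shape
    mu, sigma = S.mean(), S.std() + 1e-12          # assumed background statistics
    ks = np.arange(l)
    B_bar = np.empty(len(thetas))
    for i, theta in enumerate(thetas):
        # sample l points along the bar, clipped to the image for simplicity
        r = np.clip(np.round(ks * np.cos(theta)).astype(int), 0, rows - 1)
        c = np.clip(col + np.round(ks * np.sin(theta)).astype(int), 0, cols - 1)
        B = S[r, c].mean()                         # Eq. (2): mean under the bar
        B_bar[i] = (B - mu) / sigma                # Eq. (3): brightness-invariant response
    theta_hat = float(thetas[np.argmax(B_bar)])    # Eq. (4): angle of the peak response
    spread = float(B_bar.std())                    # Eq. (5): spread over all angles
    return theta_hat, float(B_bar.max()), spread
```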
Fig. 3. The mean response of the bar when it is centred upon a vertical line 21 pixels in length (of varying SNRs) and rotated. The bar is varied in length between 3 and 31 pixels.
We now extend this to facilitate the detection of the length, l; the detection of the width, w, is synonymous with this and therefore we concentrate on the detection of lines with a fixed width w = 1 for simplicity. To detect the line's length we repeat the calculation of R(l) over a range of lengths l. Evaluating these as l is increased, it is possible to detect when the length of any underlying line has been exceeded. The term B̄(θ, l) in Eqs. (4) and (5) is dependent on the length of the bar; as this increases and extends past the line, it follows that the peak in the response R(l) will decrease, Fig. 3. The length of a line can thus be detected using Eq. (6),

\[ l_0 = \arg\max_{l} R(l). \tag{6} \]
A test to determine whether l_0 corresponds to a line or noise is defined as R(l_0) > t, where t is a threshold above the response obtained when the bar is not fully aligned. In this sense only one line is detected per bar rotation; however, implementing a recursive arg max would allow multiple detections.
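A minimal sketch of this length search, assuming the per-length responses R(l) have already been computed (here replaced by synthetic values for illustration):

```python
import numpy as np

def detect_length(R_per_length, lengths, t):
    """Pick the bar length maximising R(l) (Eq. 6) and accept it only if the
    response exceeds the noise threshold t, as in the test R(l0) > t."""
    i0 = int(np.argmax(R_per_length))
    return (int(lengths[i0]), float(R_per_length[i0])) if R_per_length[i0] > t else None

# Illustration with made-up responses peaking at l = 21:
lengths = np.arange(3, 32, 2)
R = np.exp(-((lengths - 21) / 6.0) ** 2)
print(detect_length(R, lengths, t=0.5))            # -> (21, 1.0)
```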
3 Results
In this section we present a description of the test data and the results obtained during the experiments.
3.1 Data
The methods were tested on a set of 730 spectrograms generated from synthetic signals 200 seconds in length with a sampling rate of 4,000 Hz. The spectrogram resolution was taken to be 1 sec with 0.5 sec overlap and 1 Hz per FFT bin. These exhibited SNRs ranging from 0 to 8 dB and a variety of track appearances, ranging from constant frequencies and ramp-up frequencies (with a gradient range of 1 to 16 Hz/sec at 1 kHz) to sinusoidal tracks (with periods of 10, 15 and 20 sec and amplitudes ranging from 1-5% of the
centre frequency). The test set was scaled to have a maximum value of 255 using the maximum value found within a training set (except when applying the PCA detector, when the original spectrogram values were used). The ground truth data was created semi-automatically by thresholding (where possible) high SNR versions of the spectrograms. Spurious detections were then eliminated and gaps filled in manually.
3.2 Results
The parameters used for each method are as follows. The Laplacian and convolution filter sizes were 3x3 pixels. The threshold parameters for the Laplacian, bar, convolution and pixel thresholding were varied between 0 and 255 in steps of 0.2. Using a window size of 3x21 pixels, the PCA threshold ranged from 0 to 1 in increments of 0.001. The bar method's parameters were set to w = 1 and l = 21. The class probability distributions for the MLE were estimated using a gamma pdf for the signal class and an exponential pdf for the noise class. The PCA method was trained using examples of straight line tracks and noise only. The Receiver Operator Curves (ROC) were generated by varying a threshold parameter which operated on the output of each method - pixel values above the threshold were classified as signal and otherwise noise. The ROC curves for the Hough transforms were calculated by varying the parameter space peak detection threshold. True Positive Rates (TPR) and False Positive Rates (FPR) were calculated using the number of correctly/incorrectly detected signal and noise pixels. The MLE detector highlights the problem of high class distribution overlap and variability, achieving a TPR of 0.0508 and a FPR of less than 0.0002. This rises to a TPR of 0.2829 and a FPR of 0.0162 when the likelihood is evaluated within a 3x3 pixel neighbourhood (as no thresholding is performed, ROCs for these methods are not presented). It can be seen in Fig. 4 that the threshold and convolution methods achieve almost identical performance over the test set, with the Laplacian and Hough-on-Sobel line detection strategies achieving considerably less and the Hough on the grey scale image performing the worst. The PCA supervised learning method proved more effective than these, and performed comparably with thresholding and line convolution, marginally exceeding both within the FPR range of 0.15-0.4. As previously mentioned, the PCA method was trained using vertical, straight track examples only, limiting its sinusoidal and gradient track detection abilities. It is thought that with proper training, this method could improve further. Due to time restrictions, we present preliminary results obtained with the bar method, fixing the bar length to 21 and the width to 1. At a FPR of 0.5 it compares with the other evaluated methods; below a FPR of 0.45 this method provides the best detection rates. It is thought that the Hough on edge transform outperformed the Hough on grey scale transform due to the reduction in noise occurring from the application
of an edge detection operator. However, both of these performed considerably less well than the other methods due to their limitation to detecting straight lines.

Fig. 4. Receiver Operator Curves of the evaluated detection methods
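For reference, the pixel-level ROC points described in this section can be computed along the following lines; the array names and the assumption that the ground truth is available as a boolean mask of track pixels are ours.

```python
import numpy as np

def roc_points(score_map, ground_truth, thresholds):
    """Sweep a threshold over a detector's output map: pixels above the
    threshold are classified as signal, the rest as noise, and TPR/FPR are
    accumulated against the ground-truth track mask."""
    gt = np.asarray(ground_truth, dtype=bool)
    P, N = gt.sum(), (~gt).sum()
    points = []
    for t in thresholds:
        detected = score_map > t
        tpr = (detected & gt).sum() / P            # true positive rate
        fpr = (detected & ~gt).sum() / N           # false positive rate
        points.append((fpr, tpr))
    return points
```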
4 Conclusion
In this paper we have presented a performance comparison of line detection methods present in the literature applied to spectrogram track detection. We have also presented and evaluated a novel line detector. Preliminary testing shows performance improvements over standard line detection methods when applied to this problem. These results are expected to improve further when the multi-scale ability is employed. Thresholding is found to be very effective, and it is believed that this is so because spectrograms with a SNR of 3 dB or more constitute 70% of the test database, circumstances which are ideal for a simple method such as thresholding. When lower SNRs are encountered, however, it is believed that thresholding will fall behind more sophisticated methods. Also, thresholding only provides a set of disjoint pixels and therefore a line detection stage is still required. It is noted that the PCA learning method was trained using examples of straight tracks but was evaluated upon a data set containing a large number of tracks with sinusoidal appearance, reducing its effectiveness. The evaluation of standard feature detection methods has highlighted the need to develop improved methods for spectrogram track detection. These should be more resilient to low SNR, invariant to non-stationary noise and allow for the detection of varying feature appearances. Improving first stage detection methods reduces the computational burden and improves the detection performance of higher level detection/tracking frameworks such as those presented in [2, 1]. A detection method may not outperform others alone; however, it may have desirable properties for
the framework in which it is used and therefore, in this case, provide good detection rates.
References 1. Paris, S., Jauffret, C.: A new tracker for multiple frequency line. In: Proc. of the IEEE Conference for Aerospace, vol. 4, pp. 1771–1782. IEEE, Los Alamitos (2001) 2. Lampert, T.A., O’Keefe, S.E.M.: Active contour detection of linear patterns in spectrogram images. In: Proc. of the 19th International Conference on Pattern Recognition (ICPR 2008), Tampa, Florida, USA, December 2008, pp. 1–4 (2008) 3. Abel, J.S., Lee, H.J., Lowell, A.P.: An image processing approach to frequency tracking. In: Proc. of the IEEE Int. Conference on Acoustics, Speech and Signal Processing, March 1992, vol. 2, pp. 561–564 (1992) 4. Martino, J.C.D., Tabbone, S.: An approach to detect lofar lines. Pattern Recognition Letters 17(1), 37–46 (1996) 5. Mingzhi, L., Meng, L., Weining, M.: The detection and tracking of weak frequency line based on double-detection algorithm. In: Int. Symposium on Microwave, Antenna, Propagation and EMC Technologies for Wireless Communications, August 2007, pp. 1195–1198 (2007) 6. Morrissey, R.P., Ward, J., DiMarzio, N., Jarvis, S., Moretti, D.J.: Passive acoustic detection and localisation of sperm whales (Physeter Macrocephalus) in the tongue of the ocean. Applied Acoustics 67, 1091–1105 (2006) 7. Mellinger, D.K., Nieukirk, S.L., Matsumoto, H., Heimlich, S.L., Dziak, R.P., Haxel, J., Fowler, M., Meinig, C., Miller, H.V.: Seasonal occurrence of north atlantic right whale (Eubalaena glacialis) vocalizations at two sites on the scotian shelf. Marine Mammal Science 23, 856–867 (2007) 8. Yang, S., Li, Z., Wang, X.: Ship recognition via its radiated sound: The fractal based approaches. Journal of the Acoustic Society of America 11(1), 172–177 (2002) 9. Chen, C.H., Lee, J.D., Lin, M.C.: Classification of underwater signals using neural networks. Tamkang J. of Science and Engineering 3(1), 31–48 (2000) 10. Ghosh, J., Turner, K., Beck, S., Deuser, L.: Integration of neural classifiers for passive sonar signals. Control and Dynamic Systems - Advances in Theory and Applications 77, 301–338 (1996) 11. Howell, B.P., Wood, S., Koksal, S.: Passive sonar recognition and analysis using hybrid neural networks. In: Proc. of OCEANS 2003, September 2003, vol. 4, pp. 1917–1924 (2003) 12. Shi, Y., Chang, E.: Spectrogram-based formant tracking via particle filters. In: Proc. of the IEEE Int. Conference on Acoustics, Speech and Signal Processing, April 2003, vol. 1, pp. I–168–I–171 (2003) 13. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Prentice-Hall, Inc., Upper Saddle River (2006) 14. Nayar, S., Baker, S., Murase, H.: Parametric feature detection. Int. J. of Computer Vision 27, 471–477 (1998) 15. Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification. Wiley-Interscience Publication, Hoboken (2000)
Morphological Analysis of Binary Scene in APR Integrated Environment
Marek Krótkiewicz and Krystian Wojtkiewicz
Institute of Mathematics and Informatics, University of Opole
[email protected], [email protected]
Summary. This paper describes the principles of binary scene [1] morphological analysis in a script based application - APR (Analysis, Processing and Recognition). The aim of the method is to find objects on the scene and then to describe their basic features such as edges, neighbors and surface [2]. The algorithm construction gives benefits in terms of speed as well as computation cost, while at the same time being capable of presenting a number of attribute values for the scene and each of the objects. Some practical applications of the algorithm are also shown.
1 Introduction
Morphological analysis is a very useful tool to classify objects in terms of their shape and structure. A set of attributes obtained from morphological analysis is the key element in the process of recognition and therefore it has to be properly built, separately for each recognition case. Success may only be achieved if the classes of the objects under the recognition process are carefully examined and the most characteristic attributes are found. The wide diversity of objects and classes makes it almost impossible to create a universal set of attributes used in the recognition process. The morphological analysis presented in this article has been developed to optimize both criteria of generality and data processing performance. These criteria are opposite to one another in the sense that optimizing one of them is done at the cost of the other. This is the outcome of the fact that the wider the set of attributes being created, the more complicated the algorithm that has to be used, which leads to higher computing complexity. The method presented here has been created in the APR (Analysis, Processing and Recognition) environment. The main aim of this system was to deliver methods of massive picture processing not only in the laboratory, but also to be implemented in the handling of real problems.
2 Algorithm Construction
The algorithm assumes that the scene is built of objects (class ImageObject), which consist of edges (class ObjectEdge) and surfaces. The edges may be
divided into only one external edge and possibly many internal edges. The surface is understood as a linear area of directly connected pixels [3] [4]. A more specific structure of each of the elements will be provided in the further part of the article. Five functions can be distinguished in the algorithm from the behavioral point of view, as shown in Table 1.

Table 1. Outcome attribute for the scene analysis
1. GetObjects
   1.1 FindNextObject
   1.2 FindEdge <external>
      1.2.1 FindNeighbour <external>
   1.3 FindSurface
      1.3.1 FindEdge
         1.3.1.1 FindNeighbour
All the functions treat the surface as an eight-directional plane, which means that each pixel has 8 neighbours. The main function GetObjects is used to conduct the algorithm with the help of the supporting functions FindNextObject, FindEdge, FindNeighbour and FindSurface.
FindNextObject. This function is used to find objects on the scene. It searches through the pixels of the scene until it finds a pixel that is assigned to an object. Various kinds of pixels may be found on the scene, and they are categorized into the groups of background, objects and edges. Thanks to the special organization of the functions, an object that has been investigated by the other functions, working on edges and surfaces, is transformed in such a way that it becomes invisible to the function FindNextObject. This solution simplifies and speeds up the function, because it does not have to check whether a pixel is part of a previously investigated object.
FindEdge. The function FindEdge is launched after an object has been found by the function FindNextObject. It is used to designate the external edge by sliding a tangent along the verge of the object using the function FindNeighbour. The algorithm is based on the information of where the previous edge pixel was in relation to the pixel being examined. It looks for the next edge pixel starting from the direction tilted 90 degrees back from the last move, e.g. if the last move direction was number 6, then the search for the next external edge pixel is started from number 4 and increases [Fig. 1]. The search for an internal edge is computed in the same manner, but it tilts the other way.
Fig. 1. Visualisation of FindEdge function
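The edge-following step described above can be sketched as below; the clockwise direction numbering, the start-pixel initialisation and the stopping rule are our assumptions (APR's own direction codes, such as "direction 6", are not reproduced), so this only illustrates the general tangent-sliding idea.

```python
import numpy as np

# 8-neighbour offsets indexed clockwise, starting to the right (assumed numbering).
OFFSETS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def trace_external_edge(mask, start):
    """Follow the external edge of the object containing `start` in a binary
    mask (True = object pixel): the search for the next edge pixel resumes two
    direction steps (90 degrees) behind the previous move, as in FindEdge."""
    edge, current, direction = [start], start, 0
    while True:
        for i in range(8):
            d = (direction + 6 + i) % 8                     # start 90 deg behind the last move
            r, c = current[0] + OFFSETS[d][0], current[1] + OFFSETS[d][1]
            if 0 <= r < mask.shape[0] and 0 <= c < mask.shape[1] and mask[r, c]:
                current, direction = (r, c), d
                break
        else:                                               # isolated pixel, no neighbour found
            return edge
        if current == start:                                # contour closed
            return edge
        edge.append(current)

# Example: the external edge of a 3x3 square, starting at its top-left pixel.
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
print(trace_external_edge(mask, (1, 1)))
```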
FindSurface. This function is used to mark out the surface of the investigated object, and it uses the information gathered by the edge analysis. It is possible that the object has holes, i.e. pixels in the colour of the background inside the object; therefore the function may call the function FindEdge to designate an internal edge.
Outcome attributes. The algorithm has been constructed in a way that keeps the computing cost and time to a minimum. It runs through the scene only once and results in attribute sets that describe the scene and all of the objects on it. The attributes computed for the scene are shown in Table 2. During the algorithm run, each object found on the scene gains its own number and a unique set of attributes, which is presented in Table 3. This set is universal, and each of the attributes may be used to compute further attributes, specific for a particular task.

Table 2. Outcome attributes for the scene analysis (name of the attribute - description of the attribute)
Number of pixels - the amount of pixels on the scene
Width - the width of the scene
Height - the height of the scene
Filled field - number of pixels that are assigned to objects
Empty field - the sum of pixels building holes in the objects on the scene
Length of external edge - the number of pixels that build external edges of objects
Length of internal edges - the number of pixels that build internal edges of objects
Length of all edges - the sum of the two previous attributes
Number of edges - number of all separable edges
Number of holes - number of holes that can be designated on the scene
Number of objects - total number of all objects on the scene

Table 3. Outcome attributes for the object analysis (name of the attribute - description of the attribute)
Object number - the unique number of the object
Filled surface - the number of pixels filled by the object on the scene
Empty surface - the number of pixels that describes the size of holes in the object
Number of holes - number of holes (empty spaces) that can be found inside the object
Edge adherent - number of pixels that are assigned to the edge and at the same time build the frame of the scene
Centre of gravity - coordinates of the centre of gravity computed for the object
Start pixel - coordinates of the very first pixel found by the function FindNextObject
(X1, Y1) - coordinates of the top-left corner of the rectangle circumscribed on the object
(X2, Y2) - coordinates of the down-right corner of the rectangle circumscribed on the object
Length of external edge - the number of pixels that build the external edge of the object
Length of internal edge - the number of pixels that build internal edges of the object
Length of all edges - the sum of the two previous attributes
Number of edges - the number of all edges (external and internal) assigned to the object
Internal edges - the list of internal edges containing the length of the edge and the coordinates of its start pixel
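A compact way to reproduce a few of these per-object attributes (not APR's single-pass algorithm itself) is to label the connected components and derive the measurements from the labels; the use of SciPy and the attribute subset chosen below are our own choices.

```python
import numpy as np
from scipy import ndimage

def object_attributes(mask):
    """Compute a subset of the Table 3 attributes (filled surface, centre of
    gravity, circumscribed rectangle, number of holes) for every object in a
    binary scene, using 8-connectivity as in APR."""
    labels, n_objects = ndimage.label(mask, structure=np.ones((3, 3), dtype=int))
    attributes = []
    for obj_id, sl in enumerate(ndimage.find_objects(labels), start=1):
        obj = labels[sl] == obj_id
        cy, cx = ndimage.center_of_mass(obj)
        # holes = background components inside the bounding box not touching its border
        holes, n_bg = ndimage.label(~obj)
        on_border = set(np.unique(np.concatenate(
            [holes[0, :], holes[-1, :], holes[:, 0], holes[:, -1]]))) - {0}
        attributes.append({
            "object number": obj_id,
            "filled surface": int(obj.sum()),
            "centre of gravity": (cy + sl[0].start, cx + sl[1].start),
            "(X1, Y1)-(X2, Y2)": (sl[1].start, sl[0].start, sl[1].stop - 1, sl[0].stop - 1),
            "number of holes": int(n_bg - len(on_border)),
        })
    return n_objects, attributes
```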
3 Practical Applications
The presented set of basic outcome attributes may be used to build an almost infinite number of more complex attributes, customized to more specific tasks. One of the classic usages may be recognizing digits and letters. Figure 2 shows digits and some exemplary letters. One immediately discovers that the basic attributes can be used to form rules that assign digits to appropriate subsets. The rule based on the number of holes in the object segments all digits into three disjoint sets. Another attribute, which can be built from the object's centre of gravity and the geometrical centre of the rectangle circumscribed on the object, tells about the symmetry of the object. At the same time we can compute the degree and direction of asymmetry, which can be shown as the vector AB [Fig. 3]. Using this information, and by creating some more attributes from the basic ones, we can unambiguously assign the investigated digit to one of 10 classes. Those attributes may be built in many different ways, but OCR is not the main subject of this paper. Morphological analysis of the binary scene has already been used to solve practical problems, for example in agricultural and forestry engineering. To be more specific, the method has been used for automatic seed classification. Cooperation in this field also covers problems of blending, detection of mechanical damage of seeds, as well as problems of designating the percentage rate of each item in a vegetable mix [5] [6]. These issues are very important from the practical and economical points of view. Implementation in the integrated environment of analysis and processing of images APR [Fig. 4] allows them to be solved in a continuous way, applicable to industrial, massive designation of the morphology of investigated objects. The mechanism of script construction allows the implementation of repeatable solutions that can easily be applied to process pictures acquired from a camera or video camcorder and to prepare them for binary analysis without prior manual preparation.

Fig. 2. Digits and exemplary letters in the binary scene representation
Fig. 3. Vector AB shown on the digit 1
Fig. 4. Exemplary APR application window
4 Conclusions
The morphological analysis of a binary scene presented in this paper was constructed mainly to optimize computing costs and speed, which was achieved by a single-pass scene analysis which provides a wide range of outcome attributes. The algorithm also has some more subtle properties, whose description will be the subject of another paper. One of them is the ability to trace an object located inside another object, understood as the situation when one object is entirely surrounded by the other object. Such nesting is properly identified regardless of the number or depth of objects. This property is quite valuable when the scene contains objects covered by each other. Most important of all is the fact that this analysis is part of a bigger
environment built from more than one hundred analysis and image processing instructions. Practical implementations cover many domains, but the presented algorithms have not been optimized for any of them. Their universality is one of the most important properties. Current activities are oriented towards developing and implementing new functionalities in APR, with which the presented analysis might cooperate. The binary scene morphology algorithm will be used to create a function capable of recognizing shapes of the edge lines, or, to be more specific, the edge line shape morphology. This means that the edge line of the objects will be morphologically analyzed and the expected outcome will contain information about the types of curves that build this line.
References 1. Wladyslaw, S.: Metody reprezentacji obraz´ ow cyfrowych. Akademicka Oficyna Wydawnicza PLJ, Warszawa (1993) (in Polish) 2. Chora´s, R.: Komputerowa wizja - Metody interpretacji i identyfikacji obiekt´ ow. Akademicka Oficyna Wydawnicza EXIT, Warszawa (2005) (in Polish) 3. Kurzy´ nski, M.: Rozpoznawnie obraz´ ow. Oficyna Wydawnicza Politechniki Wroclawskiej, Wroclaw (1997) (in Polish) 4. Tadeusiewicz, R., Korohoda, P.: Komputerowa analiza obraz´ ow. Wydawnictwo Fundacji Post¸epu Telekomunikacji, Krak´ ow (1997) (in Polish) 5. Rut, J., Szwedziak, K., Tukiendorf, M.: Okre´slenie pr¸edko´sci poruszania si¸e szkodnik´ ow z wykorzystaniem komputerowej analizy obrazu. In˙zynieria Rolnicza nr 2 (90), Krak´ ow, 265–269 (2007) (in Polish) 6. Szwedziak, K.: Stanowisko do komputerowej analizy obrazu produkt´ ow rolnospo˙zywczych. In˙zynieria Rolnicza nr 13 (88), Krak´ ow, 429–435 (2006) (in Polish)
Digital Analysis of 2D Code Images Based on Radon Transform
Rafal Tarlowski and Michal Choraś
Image Processing Group, Institute of Telecommunications, University of Technology & Life Sciences, S. Kaliskiego 7, 85-791 Bydgoszcz, Poland
[email protected]
Summary. In this paper a novel method of analyzing and recognizing 2D barcode images is proposed. Our system is based on Radon transform. Our own code specification has been created. The major contribution of this work is dominant orientation determination based on Radon transform. The whole process of using 2D barcodes is presented and evaluation results are shown.
1 Introduction
One of the important issues in industry is digital item recognition. The easiest way to determine what kind of element we are dealing with is to put a marker on it, for example a barcode. This method enables reading an identifier placed on the item and linking it with describing information. In many cases it is necessary to put additional information on the item, for example: manufacturer, name of the product or destination place (shipping company). Putting such information in a barcode can make it too big to print on the product. The necessity of putting a huge amount of information on items forced the creation of two-dimensional (2D) code images. The goal of this article is to propose a solution to analyze and recognize 2D code images based on the Radon transform. Our method processes 2D code images acquired in a testing environment with static light. Every code can be rotated and placed anywhere on the image on a white background. Each word of the code image must be a square, as in Data Matrix, Aztec Code or QR Code [2][3][4]. We have created our own 2D code image specification, based on Data Matrix. Our code structure is easy to encode and decode, but has no elements or techniques to detect any bit errors. Our method can change rotated and noised code images to a form that enables them to be read easily by the decoder.
2 Code Specification
The proposition of our code (Fig. 1) is based on Data Matrix. The smallest logical part of the image is one black or white square (quant); quants are interpreted as
bits. Black quants are interpreted as logical "1" and white quants as logical "0". Eight squares placed in one line form a code word (CW), which is interpreted as a byte. Our code image contains five blocks, presented in Figure 1:
1. Outline.
2. Information field - one code word (CW).
3. End code word.
4. Code filling part.
5. Data field.
Our 2D code image contains a characteristic outline consisting of alternating white and black squares on the top and on the right side of the code, and solid black lines on the bottom and on the left side of the image. Thanks to this feature it is easy to determine the basic image orientation (0, 90, 180, 270). The outline can also be used to determine the size of one black or white square (quant) on the image. Our code image is always a square, which is helpful in quant size determination. The information field contains the number of code word columns in the code image as a binary coded number. In the example image (Fig. 1) we can find two columns of code words, and the information field contains 00000010b (2d). In the data field we use only one coding type, ASCII. Each CW needs to be changed to an alphanumeric sign taken from the ASCII table; the number of the sign is encoded in the CW. To read all the data contained in the code image, the decoder needs to read the squares line by line. If the whole image is not filled with data, it is necessary to place the filling part of the code. This part contains alternating black and white squares. Filling the image with such an element is helpful for orientation detection. Thanks to this we eliminate blank code elements with no information that could otherwise be used for the Radon transform. In order to perform experiments we have created an application for code image generation (a presentation of this application is out of the scope of this paper).
Fig. 1. Code image example, used for experiments
3 General Description of 2D Barcode Image Analysis System
Our proposition of image processing of a 2D code image consists of the following steps:
1. Preliminary processing.
2. Dominant orientation determination based on the Radon transform.
   a) Dominant orientation reading.
   b) Image rotation.
3. Quantization.
   a) Quant size reading.
   b) Image quantization.
4. Data reading from the image.
All steps of image processing will be described in detail in further sections, except point 4. This step of data reading is connected with the code specification, which is not a subject of this article. The code specification description was needed to show that our proposition is intended for code images based on a square-like quant.
3.1 Preliminary Processing
Acquired 2D code images often contain noise. In order to remove it and unnecessary information (for example from an RGB image), we had to threshold the image to get only black and white pixels. To make the image easier to process, we also changed it to the BMP format. The threshold depends on the brightness of the bitmap. We assumed that images are always made under the same lighting, so we could designate it empirically. A secondary objective of preliminary processing is to prepare the image for rotation. It is necessary to place the code image in the centre of the bitmap. To achieve this goal, we need to scan the top line of the image, seeking the first occurrence of a black pixel. Repeating this action from each side of the image gives information on the image location and enables chopping off the unnecessary white areas of the bitmap. Figure 2 presents the segmentation process. Gray lines show the image scanning lines. Circles show the first occurrence of a black pixel on each scanning side. This helps to determine the lines to chop, shown as dotted gray lines. The bitmap, after preliminary processing, can be used for further image processing, whose goal is to determine the dominant orientation on the basis of the Radon transform.

Fig. 2. Segmentation process
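A minimal sketch of this preliminary step (the fixed threshold value and the array conventions are placeholders, since the paper only says the threshold was chosen empirically for constant lighting):

```python
import numpy as np

def preprocess(gray, threshold=128):
    """Binarise the grey-level image with a fixed, empirically chosen
    threshold and crop the white margins around the code, mimicking the
    side-by-side scan for the first black pixel described above."""
    binary = gray < threshold                      # True = black (code) pixels
    rows = np.flatnonzero(binary.any(axis=1))      # rows containing black pixels
    cols = np.flatnonzero(binary.any(axis=0))      # columns containing black pixels
    if rows.size == 0:
        return binary                              # blank image: nothing to crop
    return binary[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```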
3.2 Radon Transform for Dominant Orientation Determination
In order to determine the dominant orientation it is necessary to use the Radon transform, defined as

\[ R(\rho,\theta) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(x,y)\,\delta(\rho - x\cos\theta - y\sin\theta)\,dx\,dy. \tag{1} \]
Because the analyzed images are in digital form, we need to discretize the Radon transform. We have used the solution proposed by [1], and we can use the form

\[ R(\rho_r,\theta_t) \approx \Delta s \sum_{k=0}^{S_{\rho,\theta}-1} g(x_k^a, y_k^a), \tag{2} \]
where

\[ x_k^a = \left[\rho_r \cos\theta_t - s_k \sin\theta_t - x_{\min}\right], \tag{3} \]

\[ y_k^a = \left[\rho_r \sin\theta_t + s_k \cos\theta_t - y_{\min}\right], \tag{4} \]

and
\[ \Delta s = \begin{cases} \dfrac{1}{|\cos\theta_t|}, & |\sin\theta_t| \le \sin\frac{\pi}{4}, \\[6pt] \dfrac{1}{|\sin\theta_t|}, & |\sin\theta_t| > \sin\frac{\pi}{4}. \end{cases} \tag{5} \]
The authors of [1] proposed to compute S_{ρ,θ} to prevent the (x, y) coordinates from (3) and (4) from falling outside the image. For simplicity we accepted setting this value to M, where M is the image size. It is easier and faster to skip some iterations of the loop and check if the (x, y) coordinates are out of range than to compute S_{ρ,θ}. To find the dominant orientation, we need to find the maximum value of the Radon transform. The result of the transformation is in three dimensions and because of this it is not an easy task. In our image recognition we do not need the information about the distance between the image and the projection line. That is why it is possible to reduce the transformation result to two dimensions. We need to
find the maximum value for each radius to get the two-dimensional result. Thanks to this we get a function of radius and value. Searching for the maximum value in 2D is an easy task. Computing the Radon transform for large images takes a lot of time, so we decided to analyze only samples of the image. The authors of [1] defined their solution for square images, so the samples need to be squares. That is why the next step of image analysis is sampling. Each sample is pre-processed before it can be used to find the dominant orientation. First, we check if the sample contains at least one black pixel; if not, we drop the sample and take another. The second step is to make an edge image of each sample. Thanks to this the sample retains the most important information needed to determine the dominant orientation: edges. Third, we apply the Radon transform to each sample and determine the dominant orientation. After that we pick the most common value and the image can be rotated for further analysis.
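The orientation step can be sketched as follows; the angle and radius grids, the treatment of out-of-range samples and the collapse over the radius reflect our reading of Eqs. (2)-(5) and of the sampling procedure above, so details such as the coordinate origin are assumptions.

```python
import numpy as np

def discrete_radon(img, n_theta=180):
    """Discrete Radon transform in the spirit of Eqs. (2)-(5): for every angle
    and radius the pixel values along the projection line are accumulated,
    skipping samples that fall outside the square image."""
    M = img.shape[0]
    x_min = y_min = -(M // 2)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.arange(-(M // 2), M // 2)
    s = np.arange(-M, M)                               # S_rho,theta simplified to the image size
    R = np.zeros((len(rhos), len(thetas)))
    for t, theta in enumerate(thetas):
        ds = 1.0 / max(abs(np.cos(theta)), abs(np.sin(theta)))                 # Eq. (5)
        for r, rho in enumerate(rhos):
            x = np.round(rho * np.cos(theta) - s * np.sin(theta) - x_min).astype(int)  # Eq. (3)
            y = np.round(rho * np.sin(theta) + s * np.cos(theta) - y_min).astype(int)  # Eq. (4)
            ok = (x >= 0) & (x < M) & (y >= 0) & (y < M)                       # skip out-of-range samples
            R[r, t] = ds * img[y[ok], x[ok]].sum()                             # Eq. (2)
    return R, thetas

def dominant_orientation(edge_sample, n_theta=180):
    """Collapse the (rho, theta) map to one value per angle (maximum over the
    radius) and return the angle, in degrees, of the strongest projection."""
    R, thetas = discrete_radon(edge_sample, n_theta)
    return float(np.degrees(thetas[np.argmax(R.max(axis=0))]))
```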
3.3 Quantization
Quantization is the last step before reading information placed on the code image. We need to perform quantization, because image can still be distorted after rotation. Some edges of squares can be ragged, and quantization helps to determine if square should be black or white. Sample image before quantization is shown in Fig. 3. First and the most important task before quantization is to determine the size of black or white square, which is the quant of code image.
Fig. 3. Image before quantization
Fig. 4. Image after quantization process
To reach this goal it is necessary to scan the image from each side. We need to move a pointer along each line until we get the first black pixel. Black pixels are counted and the pointer is moved until it reaches a white pixel. After that, the pointer is moved to the next black pixel and the counting process begins again. The average size of a black square is the value being sought. This process is highly connected with the code specification, so sometimes it needs to be changed. When the quant size (B) is known, it is possible to start the quantization process. We need to scan the original (for the quantization process) image with squares whose width and height are B. Starting from the left side of the image we scan areas, jumping along the x axis by B pixels as long as a jump is possible. If a jump cannot be performed, we jump along the y axis by B pixels and start the process again from the left side of the image. Counting the white and black pixels in each area determines the colour of the square that should be put on the quantized image. If the area has more black pixels, we place a black square, and a white square if the area contains more white pixels. After this process the image is sharp, with data ready to be read (decoded). The process of reading data is highly connected to the code image specification, therefore it will not be described in this article. A sample image after the quantization process is shown in Fig. 4.
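The quant-size estimate and the block-wise majority vote can be sketched as below; treating the image as a boolean array with True for black pixels is our convention, and the run-length estimate shares the limitation of the scan described above (adjacent black quants merge into longer runs).

```python
import numpy as np

def estimate_quant_size(binary):
    """Rough estimate of the quant size B: the average length of the black
    runs found while scanning the rows of the cropped code image."""
    runs = []
    for row in binary.astype(np.uint8):
        padded = np.concatenate(([0], row, [0]))
        edges = np.flatnonzero(np.diff(padded))        # run starts and ends
        runs.extend(edges[1::2] - edges[0::2])
    return int(round(np.mean(runs))) if runs else 0

def quantize(binary, B):
    """Majority-vote quantization: scan the image in B x B blocks and replace
    each block by a single black (True) or white (False) quant."""
    rows, cols = binary.shape[0] // B, binary.shape[1] // B
    quants = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            block = binary[i * B:(i + 1) * B, j * B:(j + 1) * B]
            quants[i, j] = block.mean() > 0.5          # more black than white pixels
    return quants
```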
4 Experimental Results
In order to check the quality of the proposed method, we created an image base containing 100 photos of 25 different code images, whose specification was defined in Section 2 of this article.
Fig. 5. Verification: test 1 results
Fig. 6. Verification: test 2 results
The images contain from 10 to 42 columns of squares. Each code image was taken in a different orientation at a resolution of 640x480. All images were analyzed with the method presented in the article. The data encoded in the code images is known, so that the quality of the method can be checked. In the first verification test we compared the number of correctly decoded letters with all the letters encoded on the image. Figure 5 contains the results of this test: the percentage of images whose correct rate reached 100%, 85% and 70%, relative to all images taken for the experiment. In the second verification test we investigated the percentage of good images (those whose correct rate was above a set threshold) as a function of the number of columns in the code image. The results are shown in Fig. 6.
5 Summary
The major contribution of this paper is the development of a new, effective method for analyzing 2D code images based on the Radon transform. The transformation was used to determine the dominant orientation of the code image. During the experiments we noticed that the proposed method works well for images with 10 to 18 columns of squares. For such code images with dimensions 640x480, the correct rate was at a constant level. For images with 26 and 34 columns, it would be a good idea to put a self-correction code in the 2D code image. Those images that were not decoded properly (with 10 and 18 columns) were corrupted because of harsh acquisition conditions. We are now working on enhancing our code specification with correction mechanisms.
References 1. Hejazi, M.R., Shevlakov, G., Ho, Y.-S.: Modified Discrete Radon Transforms and Their Application to Rotation-Invariant Image Analysis, pp. 429–434. IEEE, Los Alamitos (2006) 2. Information technology – Automatic identification and data capture techniques – Data Matrix bar code symbology specification - ISO/IEC 16022:2006 3. Information technology – Automatic identification and data capture techniques – Aztec Code bar code symbology specification - ISO/IEC 24778:2008 4. Information technology – Automatic identification and data capture techniques – QR Code 2005 bar code symbology specification ISO/IEC 18004:2006
Diagnostically Useful Video Content Extraction for Integrated Computer-Aided Bronchoscopy Examination System
Rafal Jóźwiak, Artur Przelaskowski, and Mariusz Duplaga
Institute of Radioelectronics, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland
[email protected], [email protected]
II Department of Internal Medicine and Department of Cardiac Surgery, Jagiellonian University School of Medicine, ul. Skawinska 8, 31-066 Krakow, Poland
[email protected]
Summary. The problem of diagnostically important content selection in bronchoscopy video was the subject of the research reported in this paper. The characteristics of illegible and redundant bronchoscopy video content are presented and analyzed. A method for diagnostically important frame extraction from non-informative, diagnostically useless content is proposed. Our methodology exploits region of interest segmentation, feature extraction in the multiresolution wavelet domain and frame classification. An SVM with optimized kernels and quality criteria was applied for classification. The effectiveness of the proposed method was verified experimentally on a large image dataset containing about 1500 diversified video frames from different bronchoscopy examinations. The obtained results, with mean sensitivity above 97% and mean specificity of about 94%, confirmed the high effectiveness of the proposed method.1
1 Introduction
In recent years, the presentation and storage of different diagnostic examination results in the form of digitized image and video sequences has become more and more popular. Together with modern 3D visualization methods, advanced medical video techniques have brought a marked improvement in diagnosis and treatment efficiency, but also in the fields of medical education, training and presentation. Recent advances in video technology allow visual inspection for diagnosis or treatment of the inside of the human body with no or only very small scars. This trend is highly correlated with modern medicine's requirement of lowering the invasiveness of medical procedures while preserving effectiveness and safety [1]. The introduction of more sophisticated diagnostic and therapeutic endoscopy methods is
This work is a part of Bronchovid project, supported by a scientific R13 011 03 Grant from Ministry of Scientific Research and Information Technology, Poland.
closely related to this trend. Nevertheless, endoscopic examination procedures still require direct contact with the patient, and the result interpretation process is highly subjective and strongly depends on individual physician skills and knowledge. The awareness of the fact that medical errors themselves may be a meaningful cause of morbidity and mortality has accelerated the search for effective countermeasures [2]. Considering these facts, it is worth noting that the use of information technology tools may be a valuable remedy for the quality improvement of diagnostic procedures carried out during endoscopic examination.
1.1 Bronchoscopy
Bronchofiberoscopy is a basic diagnostic procedure used in pulmonology, involving visualization of the inside of a patient's lung. It becomes nearly obligatory when a lung neoplasm is suspected in the patient. The procedure is usually accompanied by other diagnostic modalities enabling the sampling of pathologic tissues for pathologic evaluation. The whole examination is carried out with the use of the bronchoscope - the endoscopy tool, formerly rigid, now mainly flexible, in the form of a thin, slender tube made of fiberoptic materials. The distal end of the fiberbronchoscope can be angulated with a lever at the head, allowing the fiberbronchoscope to be easily maneuvered within the airway [3]. Modern video technology has also exerted an influence on bronchoscopy examination procedures and equipment - the new video fiberbronchoscopes have an integral video camera system at the distal end, which is illuminated by optical fibers, and the real-time video image is observed on the screen by the physician. An unusually important factor is the wide skill level variation between different physicians in performance and interpretation. Assessment of bronchoscopic skills has not been standardized and currently relies solely on procedure logbooks and subjective letters of competency [4]. The influence of skill variation can be meaningfully reduced by making full use of modern information technology tools, elaborated in the form of computer-aided diagnosis and detection systems. The objective support of computer-based aiding tools that store, retrieve, communicate, select, emphasize, analyze, recognize, understand and visualize image-based diagnostic content can bring significant improvements in the process of supporting different types of medical procedures.
1.2 Bronchoscopy Video Content
Bronchoscopic examinations demonstrate many common features with natural video sequences, e.g. general image features and natural content perception, color space, textural features, data dynamics, dominant objects properties. Different parts of bronchoscopic examinations are distinguished by diversified movement characteristics - from slow motion to dynamic video with fast camera movement across variable diagnostic content. Among many different computer-based aiding procedures dedicated to modern
bronchoscopy requirements, the selection of diagnostically important content plays a crucial role, especially for the purpose of automatic summarization, fast scrolling and further automatic or semi-automatic computer-aided lesion detection. In bronchoscopy video there is typically a significant number of illegible, redundant video fragments, manifesting themselves in the form of unreadable and insignificant frames. These frames do not possess any diagnostic information and their content cannot be recognized or interpreted by any means, thus we can describe them as non-informative frames [5]. The appearance of these frames is strongly diversified due to their different origins and unlike arising mechanisms. Essentially, we can distinguish: out-of-focus frames (occurring due to a wrong camera position - the focus too far from or too close to the mucosa of the bronchi), blurred frames (motion blur due to rapid camera movement through the intrabronchial space), bloody frames (due to the presence of pathology or as a result of sampling suspected tissues for pathologic evaluation) and bubbled frames (as a result of camera lens cleaning). All these non-informative frames compose the dominant part of redundant and diagnostically useless content, which should be quickly and effectively discarded. Examples of non-informative and informative, diagnostically important frames are presented in Figure 1a.

Fig. 1. a) Examples of non-informative, useless frames (up) and informative, diagnostically important frames (down); b) Example of wavelet coefficients distribution for informative (up) and non-informative (down) frame
1.3 CAD Tools
Existing works concerning different forms of computer aiding tools for bronchoscopy are mainly concentrated on advanced visualization aspects like virtual bronchoscopy [6] [7] or video (camera motion) tracking [8] [9]. In [5] the authors proposed an efficient method for informative frame classification based on DFT transformation, texture feature extraction and data clustering. The presented results, with 97% accuracy on colonoscopy examinations, were very promising. Even though the authors suggested that the presented DFT based method is domain independent and can theoretically be used for all types of endoscopic video, we noticed that in the case of bronchoscopy examinations the difference in Fourier spectrum distribution between informative and
non-informative frames is significantly less diversified, which seems to pose a major limitation in practical application. This limitation can be overcome with the advantage of the multiresolution image representation provided by wavelet decomposition. The multiresolution nature of the Discrete Wavelet Transform (DWT) has proven to be a powerful tool for the efficient representation and analysis of image information content [10] and has found practical application in many CAD tools [11] [12]. The discrete wavelet transform of an image produces a multi-resolution, hierarchical representation where each wavelet coefficient represents the information content of the image at a certain resolution in a certain position. The significant content of the image manifests itself in diversified wavelet coefficient values - the wavelet coefficients have a high amplitude around the image edges and in the textured areas within a given spatial orientation [10]. The content of the wavelet plane can easily be summarized through various statistical analyses of the wavelet coefficients, providing a new, alternative, sometimes better and more suitable description of salient image features, relevant for further specific applications. The purpose of our work was to investigate the usefulness of wavelet domain textural features for diagnostically important content selection, taking into account the needs of a computer-aided bronchoscopy examination system. The suggested algorithm for informative frame extraction exploits region of interest segmentation, feature extraction and frame classification. We verified our method experimentally.
2 Material and Methods
Informative frames contain relevant and significant diagnostic information. From the point of view of computer vision algorithms the most important aspect is the presence of boundaries corresponding to real anatomical structures and of varied patterns related to the natural, histological properties of the bronchi. The presence of this information is clearly reflected in the wavelet coefficient domain, where structure boundaries in particular become sharply outlined due to energy concentration near image singularities (large wavelet coefficients at multiple scales and three different orientations). For non-informative, unreadable frames, which do not contain any useful diagnostic information, the wavelet representation is characterized by a uniform distribution of wavelet coefficients. The visual comparison of these two types of frames (informative and non-informative) and their wavelet coefficient distributions is presented in Figure 1b.
2.1 Method Description
The proposed method consists of three main steps, which we describe in detail below.
Region of Interest Selection and Preprocessing. The size of the original frame is 720 x 576 pixels. The bigger part of the frame is composed of black background. In the first step we select the region of interest to extract only the important part of the frame, which contains the image of the bronchi. The algorithm is based mainly on simple thresholding, which allows selecting a mask connected with the diagnostically useless background, and cropping the original frame according to the mask size. Unfortunately such an image contains characteristic black corners, which are highly undesirable for further wavelet decomposition (these corners additionally accumulate energy during decomposition). Thus we additionally select a maximal useful area, centered with respect to the previously cropped image, to avoid taking into consideration the four black corners mentioned above. Next, the selected region of interest is converted to grayscale and normalized (constant component removal).
Multiresolution Decomposition and Feature Extraction. For each frame ROI we perform a DWT at three levels of decomposition (3 scales). Applying a one-level wavelet transform we receive a lowpass subband LL and three highpass (detail) subbands HL, LH, HH corresponding to the horizontal, vertical and diagonal directions respectively. The lowpass subband LL represents a sub-sampled, averaged version of the original image and is recursively decomposed at the next wavelet decomposition levels (scales). We also introduced the concept of assembling the wavelet coefficients from all detail subbands (at each scale) into one max-subband (the maximum over all subbands at each scale). We considered different forms of wavelet energy based features (energy, energy normalized across scales and subbands) and histogram-based features from wavelet coefficients normalized across scales, as well as entropy features (joint, memoryless) and the homogeneity, correlation, energy and contrast of the successive-scale co-occurrence matrix of quantized coefficients. Finally we selected a set of features containing histogram features (mean, variance, skewness, kurtosis, entropy, energy) and joint histogram features (mutual information between max-subbands across scales). We used the MATLAB Wavelet Toolbox for wavelet analysis, with the Daubechies orthogonal wavelet family with 4 vanishing moments (db4) as a compromise between computational simplicity and good performance [13]. Calculating the seven features mentioned above for all three decomposition scales we obtained a 20 element feature vector, which was used for frame classification.
Frame Classification. The SVM classifier from the MATLAB Bioinformatics Toolbox, with a linear kernel function and the Least-Squares (LS) method to find the separating hyperplane, was applied for the classification procedure. The described method is presented visually in Figure 2.
Fig. 2. Graphical illustration of proposed method
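For illustration, an equivalent pipeline can be sketched in Python with PyWavelets and scikit-learn instead of the MATLAB toolboxes used by the authors; the bin count, the restriction to the per-scale histogram statistics (the mutual-information features are omitted) and the training call are our own simplifications.

```python
import numpy as np
import pywt
from scipy import stats
from sklearn.svm import SVC

def frame_features(roi, wavelet="db4", levels=3):
    """3-level db4 DWT of a grey-scale ROI; the three detail subbands of each
    scale are merged into one max-subband and summarised by six histogram
    statistics (mean, variance, skewness, kurtosis, entropy, energy)."""
    coeffs = pywt.wavedec2(np.asarray(roi, dtype=float), wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:                          # one (cH, cV, cD) triple per scale
        m = np.maximum.reduce([np.abs(d) for d in detail]).ravel()
        counts, _ = np.histogram(m, bins=64)
        p = counts[counts > 0] / counts.sum()
        feats += [m.mean(), m.var(), float(stats.skew(m)), float(stats.kurtosis(m)),
                  float(-np.sum(p * np.log2(p))),      # histogram entropy
                  float(np.sum(m ** 2))]               # energy
    return np.array(feats)

# Training on labelled ROIs (X: matrix of feature vectors, y: informative / non-informative):
# clf = SVC(kernel="linear").fit(X, y)
```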
3 Experiments and Results
The proposed method was implemented in MATLAB. To assess its performance we prepared three different data sets. Each of them consists of a set of positive (informative) and negative (non-informative) frames. Non-informative frames were represented by different types of unreadable frames, described previously in Section 1.2. In the case of informative frames, in the first two data sets these frames were represented by readable, diagnostically important frames, representing different parts of the bronchi (containing both pathology and healthy regions). Only the third data set was slightly different - this time all positive cases were represented only by frames showing different types of bronchial lesions or anatomical changes indicating the presence of a lesion (i.e. bleedings, pelidroma, purulent secretion, tumour mass, bronchial contraction). All data sets were prepared and described by an experienced pulmonologist. The first data set was used as a training set, while the other two were used as test sets for method performance assessment. A detailed description of all data sets is presented in Table 1.

Table 1. Characteristic of three data sets used to assess the performance of proposed method
name           total number of frames   number of positive frames   number of negative frames
training set   733                      382                         351
test set 1     740                      383                         357
test set 2     832                      229                         603

From the variety of measures available to assess different characteristics and qualities of classification algorithms, we used widely known measures, often employed in biomedical applications, that are built from a confusion matrix which records correctly and incorrectly recognized examples for each class. Those metrics are: sensitivity, specificity, precision and accuracy. Table 2 shows the summarized results for the proposed method.

Table 2. Quality assessment of classification results for proposed method
name         sensitivity   specificity   precision   accuracy
test set 1   0.9582        0.9412        0.9459      0.9500
test set 2   0.9913        0.9289        0.8346      0.9455
mean         0.9748        0.9351        0.9374      0.9478

The mean sensitivity of 0.97 and mean specificity of 0.94, with an overall mean accuracy of about 0.95, confirm the high efficiency of the proposed method for informative, diagnostically useful frame selection. Our method has definitely better sensitivity than specificity. This is extremely important for the sake of diagnostic content selection, where in fact we cannot omit any frame which potentially contains symptoms of pathology. It is worth noting that for test set 2, which contains only frames with pathology among the positive frames, the sensitivity is equal to 0.99 (we noticed only 2 false negatives). Errors are mainly due to a fine blur effect combined with a relatively small share of significant structure boundaries in the entire frame content (compare Fig. 3a and 3b). The lower specificity is caused mainly by the diversified characteristics of non-informative frames. Among the false positive cases we observed many frames containing bubbles (which cause significant reflection of light) and bright, over-lighted areas defined in the literature as specular reflection. These elements compose an additional source of high-frequency details (compare Fig. 3c and 3d).

Fig. 3. Classification example results for test set 2 a) true positive example b) false negative example c) true negative example and d) false positive example
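The four measures above can be computed directly from the binary confusion matrix, for example as follows (the labelling convention, informative = 1, is assumed):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Sensitivity, specificity, precision and accuracy from the confusion
    matrix of a binary informative (1) / non-informative (0) classification."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
    }
```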
4 Conclusions and Future Work

In this work the problem of informative frame selection for bronchoscopy video was raised, widely described and analyzed. An effective method for the automatic separation of informative, diagnostically important content from noninformative, diagnostically useless content was proposed. Such a method
is particularly important and plays a crucial role as a preprocessing stage for the purpose of integrated computer-aided systems. The presented algorithm exploits region-of-interest segmentation, multiresolution wavelet decomposition combined with feature extraction in the wavelet domain, and frame classification. Our experiments on a large and diversified data set (containing over 1500 frames), prepared and described by an experienced pulmonologist, confirmed the usefulness of our method. Future perspectives of this work include further algorithm modifications, including optimal wavelet basis selection, color image processing (different color spaces) and the search for new features.
References

1. Duplaga, M., Leszczuk, M., Przelaskowski, A., Janowski, L., Zieliński, T.: Bronchovid - zintegrowany system wspomagający diagnostykę bronchoskopową. Przegląd Lekarski (2007)
2. Duplaga, M.: The Impact of Information Technology on Quality of Healthcare Services. In: International Conference on Computational Science, pp. 1118–1125 (2004)
3. Shah, P.L.: Flexible bronchoscopy. Medicine 36(3), 151–154 (2008)
4. Bowling, M., Downie, G., Wahidi, M., Conforti, J.: Self-Assessment Of Bronchoscopic Skills In First Year Pulmonary Fellows. Chest 132(4) (2007)
5. Hwang, S., Oh, J., Lee, J., Tavanapong, W., de Groen, P.C., Wong, J.: Informative Frame Classification for Endoscopy Video. Medical Image Analysis 11(2), 100–127 (2007)
6. Chung, A.J., Deligianni, F., Shah, P., Wells, A., Yang, G.Z.: Patient Specific Bronchoscopy Visualisation through BRDF Estimation and Disocclusion Correction. IEEE Transactions on Medical Imaging 25(4), 503–513 (2006)
7. Duplaga, M., Socha, M.: Aplikacja oparta na bibliotece VTK wspomagająca zabiegi bronchoskopowe. Bio-Algorithms and Med-Systems 1(1/2), 191–196 (2005)
8. Rai, L., Merritt, S.A., Higgins, W.E.: Real-time image-based guidance method for lung-cancer assessment. In: IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 2437–2444 (2006)
9. Mori, K., Deguchi, D., Sugiyama, J., Suenaga, Y., Toriwaki, J., Maurer Jr., C.R., Takabatake, H., Natori, H.: Tracking of a bronchoscope using epipolar geometry analysis and intensity-based image registration of real and virtual endoscopic images. Med. Image Anal. 6, 321–365 (2002)
10. Mallat, S.: A Theory for Multiresolution Signal Decomposition: A Wavelet Representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 11(7) (1989)
11. van Ginneken, B., ter Haar Romeny, B.M., Viergever, M.A.: Computer-aided diagnosis in chest radiography: a survey. Med. Img. 12(20), 1228–1241 (1989)
12. Yusof, N., Isa, N.A., Sakim, H.A.: Computer-aided Detection and Diagnosis for Microcalcifications in Mammogram: A Review. International Journal of Computer Science and Network Security, 202–208 (2007)
13. Cena, B., Spadaccini, N.: De-noising ultrasound images with wavelets. Summer School on Wavelets, Zakopane, Poland (1996)
Direct Filtering and Enhancement of Biomedical Images Based on Morphological Spectra Juliusz L. Kulikowski, Malgorzata Przytulska, and Diana Wierzbicka Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences, 4 Ks. Trojdena Str., 02-109 Warsaw, Poland [email protected], [email protected]
Summary. In the paper a method of filtering biomedical images, aimed at their enhancement for direct visual examination or for automatic segmentation of regions covered by typical textures, is presented. For this purpose morphological spectra (a modification of the system of orthogonal 2D Walsh functions) are used. Filtering consists in assigning relative weight coefficients to spectral components representing typical morphological micro-structures. Direct filtering makes it possible to eliminate the explicit calculation of the components of morphological spectra, because the filtered values of image elements are given as linear combinations of the values of the original image in fixed basic windows. The method of calculating the transformation coefficients is described in detail. Application of the method is illustrated by an example of cerebral SPECT image examination.
1 Introduction

Modern imaging methods play a substantial role in various areas of medical diagnostics and therapy monitoring. Computer methods are used on several levels of medical imaging, from image reconstruction in all kinds of tomography (CT, SPECT, PET, NMR, etc.), through image enhancement and filtering, segmentation, extraction and analysis of diagnostic parameters, image compression, archiving and/or retrieval, up to pattern recognition or image understanding, contents interpretation and usability assessment [1, 2, 3]. Many of the above-mentioned problems can be effectively solved due to recognition and analysis of the texture of biological tissues. Mathematical models used for texture description can be roughly divided into several groups: spectral, morphological, statistical, fractal, etc. [4, 5, 6]. However, textures are not strictly defined mathematical objects, and any formal model represents no more than a selection of texture features. Well-fitted mathematical models of textures should, in particular, take into account such general properties of biomedical textures as their irregularity, spatial non-homogeneity and multiscale structure. Models based on morphological spectra (MS), in our opinion, satisfy the above-mentioned conditions. Calculation of MS components is, at first glance, similar to that of image expansion into a series of 2D Walsh
functions [7, 8, 9]. However, due to the hierarchical structure of MS and of the image partition into a system of basic windows, some typical operations, in particular the filtering of textures, can be easily performed. The aim of texture filtering may be twofold: image enhancement by additive noise suppression, and discrimination of selected types of textures as a step towards the segmentation of regions of special diagnostic interest. Two alternative approaches to linear image filtering can be used: 1/ spectral filtering and 2/ direct filtering. Spectral filtering consists of the following steps: a) direct image transformation into its spectrum, b) weighting of the spectral components and c) reverse transformation of the weighted spectrum into a transformed image. The necessity of two (direct and reverse) spectral transformations is the main disadvantage of this type of filtering. However, the pair of spectral transformations renders a filtered image as a complete bit-map of pixel values. On the contrary, direct filtering consists in a single calculation of a filtered pixel value as a linear combination of the values of the surrounding pixels. The method of direct filtering described below is different in the sense that its filtering coefficients are directly connected with the relative weights assigned to MS components. Such an approach is particularly useful in the case where the aim of filtering consists in the relative suppression or enhancement of selected MS components representing particular morphological structures.
2 Basic Assumptions

Consider a monochromatic image given in the form of an I × J bitmap. The image has been divided into non-overlapping basic windows of square form and N × N size, where I, J ≥ N, N = 2^n, n being a natural number chosen so that the largest typical morphological structures of the texture fit in the basic windows. We call n the MS level. The elements of a typical basic window can be presented in the form of a row vector

U^(n) = [u_11, u_12, ..., u_1N, u_21, u_22, ..., u_2N, ..., u_N1, u_N2, ..., u_NN],    (1)

the elements u_ij, called pixel values, belonging to the interval 0, 1, ..., 2^k − 1, where k is a fixed natural number. The n-th level MS for a given basic window can be calculated according to the formula

(W^(n))^tr = M^(n) · (U^(n))^tr,    (2)

where tr denotes transposition of a matrix or vector; M^(n) is an N^2 × N^2 square matrix of the n-th level morphological transformation; W^(n) is an N^2-component real vector of the n-th level MS. The elements of M^(n) are closely connected with the masks used for the calculation of the MS components described in [7, 8, 9]. Their rows are given as masks displayed in
the form of binary (+1, −1) vectors multiplied by 4^−n. The rows are denoted and lexicographically ordered by the sequences of n symbols taken from the ordered set [S, V, H, X]. The columns of M^(n) correspond to the components of U^(n). An example of a first-level morphological transformation matrix is given below:

M^(1) = [  1   1   1   1 ]  = M_S
        [ -1   1  -1   1 ]  = M_V
        [ -1  -1   1   1 ]  = M_H
        [ -1   1   1  -1 ]  = M_X

The higher-level morphological transformation matrices can be calculated by iteration. For example, the second-level matrix, consisting of 4^2 = 16 rows corresponding to the spectral components SS, SV, SH, SX, VS, VV, VH, VX, HS, HV, HH, HX, XS, XV, XH, and XX, will be calculated as follows. Let us take into account the component VH. It is the V-component corresponding to the four adjacent 1st-level basic windows for which the H-components have been calculated. As such it should be calculated according to the mask:

[ -M_H   M_H ]
[ -M_H   M_H ]

and, at last, the row M^(2)_VH takes the form:

M^(2)_VH = [1, 1, −1, −1, −1, −1, 1, 1, 1, 1, −1, −1, −1, −1, 1, 1].

The full matrix M^(2) is given below (for the sake of simplicity the symbols + and − are used instead of +1 and −1):
M^(2):

SS:  + + + + + + + + + + + + + + + +
SV:  − + − + − + − + − + − + − + − +
SH:  − − − − + + + + − − − − + + + +
SX:  − + − + + − + − − + − + + − + −
VS:  − − + + − − + + − − + + − − + +
VV:  + − + − − + − + + − + − − + − +
VH:  + + − − − − + + + + − − − − + +
VX:  + − − + − + + − + − − + − + + −
HS:  − − − − − − − − + + + + + + + +
HV:  + − + − + − + − − + − + − + − +
HH:  + + − − + + − − − − + + − − + +
HX:  + − − + + − − + − + + − − + + −
XS:  − − − − + + + + + + + + − − − −
XV:  + − + − − + − + − + − + + − + −
XH:  + + − − − − + + − − + + + + − −
XX:  + − − + − + + − − + + − + − − +
It was shown in [9] that the spectral matrices satisfy the orthogonality condition:

(M^(n))^tr · M^(n) = 4^n · I^(n),    (3)
where I^(n) denotes the n-th order unity matrix. This leads to the conclusion that the basic window content can be calculated as a reverse morphological transformation of its morphological spectrum:

U^(n) = 4^−n · W^(n) · M^(n).    (4)
For image filtering, W^(n) should be modified by assigning weights to its components. Let us define a diagonal matrix of weights:

Q^(n) = [ q_1  0    0    ...  0   ]
        [ 0    q_2  0    ...  0   ]
        [ 0    0    q_3  ...  0   ]
        [ ...                 ... ]
        [ 0    0    0    ...  q_R ]    (5)
where R = N^2 = 4^n. The diagonal components of Q^(n) should take nonnegative real values. A given MS component will be relatively enhanced if the corresponding weight is > 1, or it will be weakened if the value is < 1. For this purpose a modified MS will be defined:

V^(n) = W^(n) · Q^(n) = [q_1 · w_1, q_2 · w_2, ..., q_R · w_R].    (6)
The filtered signal U^(n)′ will thus be given by

U^(n)′ = 4^−n · V^(n) · M^(n).    (7)
However, a similar effect will be reached if an expression similar to (4),

U^(n)′ = 4^−n · W^(n) · M^(n)′,    (8)

is used, where

M^(n)′ = Q^(n) · M^(n)    (9)

has been obtained by multiplying the rows of M^(n), respectively, by q_1, q_2, ..., q_R. Substituting U^(n) · (M^(n))^tr for W^(n) in (8) (see (2)), we obtain

U^(n)′ = 4^−n · U^(n) · (M^(n))^tr · M^(n)′ = U^(n) · C^(n),    (10)

where

C^(n) = 4^−n · (M^(n))^tr · Q^(n) · M^(n)    (11)
is an R × R filtering matrix. Expression (10) realizes the concept of direct filtering of biomedical images (e.g., of the textures of biological tissues). This type of filtering is analogous to the widely known linear convolution filtering of images.
The filtering matrix C^(1) is given below:

C^(1) = 1/4 · [ qS+qV+qH+qX   qS−qV+qH−qX   qS+qV−qH−qX   qS−qV−qH+qX ]
              [ qS−qV+qH−qX   qS+qV+qH+qX   qS−qV−qH+qX   qS+qV−qH−qX ]
              [ qS+qV−qH−qX   qS−qV−qH+qX   qS+qV+qH+qX   qS−qV+qH−qX ]
              [ qS−qV−qH+qX   qS+qV−qH−qX   qS−qV+qH−qX   qS+qV+qH+qX ]
It can be observed that C (1) is symmetrical. Moreover, all its elements are given as only four types of linear combinations of the weights.
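A compact numerical illustration of the first-level case is given below. This is our own sketch in Python/NumPy, not the authors' implementation: it builds M^(1), a diagonal weight matrix and the filtering matrix C^(1) of Eq. (11), checks the orthogonality condition (3) and the symmetry of C^(1), and applies Eq. (10) to one 2 × 2 basic window; the weights are those used later in Sec. 4, and the window values are arbitrary.

```python
import numpy as np

# First-level morphological transformation matrix M(1); rows are the S, V, H, X masks.
M1 = np.array([[ 1,  1,  1,  1],
               [-1,  1, -1,  1],
               [-1, -1,  1,  1],
               [-1,  1,  1, -1]], dtype=float)

def filtering_matrix(weights, M, n=1):
    """C(n) = 4^{-n} * M^T * Q * M, cf. Eq. (11); weights in the order (qS, qV, qH, qX)."""
    Q = np.diag(weights)
    return (4.0 ** -n) * M.T @ Q @ M

C1 = filtering_matrix([1.0, 1.0, 1.0, 1.5], M1)   # X component relatively enhanced
assert np.allclose(M1.T @ M1, 4 * np.eye(4))      # orthogonality condition (3)
assert np.allclose(C1, C1.T)                      # C(1) is symmetrical

window = np.array([10.0, 12.0, 9.0, 11.0])        # one 2 x 2 basic window, row-major
filtered = window @ C1                            # direct filtering, Eq. (10)
```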
3 Calculation of Filtering Coefficients

The last observation suggests that, possibly, a structural regularity in the higher-level filtering matrices can be used for the calculation of their elements. Let us remark that an element c_ij of C^(n), where i, j ∈ [1, ..., R], corresponds to the i-th term of a linear expression describing the filtered value of the j-th pixel in the basic window. Therefore, it multiplies the value u_i of the i-th pixel of the basic window before filtering (the pixels within each basic window being independently indexed linearly by single indices). According to (11), c_ij is thus determined by the i-th row of (M^(n))^tr and the j-th column of Q^(n) · M^(n). However, the i-th row of (M^(n))^tr is identical to the i-th column of M^(n), while the j-th column of Q^(n) · M^(n) is the j-th column of M^(n) whose elements have been multiplied, correspondingly, by the diagonal components of Q^(n).

Example. Let us find the filtering coefficients of the 2nd-level MS for the pixel u_11 in a 2nd-level basic window. The 2nd-level MS is defined on 4×4 basic windows, the pixel u_11 being the third one in the 3rd row of the basic window (11 = 4 + 4 + 3). The filtering coefficients are thus given by the 11th column of C^(2). It is defined by the scalar products of the 11th column M^(2)_{*,11} of M^(2) with the consecutive columns of M^(2) whose components are weighted by the components of Q^(2). For better visualization the columns of M^(2) are presented below in a transposed form; M^(2)_{*,11} will thus be presented as a row vector:

(M^(2)_{*,11})^tr = [1, −1, −1, −1, 1, 1, −1, −1, 1, −1, 1, 1, 1, −1, 1, −1].
As an example of the calculation of a filtering coefficient let us take into consideration the coefficient c_{5,11}. For its calculation the 5th column of M^(2) should be taken into account. We present it in the transposed form:

(M^(2)_{*,5})^tr = [1, −1, 1, 1, −1, −1, −1, −1, −1, 1, 1, 1, 1, −1, −1, −1].
A comparison of the corresponding components of M^(2)_{*,11} and M^(2)_{*,5} leads to a vector of coincidence:

V_{5,11} = [1, 1, −1, −1, −1, −1, 1, 1, −1, −1, 1, 1, 1, 1, −1, 1],

whose component +1 indicates that the pair of corresponding components of (M^(2)_{*,11})^tr and (M^(2)_{*,5})^tr has the same sign, and −1 that the signs are opposite. Finally, the filtering coefficient c_{5,11} is given by a scalar product:

c_{5,11} = (V_{5,11}, Q^(2)) = qSS + qSV − qSH − qSX − qVS − qVV + qVH + qVX − qHS − qHV + qHH + qHX + qXS + qXV − qXH + qXX.

In a similar way the other filtering coefficients can be calculated. The filtered value of u_11 will then be given as the sum:

u′_11 = 1/16 · (c_{1,11} · u_1 + c_{2,11} · u_2 + c_{3,11} · u_3 + ... + c_{16,11} · u_16).

A complete collection of the parameters necessary to calculate the filtered values of the elements of a 2nd-level basic window consists of 16 tables similar to the one given above. In fact, the number of tables is only 7, due to the symmetry c_{i,j} ≡ c_{j,i} for i, j ∈ [1, ..., 16] and the identity of the diagonal elements c_{ii}.
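The worked example can be verified mechanically: taking the 5th and 11th columns of M^(2) as listed above, their element-wise product is the vector of coincidence, and its signs give the combination of weights forming c_{5,11}. The short sketch below is ours and simply reproduces that result.

```python
import numpy as np

# The two columns of M(2) used in the worked example (signs copied from the text).
m_col_11 = np.array([ 1, -1, -1, -1,  1,  1, -1, -1,  1, -1,  1,  1,  1, -1,  1, -1])
m_col_5  = np.array([ 1, -1,  1,  1, -1, -1, -1, -1, -1,  1,  1,  1,  1, -1, -1, -1])

labels = ["SS", "SV", "SH", "SX", "VS", "VV", "VH", "VX",
          "HS", "HV", "HH", "HX", "XS", "XV", "XH", "XX"]

v = m_col_11 * m_col_5                       # vector of coincidence V_{5,11}
terms = ["%sq%s" % ("+" if s > 0 else "-", lab) for s, lab in zip(v, labels)]
print("c_{5,11} =", " ".join(terms))
# -> +qSS +qSV -qSH -qSX -qVS -qVV +qVH +qVX -qHS -qHV +qHH +qHX +qXS +qXV -qXH +qXX
```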
4 Application to Image Enhancement

Effective image filtering and/or enhancement depends on an adequate choice of the vector Q^(n) of weight coefficients. First, it is necessary to take into account that the filtered values cannot be negative and should not exceed the admissible maximum pixel value, otherwise they are cancelled before visualization. A typical situation when image enhancement is desirable arises when lesions in inner organs are to be detected and localized by examination of radiological images. The detection is possible due to the differences between the textures of biological tissues. Image enhancement thus consists in the reinforcement of the differences that are of interest for a medical diagnosis. If a pair of selected image regions is given by a medical expert as an example of a difference of tissues being of interest, then it can be used to choose adequate filtering weight coefficients. Let

W′ = [w′_1, w′_2, w′_3, ..., w′_k, ..., w′_R],   W″ = [w″_1, w″_2, w″_3, ..., w″_k, ..., w″_R]

be the two MS vectors representing the different textures, indicated by the expert, that are to be discriminated. Then the filtering weights can be calculated according to the principle:

q_k = 1 + Δ_k,    (12)
Fig. 1. Examples of filtering of cerebral SPECT images
where

Δ_k ∼ | |w′_k| − |w″_k| |,    (13)
and ∼ is the symbol of proportionality. Absolute values have been taken because the signs of the MS components are connected only with spatial shifts of the corresponding morphological structures. Moreover, the absolute value of the difference makes the weight independent of which of the two regions the given morphological structure dominates in. In Fig. 1 an example of the filtering of cerebral SPECT images is shown. The aim of the filtering is the enhancement of differences between regions located symmetrically with respect to the main vertical axis. Such differences may indicate suspected ischemic regions in the brain. For image filtering the 1st-level MS and the filtering weights qS = 1, qV = 1, qH = 1, qX = 1.5 have been used. The two rows of images correspond to two slices of SPECT imaging; the columns represent: a/ images before filtering, b/ images after filtering, c/ the difference between the images shown in a/ and b/, d/ the reversed (negative) difference. The images show that enhancement of the X spectral component reveals details that are not visible in a direct comparison of the luminance levels of the compared regions.
5 Conclusions

Morphological spectra give an opportunity to enhance biomedical images by filtering that modifies the relative levels of selected spectral components. For this purpose the direct filtering methods presented in the paper can be used. The effectiveness of image enhancement depends on an adequate choice of the
filtering weight coefficients. The general methods presented in the paper should be adjusted to strongly defined biomedical applications.
References

1. Hsinchun, C., et al. (eds.): Medical Informatics. Knowledge Management and Data Mining in Biomedicine. Springer, New York (2005)
2. Pratt, W.K.: Digital Image Processing. John Wiley & Sons, New York (1978)
3. Wong, S.T.C. (ed.): Medical Image Databases. Kluwer Academic Publishers, Boston (1998)
4. Haddon, J.F., Boyce, J.F.: Texture Segmentation and Region Classification by Orthogonal Decomposition of Cooccurence Matrices. In: Proc. 11th IAPR International Conference on Pattern Recognition, Hague, vol. I, pp. 692–695. IEEE Computer Society Press, Los Alamitos (1992)
5. Bruno, A., Collorec, R., Bezy-Wendling, J.: Texture Analysis in Medical Imaging. In: Contemporary Perspectives in Three-Dimensional Biomedical Imaging, pp. 133–164. IOS Press, Amsterdam (1997)
6. Ojala, T., Pietikäinen, M.: Unsupervised Texture Segmentation Using Feature Distributions, Texture Analysis Using Pairwise Interaction Maps. In: Del Bimbo, A. (ed.) 9th International Conference Image Analysis and Processing, ICIAP 1997, Florence, Proc., vol. I, pp. 311–318 (1997)
7. Kulikowski, J.L., Przytulska, M., Wierzbicka, D.: Recognition of Textures Based on Analysis of Multilevel Morphological Spectra. GESTS Intern. Trans. on Computer Science and Eng. 38(1), 99–107 (2007)
8. Kulikowski, J.L., Przytulska, M., Wierzbicka, D.: Morphological Spectra as Tools for Texture Analysis. In: Kurzynski, M., Puchala, E., Wozniak, M., Zolnierek, A. (eds.) Computer Recognition Systems 2, pp. 510–517. Springer, Heidelberg (2007)
9. Kulikowski, J.L., Przytulska, M., Wierzbicka, D.: Biomedical Structures Representation by Morphological Spectra. In: Pietka, E., Kawa, J. (eds.) Information Technology in Biomedicine. Advances in Soft Computing, pp. 57–65. Springer, Heidelberg (2008)
Part II
Features, Learning and Classifiers
A Novel Self Organizing Map Which Utilizes Imposed Tree-Based Topologies

César A. Astudillo¹ and John B. Oommen²
¹ Universidad de Talca, Merced 437, Curicó, Chile, [email protected]
² School of Computer Science, Carleton University, Ottawa, Canada: K1S 5B6, [email protected]
Summary. In this paper we propose a strategy, the Tree-based Topology-Oriented SOM (TTO-SOM), by which we can impose an arbitrary, user-defined, tree-like topology onto the codebooks. Such an imposition enforces a neighborhood phenomenon which is based on the user-defined tree, and consequently renders the so-called bubble of activity drastically different from the ones defined in the prior literature. The map learnt as a consequence of training with the TTO-SOM is able to infer both the distribution of the data and its structured topology, interpreted via the perspective of the user-defined tree. The TTO-SOM also reveals multi-resolution capabilities, which are helpful for representing the original data set with different numbers of points, without the necessity of recomputing the whole tree. The ability to extract a skeleton, which is a "stick-like" representation of the image in a lower-dimensional space, is discussed as well. These properties have been confirmed by our experimental results on a variety of data sets.
1 Introduction

A problem that arises in the classification of patterns in large data sets is to capture the essence of the similarity in the samples, which implies that any given cluster should include data of a "similar" sort, while elements that are dissimilar are assigned to different subsets. One of the most important families of ANNs used to tackle the above-mentioned problems is the well-known Self-Organizing Map (SOM) [7], which seeks to preserve the topological properties of the input space. Although the SOM has demonstrated an ability to solve problems over a wide spectrum, it possesses some fundamental drawbacks. One of these drawbacks is that the user must specify the lattice a priori, which has the effect that the user must run the ANN a number of times to obtain a suitable
This author is a Profesor Instructor at the Departamento de Ciencia de la Computación, with the Universidad de Talca. The work of this author was done while pursuing his Doctoral studies at Carleton University. Chancellor's Professor; Fellow: IEEE and Fellow: IAPR. This author is also an Adjunct Professor with the University of Agder in Grimstad, Norway.
configuration. Other handicaps involve the size of the maps, where a smaller number of neurons often represents the data inaccurately. The state-of-the-art approaches attempt to render the topology more flexible, so as to represent complicated data distributions in a better way and/or to make the process faster by, for instance, speeding up the task of determining the best matching neuron. Numerous variants of the SOM have been reported. A survey of this field is omitted here in the interest of space and brevity, but can be found in [1]. Some approaches try to start with a small lattice [11], while others attempt to grow a SOM grid by adding new rows or columns if the input samples are too concentrated in some areas of the feature space [3]. Others follow a symmetric growing of the original lattice [5]. Alternatively, some strategies add relations (edges) between units as time proceeds, thus not necessarily preserving a grid of neurons. In particular, the literature reports methods that use a tree-shaped arrangement [6]. Also, some researchers have attempted to devise methods which "forget" old connections as time goes by [4]. There are strategies that add nodes during training [11], while others use a fixed grid [7] or arrange different SOMs in layers [3], and combinations of these principles have also been employed. On the other hand, strategies that try to reduce the time required for finding the winner neuron have also been designed. The related approaches focus on the accuracy of the resulting neurons, on which neurons are to be modified, and on the consequent topology. It is important to remark that no single one of these approaches has been demonstrated to be a clear winner when compared to the other strategies, thus leaving the window of opportunity open for novel ideas that can be used to solve the above-mentioned problems. Our aim is to permit the user to specify any tree-like topology, preventing cyclic neighborhood correspondences. Once this topology has been fixed, the concepts of the neighborhood and the bubble of activity are specified from this perspective. The question is whether the prototypes can ultimately learn the stochastic distribution and simultaneously arrange themselves with the topology that mimics the one that the user hypothesized from the outset. We show that this is indeed possible, as demonstrated by a set of rigorous experiments in which our enhanced ANN, the Tree-based Topology Oriented-SOM (TTO-SOM), is able to learn both the distribution and the desired structured topology of the data. Furthermore, a consequence of this is the fact that as the number of neurons is increased, the approximation of the space will be correspondingly superior – both from the perspective of the distribution and of the user-defined topology.
2 The Tree-Based Topology-Oriented SOM

The Tree-based Topology-Oriented SOM (TTO-SOM) is a tree-structured SOM which aims to discover the underlying distribution of the input data set X, while also attempting to perceive the topology of X as viewed
Fig. 1. An example of the description of the original tree topology. The example shows an array containing the number of children for each node in the tree
through the user's desired perspective. The TTO-SOM works with an imposed topology structure, where the codebook vectors are adjusted using a Vector-Quantization-like strategy. Besides, by defining a user-preferred neighborhood concept, as a result of the learning process, it also learns the topology and preserves the prescribed relationships between the neurons as per this neighborhood. Thus, the primary consideration is that the concept of neurons being "near each other" is not prescribed by the metric in the space, but rather by the structure of the imposed tree.
2.1 Declaration of the User-Defined Tree
The topology of the tree arrangement of the neurons plays an important role in the training process of the TTO-SOM. This concept is closely related to the results of [3, 6, 8, 9, 11, 13], but the differences are found in [1]. In general, the TTO-SOM incorporates the SOM with a tree which has an arbitrary number of children. Furthermore, it is assumed that the user has the ability to describe/create such a tree. The user, who presents it as an input to the algorithm, utilizes it to reflect the a priori knowledge about the structure of the data distribution.¹ We propose that the declaration of the user-defined tree is done in a recursive manner, from which the structure of the tree is fully defined. The input to the algorithm is an array that contains integers specifying the number of children for each node in the tree, if the latter is traversed in a Depth-First (DF) manner. The position i in the array implicitly refers to the i-th node of the final tree if traversed in a Depth-First manner. The contents of the array element in the i-th position refer to the number of children that node i has. An example of this is given in Fig. 1, where the input array is 2, 3, 4, 0, 0, 0, 0, 1, 0, 2, 0, 0, 2, 0, 0, and the resulting tree is shown in the
¹ The beauty of such an arrangement is that the data can be represented in multiple ways depending on the specific perspective of the user. The user is also permitted to declare the tree in a Breadth-First manner [1].
Fig. 2. The figure shows the neighborhood for the TTO-SOM. Here nodes B, C and D are equidistant to A even though they are at different levels in the tree. Observe that non-leaf nodes may be involved in the calculation.
figure itself. The formal algorithm to specify this is included in [1], and omitted here due to space considerations.
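As an illustration of the recursive declaration, a minimal sketch (ours, not the algorithm of [1]) that rebuilds the tree of Fig. 1 from the depth-first child-count array is given below; the class and function names are hypothetical.

```python
class Node:
    def __init__(self):
        self.children = []
        self.weight = None          # codebook vector, assigned during training

def build_tree(child_counts):
    """Builds the user-defined tree from the depth-first child-count array (cf. Fig. 1)."""
    it = iter(child_counts)

    def build():
        node = Node()
        node.children = [build() for _ in range(next(it))]
        return node

    return build()

# The example array of Fig. 1 (15 nodes in total).
root = build_tree([2, 3, 4, 0, 0, 0, 0, 1, 0, 2, 0, 0, 2, 0, 0])
```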
2.2 Neural Distance between Two Neurons
In the TTO-SOM, the Neural Distance, dN, between two neurons depends on the number of unweighted connections that separate them in the user-defined tree, and is defined as the number of edges in the shortest path that connects the two given nodes. Additionally, it is worth mentioning that this notion of distance does not depend on whether or not the nodes are leaves, as is the case in the ET [12], thus permitting the calculation of the distance between any pair of nodes in the tree. More specifically, the distance between a neuron and itself is defined to be zero, and the distance between a given neuron and all its direct children and its parent is unity. This distance is then recursively defined for farther nodes, as shown pictorially in Fig. 2. Clearly, if vi and vj are nodes in the tree, dN(·,·) possesses the identity, non-negativity, symmetry and triangular inequality properties. The reader should observe in Fig. 2 that nodes at different levels could also be equidistant from any given node. Thus, nodes B, C and D are all at a distance of 2 units away from A. As in the case of the traditional SOM, the TTO-SOM requires the identification of the BMU, i.e., the closest neuron to a given input signal. To locate it, the distances df(·,·) are computed in the feature space and not in terms of the edges of the user-defined tree. The formal algorithmic details of how this is done are omitted here but can be found in [1].
2.3 The Bubble of Activity
Intricately related to the notion of inter-node distance is the concept referred to as the "Bubble of Activity", which is the subset of nodes "close" to the unit being currently examined. These nodes are essentially those which are to be moved toward the input signal presented to the network. This concept involves the consideration of a quantity, the so-called radius, which determines how big the bubble of activity is, and which therefore has a direct impact
on the number of nodes to be considered. The bubble of activity is defined as the subset of nodes within a distance of r away from the node currently examined, and can be formally defined as

B(vi; T, r) = {v | dN(vi, v; T) ≤ r},    (1)
where vi is the node currently being examined, and v is an arbitrary node in the tree T, whose set of nodes is V. Note that B(vi; T, 0) = {vi}, B(vi; T, i) ⊇ B(vi; T, i − 1) and B(vi; T, |V|) = V, which generalizes the special case when the tree is a (simple) directed path. An example of how this bubble of activity is distinct from the bubble used in the literature is found in [1]. The question of whether or not a neuron should be part of the current bubble depends on the number of connections that separate the nodes, rather than on the (Euclidean) distance that separates the neurons in the solution space.
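The neural distance and the bubble of activity can be sketched directly from the definitions above. The fragment below is our own illustration: it represents the user-defined tree by an adjacency map from each node to its parent and children, computes dN by breadth-first search, and collects B(vi; T, r) as in Eq. (1).

```python
from collections import deque

def neural_distance(adjacency, vi, vj):
    """d_N: number of edges on the shortest path between two neurons of the tree."""
    seen, frontier = {vi}, deque([(vi, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == vj:
            return dist
        for nb in adjacency[node]:          # parent and children of `node`
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, dist + 1))
    raise ValueError("nodes are not connected")

def bubble_of_activity(adjacency, vi, radius):
    """B(vi; T, r): all nodes whose neural distance from vi is at most r (Eq. 1)."""
    return {v for v in adjacency if neural_distance(adjacency, vi, v) <= radius}
```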
2.4 The Overall Procedure
The input to the algorithm consists of a set of samples in the d-dimensional feature-space, and, additionally, as explained in Sec. 2.1, an array by which the user-defined tree structure can be specified. Observe that this specification contains all the information necessary to fully describe the TTO-SOM structure, such as the number of children for each node in the tree. Furthermore, the algorithm also includes parameters which can be perceived as “tuning knobs”. They can be used to adjust the way by which it learns from the input signals. The TTO-SOM requires a schedule for the so-called decay parameters, which is specified in terms of a list, where each item in the list records the value for the learning rate, the radius of the bubble of activity, and the number of learning steps for which the latter two parameters are to be enforced. The detailed algorithm is found in [1].
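A schematic version of the training loop, under the assumption that every neuron in the bubble of activity is moved towards the input signal with the same learning rate (the exact update and decay details are given in [1]), could look as follows; it reuses the bubble_of_activity() sketch above, and all names are our own.

```python
import numpy as np

def train_tto_som(samples, nodes, adjacency, schedule, rng=np.random.default_rng(0)):
    """Schematic TTO-SOM training; `schedule` is a list of (learning_rate, radius,
    steps) entries and `nodes` maps a node id to its (randomly initialised) codebook."""
    for learning_rate, radius, steps in schedule:
        for _ in range(steps):
            x = samples[rng.integers(len(samples))]          # random input signal
            # Best Matching Unit: closest codebook vector in the *feature* space.
            bmu = min(nodes, key=lambda v: np.linalg.norm(x - nodes[v]))
            # Move the BMU and its bubble of activity towards x; the bubble is taken
            # over the user-defined tree, not over distances in the feature space.
            for v in bubble_of_activity(adjacency, bmu, radius):
                nodes[v] = nodes[v] + learning_rate * (x - nodes[v])
    return nodes
```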
3 Experiments and Results

To demonstrate the power of our method, and adopting a pedagogical perspective, the experiments reported here were done in the 2-dimensional feature space. It is important to remark that the capabilities of the algorithm are also applicable to higher dimensional spaces, though the visualization of the resulting tree will not be as straightforward. While both the data distribution and its structure were unknown to the TTO-SOM, the hope is that the latter will be capable of inferring them through Unsupervised Learning. Lastly, the schedule of parameters follows a rather "slow" convergence by defining steady values of the learning rate for a large number of iterations, so that we could understand how the positions of the nodes migrated towards their final configuration. We believe that to solve practical problems, the convergence can be accelerated by appropriately choosing the schedule of parameters.
Fig. 3. TTO-SOM-based 3-ary tree topology learnt from a "triangular" distribution: (a) first iteration, (b) after 40,000 samples, (c) after 100,000 samples, (d) after 300,000 samples
3.1 Learning the Structure
Consider the data generated from a triangular-spaced distribution as per Fig. 3. A tree topology of depth 4 was defined, where each node has exactly 3 children (40 nodes in total), and randomly initialized as per the procedure explained in Sec. 2.1. Fig. 3(a) depicts the position of the nodes of the tree after a random initialization. Once the main training loop becomes effective, the codebook vectors get positioned in such a way that they represent the structure of the data distribution, and simultaneously preserve the user-defined topology. This can be clearly seen from Fig. 3(b) and 3(c), which are snapshots after 40,000 and 100,000 samples, respectively. At the end of the training process (see Fig. 3(d)), the complete tree fills in the triangle formed by the cloud of input samples and seems to do it uniformly. The final position of the nodes of the tree suggests that the underlying structure of the data distribution corresponds to the triangle. Observe that the root of the tree is placed roughly in the center (i.e., the mean) of the distribution. It is also interesting to note that each of the three main branches of the tree covers an area directed towards one vertex of the triangle, and their sub-branches fill in the surrounding space around them. A different scenario occurs when the topology is unidirectional. Indeed, we obtain very impressive results when the tree structure is the 1-ary tree, as seen in Fig. 4. In this case, the user defined a list (i.e., a 1-ary tree) as the imposed topology. Initially, the codebook vectors were randomly placed as per Fig. 4(a). Again, at the beginning, the linear topology is completely lost due to the randomness of the data points. The migration and location of the
Fig. 4. TTO-SOM-based 1-ary tree (list) topology learnt from a "triangular" distribution: (a) first iteration, (b) after 20,000 samples, (c) after 50,000 samples, (d) after 300,000 samples
codebook vectors after 20,000 and 50,000 iterations are given in Fig. 4(b) and 4(c), respectively. At the end of the training process the list represents the triangle very effectively, with the nodes being ordered topologically, as in [7]. This example can be effective in distinguishing our method from the ET [11], as explained in more detail in [1]. Indeed, the latter freezes the position of certain nodes (so as to enhance the computations required to yield the BMU) after a pre-specified criterion. As explained in [1], this leads to a solution that is sub-optimal when compared to the one given by the TTO-SOM. The advantage of the TTO-SOM is more pronounced in the linear case when the "tree" is actually a list.
3.2 The Hierarchical Representation
Another distinct advantage of the TTO-SOM, which is not possessed by other SOM-based networks, is the fact that it has hologram-like properties. In other words, although the entire tree specified by the user can describe the cloud of data points as per the required resolution, the same cloud can be represented with a coarser resolution by using a smaller number of points. Thus, if we wanted to represent the distribution using a single point, this can be adequately done by just specifying the root of the tree. A finer level of resolution will include the root and the second level, where these points and their corresponding edges will represent the distribution and the structure. Increasingly finer degrees of resolution can be obtained by including more levels of the tree. We believe that this is quite a fascinating property. To clarify this, consider the triangular distribution in Fig. 5, which is the same distribution as in Fig. 3. Fig. 5(a) shows how the cloud can be represented by a single point, i.e., the root. In Fig. 5(b) it is represented by 4 nodes, which are the nodes up to the second level. If we use the user-defined tree of four levels, the finest level of resolution will contain all the 40 nodes, as displayed in Fig. 5(d).
3.3 Skeletonization
Intuitively, the objective of skeletonization is to construct a simplified representation of the global shape of an object. In general, such a skeleton is
Fig. 5. Multi-level resolution of the results shown in Fig. 3: (a) level 0, (b) level 1, (c) level 2, (d) level 3
expected to contain far fewer points than the original image and should be a thinned version of the original shape. According to the authors of [10], skeletonization in the plane is the process by which a 2-dimensional shape is transformed into a 1-dimensional one, similar to a "stick" figure. In this way, skeletonization can be seen as a dimensionality reduction technique that captures local object symmetries and the topological structure of the object. This problem has been widely investigated in the fields of pattern recognition and computer vision. SOM variations have been used to tackle the situation when points are sparse in the space [2, 14]. In [2] the authors used a GNG-like approach, while in [14] the authors recommend the use of a Minimum Spanning Tree which is calculated over the positions of the codebook vectors, followed by a post-refinement phase that adds and deletes edges. As we perceive it, the structure generated by using the TTO-SOM can be viewed as an endo-skeleton of the given data set, and on convergence, it will self-organize so as to assimilate the fundamental properties of the primary representation. It is also worth mentioning that the movement of a joint (i.e., a neuron) will imply the modification of at least one edge. Thus, when a node is moved, all the edges associated with its children and parent will change accordingly, modifying the shape of the inferred skeleton. The difference between using the SOM-like philosophy [14] and the TTO-SOM lies exactly here. A SOM-like algorithm will change the edges of the skeleton, but only as the algorithm dictates as per the MST computed over the nodes and their distances in the "real" world, i.e., the feature space. As opposed to this, a TTO-SOM-like structure can modify the skeleton as dictated by the particular node in question, but also by all the nodes tied to it by the bubble of activity, as dictated by the user-defined tree, i.e., the link distances. The reader can appreciate, in Fig. 6, the original silhouettes of a rhinoceros, a guitar and a human being. All three objects were processed by the TTO-SOM using exactly the same tree structure, the same schedule for the parameters, and without any post-processing of the edges. From the images at the lower level of Fig. 6 we observe that, even without any specific adaptation, the TTO-SOM is capable of representing the fundamental structure of the three objects in a "1-dimensional" way effectively. The figures at the second level display the neurons without the edges. In this case, it can be seen that our algorithm is also capable of conveying an intuitive idea of the original objects by merely looking at the points. A potentially interesting idea is that of mixing the hierarchical representation of the TTO-SOM presented in Sec. 3.2 with its skeletonization capabilities. We propose that in this case, the user will be able to generate different skeletons with different levels of resolution, which, we believe, can be used for managing different levels of resolution at a low computational cost for applications in the fields of geomatics, medicine and video games.
Fig. 6. Skeletonization process for the silhouettes of various shapes using the TTO-SOM, namely, a rhinoceros, a guitar and a human being
4 Conclusions

In this paper we have proposed a schema called the Tree-based Topology-Oriented SOM (TTO-SOM) by which the operator/user is able to impose an arbitrary, user-defined, tree-like topology onto the codebook vectors of a SOM. This constraint leads to a neighborhood phenomenon based on the user-defined tree, and, as a result, the so-called bubble of activity becomes radically different from the ones studied in the previous literature. The map learnt as a consequence of training with the TTO-SOM is able to determine both the distribution of the data and its structured topology, interpreted through the perspective of the user-defined tree. In addition, we have shown that the TTO-SOM reveals the ability to represent the original data set at multiple levels of granularity, and this was achieved without the necessity of computing the entire tree again. Lastly, we discussed the capability of the TTO-SOM to extract a skeleton, which is a "stick-like" representation of the image in a lower-dimensional space. These properties have been confirmed by numerous experiments on a diversity of data sets.
References

1. Astudillo, C.A., Oommen, B.J.: Unabridged version of this paper (2008)
2. Datta, A., Parui, S.M., Chaudhuri, B.B.: Skeletal shape extraction from dot patterns by self-organization. Pattern Recognition 4, 80–84 (1996)
3. Dittenbach, M., Merkl, D., Rauber, A.: The Growing Hierarchical Self-Organizing Map. In: Proc. of the International Joint Conference on Neural Networks (IJCNN 2000), Como, Italy, pp. 15–19 (2000)
4. Fritzke, B.: A growing neural gas network learns topologies. In: Advances in Neural Information Processing Systems, vol. 7, pp. 625–632. MIT Press, Cambridge (1995)
5. Fritzke, B.: Growing Grid - a self-organizing network with constant neighborhood range and adaptation strength. Neural Processing Letters 2, 9–13 (1995)
6. Guan, L.: Self-Organizing Trees and Forests: A Powerful Tool in Pattern Clustering and Recognition. In: Campilho, A., Kamel, M.S. (eds.) ICIAR 2006. LNCS, vol. 4141, pp. 1–14. Springer, Heidelberg (2006)
7. Kohonen, T.: Self-Organizing Maps. Springer, New York (2001)
8. Koikkalainen, P., Oja, E.: Self-organizing hierarchical feature maps. In: IJCNN International Joint Conference on Neural Networks, vol. 2, pp. 279–284 (1990)
9. Merkl, D., He, S., Dittenbach, M., Rauber, A.: Adaptive hierarchical incremental grid growing: An architecture for high-dimensional data visualization. In: Proceedings of the 4th Workshop on Self-Organizing Maps, Advances in Self-Organizing Maps, Kitakyushu, Japan, pp. 293–298 (2003)
10. Ogniewicz, O.L., Kübler, O.: Hierarchic Voronoi Skeletons. Pattern Recognition 28, 343–359 (1995)
11. Pakkanen, J.: The Evolving Tree, a new kind of self-organizing neural network. In: Proceedings of the Workshop on Self-Organizing Maps 2003, Kitakyushu, Japan, pp. 311–316 (2003)
12. Pakkanen, J., Iivarinen, J., Oja, E.: The Evolving Tree — A Novel Self-Organizing Network for Data Analysis. Neural Processing Letters 20, 199–211 (2004)
13. Rauber, A., Merkl, D., Dittenbach, M.: The Growing Hierarchical Self-Organizing Map: exploratory analysis of high-dimensional data. IEEE Transactions on Neural Networks 13, 1331–1341 (2002)
14. Singh, R., Cherkassky, V., Papanikolopoulos, N.: Self-Organizing Maps for the skeletonization of sparse shapes. IEEE Transactions on Neural Networks 11, 241–248 (2000)
A New Feature Extraction Method Based on the Partial Least Squares Algorithm and Its Applications

Pawel Blaszczyk¹ and Katarzyna Stapor²
¹ Institute of Mathematics, University of Silesia, [email protected]
² Institute of Computer Science, Silesian University of Technology, [email protected]
Summary. The aim of this paper is to present a new feature extraction method. Our method is an extension of the classical Partial Least Squares (PLS) algorithm. However, a new weighted separation criterion is applied, which is based on the within and between scatter matrices. In order to compare the classification performance, biological and spam datasets are used.
1 Introduction

Feature extraction and classification are the basic methods used to analyze and interpret microarray and spam data in microarray experiments and spam detection, respectively. The dataset coming from a microarray experiment contains vectors of features belonging to certain classes. For microarray experiments these vectors are interpreted as vectors of gene expressions, whereas in spam detection they are interpreted as vectors of the words in e-mails. These vectors are called samples. However, the number of samples is usually much smaller than the number of features. In this situation the small number of samples makes it impossible to estimate the classifier parameters properly and the classification results may, therefore, be inadequate. In the literature this phenomenon is known as the Curse of Dimensionality. In this case it is important to decrease the dimension of the feature space. This can be done either by feature selection or by feature extraction. Let us assume that we have an L-class classification problem and let (x_i, y_i) ∈ X × {C_1, ..., C_L}, x ∈ R^p, where the matrix of sample vectors X and the response matrix Y are given by the following formulas:

X = [ x_11 ... x_1p ]       Y = [ 1 0 ... ... 0 ]
    [ ...   ...  ... ],         [ ...        ... ]    (1)
    [ x_n1 ... x_np ]           [ 0 0 ...  0  1 ]
Each row of the matrix Y contains a 1 in the position denoting the class label. One of the commonly used feature extraction methods is the Partial Least
Squares (PLS) method (see [6], [3], [4]). PLS makes use of the least squares regression method [5] in the calculation of loadings, scores and regression coefficients. The idea behind the classic PLS is to optimize the following objective function:

(w_k, q_k) = arg max_{w^T w = 1; q^T q = 1} cov(X_{k−1} w, Y_{k−1} q)    (2)

under the conditions:

w_k^T w_k = q_k^T q_k = 1   for 1 ≤ k ≤ d,    (3)

t_k^T t_j = w_k^T X_{k−1}^T X_{j−1} w_j = 0   for k ≠ j,    (4)

where cov(X_{k−1} w, Y_{k−1} q) is the covariance between X_{k−1} w and Y_{k−1} q, the vector t_k is the k-th extracted component, w_k is the vector of weights for the k-th component, d denotes the number of extracted components, and X_k, Y_k arise from X_{k−1}, Y_{k−1} by removing the k-th component according to the following formulas:

X_{k+1} = X_k − t_k t_k^T X_k,    (5)

Y_{k+1} = Y_k − t_k t_k^T Y_k.    (6)

This is the so-called deflation technique. One can prove that the extracted vector w_k corresponds to the eigenvector connected with the largest eigenvalue of the following eigenproblem:

X_{k−1}^T Y_{k−1} Y_{k−1}^T X_{k−1} w_k = λ w_k.    (7)
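For concreteness, a minimal sketch of the classic PLS component extraction described by (2)-(7) is given below; this is our own illustration in Python/NumPy, not the authors' code. It solves the eigenproblem (7) for each component and applies the deflation steps (5)-(6).

```python
import numpy as np

def pls_components(X, Y, d):
    """Extracts d PLS weight vectors and components; X is n x p (centred),
    Y is the n x L class-indicator matrix."""
    Xk, Yk = X.copy(), Y.copy()
    W, T = [], []
    for _ in range(d):
        A = Xk.T @ Yk @ Yk.T @ Xk                 # matrix of the eigenproblem (7)
        eigvals, eigvecs = np.linalg.eigh(A)
        w = eigvecs[:, -1]                        # eigenvector of the largest eigenvalue
        t = Xk @ w
        t = t / np.linalg.norm(t)
        Xk = Xk - np.outer(t, t) @ Xk             # deflation, Eq. (5)
        Yk = Yk - np.outer(t, t) @ Yk             # deflation, Eq. (6)
        W.append(w)
        T.append(t)
    return np.column_stack(W), np.column_stack(T)
```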
Let SB denote the between-class scatter matrix and SW the within-class scatter matrix, respectively. They are given by:

SB = Σ_{i=1}^{L} p_i (M_i − M_0)(M_i − M_0)^T,    (8)

SW = Σ_{i=1}^{L} p_i E[(X − M_i)(X − M_i)^T | C_i] = Σ_{i=1}^{L} p_i S_i,    (9)

where S_i denotes the covariance matrix, p_i is the a priori probability of the appearance of the i-th class, M_i is the mean vector for the i-th class and M_0 is given by:

M_0 = Σ_{i=1}^{L} p_i M_i.    (10)
These matrices are often used to define separation criteria. By a separation criterion we mean a nonparametric function for evaluating and optimizing the separation between classes. For PLS, maximizing a separation criterion is
used to find such vectors of weights that provide an optimal separation between classes in the projected space. One can prove that for PLS the matrix

X_k^T Y_k Y_k^T X_k = Σ_{i=1}^{L} n_i^2 (M_i − M_0)(M_i − M_0)^T,    (11)
for which the eigenvectors are found in each k-th step, is almost identical with the between-class scatter matrix SB. These eigenvectors are used as the vectors of weights providing the appropriate separation. Hence we can say that the separation criterion in the PLS method is based only on the between scatter matrix. The disadvantage of the classic PLS method is that it does not give a proper separation between classes, particularly when the dataset is nonlinearly separable and the features are highly correlated. To provide a better separation between classes we propose a new weighted separation criterion. The new weighted separation criterion is used to design an extraction algorithm based on the classic PLS method. In the next section we present the methods used in this paper. In subsection 2.1 we introduce the new weighted separation criterion and the algorithm for estimating the parameter of the separation criterion. The linear and nonlinear versions of the new extraction algorithm are described in subsection 2.2. The decision rule used to classify samples into classes is presented in subsection 2.3. Section 3 focuses on the datasets as well as on the experimental scheme and the results obtained. In subsection 3.2 we propose a new spam model. The final conclusions are drawn in section 4.
2 Methods

2.1 The New Weighted Separation Criterion
Let us assume that we want to find a coefficient w which separates the classes best. The existing separation criteria described in the literature have some disadvantages. Some criteria cannot be applied if the within scatter matrix is singular due to a small number of samples. For others the computational cost is high. In practice there are situations in which the distance between classes is small. In this case it is more important to increase the distance between classes than to decrease the distance between samples within a class; hence the relative influence of the components describing the between and within scatters is important. In this paper we propose a new weighted separation criterion, which we call the Weighted Criterion of Difference Scatter Matrices (WCDSM). Our new criterion is denoted by:

J = tr(γSB − (1 − γ)SW),    (12)
where γ is a parameter and SB and SW are the between scatter matrix and the within scatter matrix, respectively. Applying a linear transformation, criterion (12) can be rewritten in the following form:
J(w) = tr(w^T (γSB − (1 − γ)SW) w),    (13)

which is more suitable for optimization. The next step is to optimize the following criterion:

max_{w_k} Σ_{k=1}^{d} w_k^T (γSB − (1 − γ)SW) w_k,    (14)

under the conditions:

w_k^T w_k = 1   for 1 ≤ k ≤ p.    (15)
The solution to this problem can be found with the use of the Lagrange multipliers method. To find the correct value of the parameter γ we used the following metric:

ρ(C1, C2) = min_{c1 ∈ C1, c2 ∈ C2} ρ(c1, c2),    (16)

where Ci is the i-th class, for i ∈ {1, 2}. The value of the parameter γ was chosen using the following formula:

γ = min_{i,j=1,...,L, i≠j} {ρ(Ci, Cj)} / (1 + min_{i,j=1,...,L, i≠j} {ρ(Ci, Cj)}).    (17)
Such a parameter γ equals 0 if and only if there exist classes i and j for which ρ(Ci, Cj) = 0. This means that at least one sample of class Ci coincides with a sample of class Cj. If the distance between classes increases, the value of γ also increases, and therefore the importance of the component SW becomes greater.
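A direct transcription of (16)-(17) into code is straightforward. In the sketch below (ours), the minimum is taken over all pairs of samples belonging to different classes; the Euclidean metric is an assumption, since the metric ρ between individual samples is not specified above.

```python
import numpy as np

def estimate_gamma(X, y):
    """gamma = m / (1 + m), where m is the smallest distance between two samples
    of different classes, cf. Eqs. (16)-(17)."""
    classes = np.unique(y)
    m = np.inf
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            A, B = X[y == ci], X[y == cj]
            d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
            m = min(m, d.min())
    return m / (1.0 + m)
```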
2.2 The New Extraction Method
In this section we apply the new weighted separation criterion to design a new extraction algorithm based on PLS. Let us recall that the vector of weights in PLS corresponds to the eigenvector connected with the largest eigenvalue of the eigenproblem (7), and that the matrix (11) is almost identical to the between-class scatter matrix. To improve the separation between classes we replace the matrix (11) with the matrix from our new separation criterion (13). The idea of the new extraction algorithm is to optimize the objective criterion

w_k = arg max_w { w^T (γSB − (1 − γ)SW) w },    (18)

under the following conditions:

w_k^T w_k = 1   for 1 ≤ k ≤ d,    (19)

t_k^T t_j = w_k^T X_{k−1}^T X_{j−1} w_j = 0   for k ≠ j.    (20)
We shall call this extraction algorithm Extraction by applying the Weighted Criterion of Difference Scatter Matrices (EWCDSM). One can prove that the extracted vector w_k corresponds to the eigenvector connected with the largest eigenvalue of the following eigenproblem:

(γSB − (1 − γ)SW) w = λw.    (21)

Also, the k-th component corresponds to the eigenvector connected with the largest eigenvalue of the following eigenproblem:

X_{k−1} X_{k−1}^T (D − (1 − γ)I) t = λt.    (22)
The matrix D = [D_j] is an n × n block-diagonal matrix, where D_j is a matrix in which all elements equal 1/(n n_j), and n_j is the number of samples in the j-th class. A proper feature extraction for nonlinearly separable data is difficult and could be inaccurate. Hence, for this problem we design a nonlinear version of our extraction algorithm. We use a nonlinear function Φ : x_i ∈ R^N → Φ(x_i) ∈ F which transforms the input vector into a new, higher dimensional feature space F. Our aim is to find an EWCDSM component in F. In F, the vectors w_k and t_k are given by the following formulas:

w_k = (D − (1 − γ)I) K_k w_k,    (23)

t_k = K_k w_k,    (24)

where K is the kernel matrix. One can prove that the extracted vector w_k corresponds to the eigenvector connected with the largest eigenvalue of the problem:

(D_k − (1 − γ)I) Φ_k Φ_k^T w_k = λ w_k.    (25)

Also, the k-th component corresponds to the eigenvector connected with the largest eigenvalue of the following eigenproblem:

K_{k−1} (D_{k−1} − (1 − γ)I) t = λt.    (26)
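The core of the linear EWCDSM can be sketched as an ordinary symmetric eigendecomposition. The fragment below is our own illustration: it estimates SB and SW as in (8)-(10) with the a priori probabilities replaced by class frequencies, and returns the top-d eigenvectors of γSB − (1 − γ)SW as in (21). The deflation/orthogonality conditions (19)-(20) and the kernel variant are omitted for brevity.

```python
import numpy as np

def scatter_matrices(X, y):
    """Between-class (SB) and within-class (SW) scatter matrices, Eqs. (8)-(10)."""
    n, p = X.shape
    M0 = X.mean(axis=0)
    SB = np.zeros((p, p))
    SW = np.zeros((p, p))
    for c in np.unique(y):
        Xc = X[y == c]
        pc = len(Xc) / n
        Mc = Xc.mean(axis=0)
        SB += pc * np.outer(Mc - M0, Mc - M0)
        SW += pc * np.cov(Xc, rowvar=False, bias=True)
    return SB, SW

def ewcdsm_weights(X, y, gamma, d):
    """Top-d eigenvectors of gamma*SB - (1-gamma)*SW, cf. the eigenproblem (21)."""
    SB, SW = scatter_matrices(X, y)
    eigvals, eigvecs = np.linalg.eigh(gamma * SB - (1.0 - gamma) * SW)
    return eigvecs[:, np.argsort(eigvals)[::-1][:d]]   # columns are w_1, ..., w_d
```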
2.3 Classification
Let us assume that X_train and X_test are the realizations of the matrix X for the train and test datasets, respectively. The idea of the training step is to extract the vectors of weights w_k and the components t_k using the train matrix X_train and to store them as columns of the matrices W and T, respectively. In order to classify samples into classes we use the train matrix X_train to compute the regression coefficients by using the least squares method [5], given by:

Q = W (P^T W)^{−1} U^T,    (27)

where

U = Y Y^T T (T^T T)^{−1},    (28)

W = X^T U,    (29)

P = X^T T (T^T T)^{−1}.    (30)
Then we multiply the test matrix X_test by the coefficients of the matrix Q. In order to classify the samples corresponding to the Y_test matrix, we use the decision rule:

y_i = arg max_{j=1,...,L} Y_test(i, j).    (31)

The final form of the response matrix is the following:

Y_test = [y_1 y_2 · · · y_L]^T.    (32)
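The sketch below (ours) transcribes Eqs. (27)-(31) literally into NumPy; it is intended only to make the order of the matrix operations explicit, and the shapes of the intermediate matrices should be checked against one's own data layout before use.

```python
import numpy as np

def regression_coefficients(X, Y, T):
    """Q of Eq. (27), computed from the train data X, the indicator matrix Y and
    the matrix of extracted components T via Eqs. (28)-(30)."""
    TtT_inv = np.linalg.inv(T.T @ T)
    U = Y @ Y.T @ T @ TtT_inv                  # Eq. (28)
    W = X.T @ U                                # Eq. (29)
    P = X.T @ T @ TtT_inv                      # Eq. (30)
    return W @ np.linalg.inv(P.T @ W) @ U.T    # Eq. (27)

def classify(X_test, Q):
    """Decision rule (31): the predicted label is the column with the largest response."""
    Y_test = X_test @ Q
    return Y_test.argmax(axis=1)
```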
3 Experiments

3.1 Dataset
We applied the new extraction method to commonly accessible biological datasets: Leukemia, Colon, Carcinoma, Breast and Diabetes. We compared our method with PLS on the basis of the Leukemia dataset, available at [10]. The dataset came from a study of gene expression levels in two types of acute leukemia: acute lymphoblastic and acute myeloid. The training and test datasets contain 38 and 34 samples with 7129 genes, respectively. The Colon dataset came from Alon and is available at [8]. This dataset contains two classes of expression levels for 2000 genes. The first class contains expression levels for 40 samples taken from the diseased part of the colon. The second class contains expression levels for 22 samples taken from the healthy part of the colon. The third dataset, Carcinoma, came from Notterman and Alon and is available at [8]. This dataset contains expression levels for 7457 genes for 2 classes of patients: the sick and the healthy ones. In this dataset there are 36 samples, of which 18 are taken from sick patients. The next dataset, Breast [7], contains 277 samples. Each sample is characterized by 9 features. The train dataset contains 200 samples and 58 of them belong to the first class. In the test dataset there are 23 samples belonging to the first class and 54 samples belonging to the second class. The last biological dataset, Diabetes [7], contains 788 samples which are characterized by 8 features. The train dataset contains 488 samples and 170 of them belong to the first class. In the test dataset there are 98 samples belonging to the first class and 202 belonging to the second class. The datasets described above constitute one of 100 sections. For the spam experiments we also used the Enron dataset, publicly available at [9]. This dataset contains e-mails from 6 Enron employees, divided into 6 subsets Enron1,...,Enron6.
3.2 Semantic Spam Model
Motivated by the EWCDSM algorithm we design a new spam model. Let us assume that a token is a sequence which can contain signs, digits, colons, semicolons, exclamation marks, apostrophes, commas and dashes. Moreover, let us assume that we have an n × p matrix X which we interpret as the collection of e-mails. Each element of X is given by the formula:
x_ij = ft_ij · log(n / df_j),
(33)
where ft_ij is the frequency of the j-th feature in the i-th document and df_j is the number of documents in which the j-th feature occurs. Let us also assume that we have a response matrix Y given by the following formula:
Y = [ 1 ... 1 0 ... 0 ; 0 ... 0 1 ... 1 ]^T
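The weighting of Eq. (33) is the classical tf-idf scheme; a small sketch (ours) of how the matrix X can be built from already tokenized e-mails:

```python
import math
from collections import Counter

def build_X(docs):
    # docs: list of token lists, one list per e-mail.
    # Implements eq. (33): x_ij = ft_ij * log(n / df_j).
    n = len(docs)
    vocab = sorted({t for d in docs for t in d})
    df = {t: sum(t in set(d) for d in docs) for t in vocab}   # document frequencies
    X = []
    for d in docs:
        ft = Counter(d)                                        # term frequencies in this e-mail
        X.append([ft[t] * math.log(n / df[t]) for t in vocab])
    return X, vocab
```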
We use the EWCDSM to extract the d components. We shall call the matrix of components T and the response matrix Y the Semantic Spam Model (SSM).

3.3 Experimental Scheme and Results
In order to examine the classification performance of EWCDSM in microarray experiments, we use the following experimental scheme. First, we randomly divide each dataset into validation, train and test sets. Then each dataset is normalized. We use the jackknife method [2] and the validation dataset to find the proper value of the parameter d, which denotes the number of extracted components. Classification performance is computed by dividing the number of properly classified samples by the total number of samples. This rate is known as the standard error rate [2]. Appropriate values of the parameter d are 10 for Leukemia and 5 for the Colon and Carcinoma datasets. For Breast and Diabetes, we used the nonlinear version of EWCDSM with the Gaussian kernel. The parameters for the Breast dataset were tuned at C = 150 and σ = 50. For the Diabetes dataset, these parameters were tuned at C = 100 and σ = 20. The results for all microarray datasets are presented in Table 1.

Table 1. Classification performance (per cent) of biological datasets

         Leukemia  Colon   Carcinoma  Breast  Diabetes
EWCDSM   95,65     100,00  83,33      81,82   79,33
PLS      86,96     61,90   50,00      74,90   77,00

For the spam datasets we use the following experimental scheme. First, from each dataset we randomly choose 100 spam and legitimate e-mails. Next we carry out the tokenization and stemming processes. Then we randomly divide each dataset into train, validation and test sets. The value of the parameter d is estimated on the basis of the validation dataset. The relevant values of the parameter d are 10, 40, 40, 30, 20, 20 for Enron1, ..., Enron6, respectively. Finally, we create the SSM models and test them. The results obtained are presented in Table 2.

Table 2. Classification performance (per cent), numbers of tokens for spam datasets

                   Enron1  Enron2  Enron3  Enron4  Enron5  Enron6
performance        98,51   97,01   89,55   94,03   79,10   92,54
numbers of tokens  3564    5553    4978    5553    5195    5032
4 Conclusions

We have introduced a new linear and nonlinear version of an algorithm for feature extraction. Our algorithm uses a new weighted separation criterion to find the weights vector which makes the scatter between the classes maximal and the scatter within the class minimal. Comparing the new criterion with other well-known ones, it can be seen that the new one can be used in situations where the number of samples is small, and the costs of computation are lowered. The new extraction algorithm can distinguish between normal and tumor samples for five different biological microarray datasets and between spam and legitimate mails for six other datasets. Moreover, we have shown that the classification performance of the proposed algorithm was significantly higher than that of the classical PLS method. The presented method performs well in solving classification problems. However, to draw more general conclusions, further experiments with the use of other biological datasets are necessary.
References 1. Blaszczyk, P.: Data classification by using modified partial least squares method. PhD thesis. Silesian University of Technology (2008) 2. Duda, R., Hart, P.: Pattern Classification. John Wiley & Sons, New York (2000) 3. Garthwaite, P.H.: An interpretation of Partial Least Squares. Journal of the American Statistical Association 89, 122 (1994) 4. H¨ oskuldsson, A.: PLS Regression methods. Journal of Chemometrics 2, 211–228 (1988) 5. Gren, J.: Mathematical Statistic. PWN, Warsow (1987) 6. Wold, H.: Soft Modeling by Latent Variables: The Non-Linear Iterative Partial Least Squares (NIPALS) Approach. In: Perspectives in Probability and Statistics, pp. 117–142. Papers in Honour of M. S. Bartlett (1975) 7. http://ida.first.fhg.de/projects/bench/benchmarks.htm 8. http://microarray.princeton.edu/oncology 9. http://archive.ics.uci.edu/ml/datasets/Spambase 10. http://www.broad.mit.edu
Data Noise Reduction in Neuro-fuzzy Systems

Krzysztof Simiński

Institute of Informatics, Silesian University of Technology, ul. Akademicka 16, 44-100 Gliwice, Poland
[email protected]
Summary. Some datasets require highly complicated fuzzy models for the best knowledge generalisation ability. Complicated models mean poor intelligibility for humans and more calculations for machines. Thus the models are often artificially simplified to preserve interpretability. The simplified models (with fewer rules) have a higher error for knowledge generalisation. In order to improve knowledge generalisation in simplified models it is convenient to reduce the complexity of the data by reducing the noise in the datasets. The paper presents an algorithm for noise removal based on a modified discrete convolution in the crisp domain. The experiments reveal that the algorithm can improve the generalisation ability of simplified models of highly complicated data.
1 Introduction

Neuro-fuzzy systems have the ability to produce models from presented data. The model is composed of fuzzy rules. The number of rules determines the complexity of the model. The more complex the model, the more accurately it fits the presented data. It is commonly known in data mining that perfect fitting to the presented data may lead to deterioration of the generalisation ability and poorer results for unseen data. The eagerness for better data approximation (DA) may lead to poorer knowledge generalisation (KG). For some datasets the best model (with minimal KG error) is quite complex. Often such models have more than 40 rules. This makes them practically illegible – the interpretation of such a model by a human is de facto impossible. Thus many researchers, for the sake of intelligibility, cut down the number of rules. The DA and KG errors grow, but the model is simpler for humans to interpret and for machines to calculate. Such data need a more complicated model to better reflect their features. The idea arises to lower the complexity of the data by reduction of noise. This idea is presented in the paper. Data noise reduction for neuro-fuzzy systems is not widely discussed in papers. The reduction of data noise in time series has been successfully applied [5]. But a more general approach, for data not constituting a time series, is very difficult to find in the literature. The paper [2] proposes an algorithm for
removal of data noise. The authors apply their algorithm to image processing but they claim their approach is not restricted to image processing nor 2D datasets. The article [15] divides the data noise removal into two essential classes: (1) cleaning at the data collection stage that focuses on detecting and removing errors due to imperfect data collection process; (2) cleaning at the data analysis stage. The latter class splits into few subclasses as (a) noise removal based on outlier detection, what involves distance-based, density-based, clustering-based outlier detection and (b) hypercube approach. The problem in data noise reduction is the unknown model of data (this model the system is supposed to build) and the unknown model of noise. The noise character is unknown, we assume the data noise is a Gaussian one. The paper is organised as follows. Section 2 presents the data noise reduction approaches, section 3 presents neuro-fuzzy system with parametrized consequences. Two classes of the system are described differing in input domain partition: clustering and hierarchical partition. Finally section 4 presents the experiments.
2 Data Noise Reduction

The task of data noise reduction is an ill-defined one. Intuitively it is clear that noise (an unwanted component of the data set) should be removed, but it is not easy to formulate a formal definition [5]. The approach presented in this paper is inspired by the discrete convolution based reduction of noise in image processing and by subtractive clustering (by Chiu in [4]). For image noise reduction the discrete convolution in the form of a mask is used. The mask in image processing assigns a weight to each mask pixel and calculates the weighted sum of the pixel values in question. The pixels that do not belong to the mask have null weight. This approach cannot be applied in our task for the following reasons:

• The input data do not form a grid, nor are they distributed evenly in the input domain as pixels in a picture.
• The discrete convolution in the form of a mask is applied in the crisp domain. Here an algorithm using the fuzzy approach would be preferred.
• The input space may have more than two dimensions.
In the proposed approach the weight for each pair of objects is calculated as the reciprocal of an exponential function of the distance between the data examples. The new value of the predicted attribute in a data tuple (object) is the weighted average of all data values. The algorithm is presented in pseudo-code in Fig. 1. The algorithm uses a denoise coefficient: the higher the value of the coefficient, the smaller the influence of distant examples. The complexity of the algorithm is O(n²).
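For readability, a short Python transcription of this weighting scheme follows (ours, not the author's code). It mirrors the pseudo-code of Fig. 1; where the pseudo-code builds the diagonal matrix from the variances of all columns of [data y], we use only the input-attribute variances so that the dimensions match, and the diagonal of the weight matrix is left at zero exactly as the loops in Fig. 1 leave it.

```python
import numpy as np

def denoise(data, y, coeff):
    # data: (lp, la) input attributes, y: (lp,) predicted attribute, coeff: denoise coefficient
    s = np.var(np.column_stack([data, y]), axis=0)   # variances of all attributes and of y
    sm = np.mean(s) / coeff                          # larger coeff -> narrower neighbourhood
    R = np.diag(s[:-1])                              # diagonal matrix of input-attribute variances
    lp = len(y)
    d = np.zeros((lp, lp))
    for i in range(lp):
        for j in range(i + 1, lp):
            r = data[i] - data[j]                    # difference between examples
            eto = r @ R @ r                          # variance-weighted squared distance
            d[i, j] = d[j, i] = np.exp(-eto / sm ** 2)
    yd = d @ y / d.sum(axis=1)                       # weighted average of the predicted attribute
    return data, yd
```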
procedure data = denoise (data, y, la, lp, coeff)
  {la – number of attributes}
  {lp – number of examples}
  {coeff – denoise coefficient}
  {s = [s1, ..., sla, sy] – variance of attributes}
  s = var([data y])
  sm = mean(s) / coeff
  rsdiag = diag(s)                               {diagonal matrix}
  for i = 1 to lp                                {O(n^2)}
    for j = i + 1 to lp
      r = data[i] − data[j]                      {vector difference between examples}
      eto = r^T · rsdiag · r                     {exponent: squared Cartesian distance}
      d[i][j] = d[j][i] = exp(−eto / sm^2)
    end for
  end for
  for i = 1 to lp                                {O(n^2)}
    yd[i] = ( Σ_{j=1..lp} d[i][j] · y[j] ) / ( Σ_{j=1..lp} d[i][j] )    {data denoising}
  end for
  return [data yd]                               {data denoised}
end procedure
Fig. 1. Denoising algorithm
3 Neuro-fuzzy System with Parametrized Consequences

The system with parametrized consequences is a MISO system. The rule base contains fuzzy rules in the form of fuzzy implications:
R(i): X is A(i) ⇒ Y is B(i)(θ),
(1)
where X = [x1, x2, ..., xN]^T and Y are linguistic variables, A and B are fuzzy linguistic terms (values) and θ is the parameter vector of the consequence linguistic term. The linguistic variable Ai (A for the ith attribute) is described with the Gaussian membership function:
μ_{Ai}(xi) = exp( −(xi − ci)^2 / (2 si^2) ),
(2)
where ci is the core location for the ith attribute and si is this attribute's Gaussian bell deviation. Each region in the domain is represented by a linguistic variable A. The term B is represented by an isosceles triangle with the base width w; the altitude of the triangle in question is equal to the firing strength of the ith rule:
F^(i)(X) = μ_{A^(i)}(X) = μ_{A1^(i)}(x1) T · · · T μ_{AN^(i)}(xN),
(3)
where T denotes the T-norm and N stands for number of attributes. The localisation of the core of the triangle membership function is determined by linear combination of input attribute values:
y^(i) = θi^T · [1, X^T]^T = [a0^(i), a1^(i), ..., aN^(i)] · [1, x1, ..., xN]^T.
(4)
The fuzzy output of the system can be written as
μB(y, X) = ⊕_{i=1}^{I} Ψ( μ_{A^(i)}(X), μ_{B^(i)}(y, X) ),
(5)
where ⊕ denotes the aggregation, Ψ – the fuzzy implication, i – the rule's index and I – the number of rules. The crisp output of the system is calculated using the MICOG method:
y = ( Σ_{i=1}^{I} g^(i)(X) y^(i)(X) ) / ( Σ_{i=1}^{I} g^(i)(X) ),
(6)
where y^(i)(X) stands for the location of the core of the consequent fuzzy set, F^(i) – the firing strength of the ith rule, and w^(i) – the width of the base of the isosceles triangle consequence function of the ith rule. The function g depends on the fuzzy implication; in this system the Reichenbach implication is used, so for the ith rule the function g is
g^(i)(X) = (w^(i) / 2) · F^(i)(X).
(7)
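As an illustration, a compact sketch (ours) of how the crisp output of such a system can be computed from Eqs. (2)–(7). The product is assumed as the T-norm (np.min could be passed instead), and all parameter names are ours:

```python
import numpy as np

def nfs_output(x, centers, sigmas, widths, thetas, tnorm=np.prod):
    # x: (N,) input; centers, sigmas: (I, N); widths w: (I,); thetas: (I, N+1)
    # Firing strengths, eq. (3): T-norm of the Gaussian memberships of eq. (2)
    mu = np.exp(-((x - centers) ** 2) / (2.0 * sigmas ** 2))   # (I, N)
    F = tnorm(mu, axis=1)                                      # (I,)
    # Core locations of the consequences, eq. (4): linear in the attributes
    y_core = thetas @ np.concatenate(([1.0], x))               # (I,)
    # MICOG crisp output, eqs. (6)-(7) with g = (w/2) * F (Reichenbach implication)
    g = 0.5 * widths * F
    return float(np.sum(g * y_core) / np.sum(g))
```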
The fuzzy system with parameterized consequences [3, 6, 7] is the system combining the Mamdani-Assilan [8] and Takagi-Sugeno-Kang [13,12] approach. The fuzzy sets in consequences are isosceles triangles (as in Mamdami-Assilan system), but are not fixed – their location is calculated as a linear combination of attribute values as the localisation of singletons in Takagi-Sugeno-Kang system. The figure 2 shows the fuzzy inference system with parametrized consequences for two-attribute objects and two fuzzy rules. For each rule the Gaussian membership values for attributes are determined. These values are then used to calculate the firing strength of the rule for this object. The membership values for attributes are T-normed (in the figure it is a minimum T-norm) in order to determine the rule’s firing strength. This value becomes then the height of the isosceles triangle in the rule’s consequent. The values of the object’s attributes are used to calculate the localisation of the triangle fuzzy set core. In the Fig. 2 these are y(1) for the first rule and y (2) for the second one. The values of the triangle set bases are tuned in system tuning procedure and are not further modified during elaborating the system’s response. The fuzzy sets of all consequences are aggregated and the crisp output is determined with MICOG procedure (eq. 6). The neuro-fuzzy systems are able to tune the parameters of the model. In this system two different methods are used for tuning. The parameters of
Fig. 2. The scheme of the fuzzy inference system with parametrized consequences for two rules and two attributes
the premises (c and s in Eq. 2) and the values of the support w of the sets in consequences are tuned with gradient method. The linear coefficients for the calculation of the localisation of the consequence sets are optimised with LSME iterative algorithm. The neuro-fuzzy systems differ in the method of rule extraction. The common one is the clustering. The data examples are clustered with FCM algorithm (what gives the division of the input space into regions – the premises of the rules) and then the premises of the rules are tuned with gradient method. The other way of input domain partition is its hierarchical split [11, 1, 9]. First one region is created, it is tuned (in the same way as depicted above), then the worst region is selected. The worst region is the region with the highest contribution to the total error elaborated by the system. This region is split with fuzzy C-means clustering procedure into two subregions. All regions are tuned and the next iteration starts. This approach implements the HS-47 system proposed in [11]. Sometimes it is better not to split the worst region, but to leave the existing regions untouched and add a new region (patch) for the data examples with the highest error. This idea is applied in the HS-65 system, where adding of the new region (rule) is done either by splitting the worst region as in HS-47 or by adding a new region (a patch region). The patch is a Gaussian bell calculated for
all examples, with the predicted attribute substituted with the normalised system error for each example. The error is normalised to the interval [0, 1].
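One possible reading of this patch construction is sketched below (ours – the paper does not give the exact formulas, so the error-weighted fit is an assumption): the new premise is a Gaussian region whose centres and spreads are estimated from the examples weighted by their normalised errors.

```python
import numpy as np

def make_patch_region(X, errors):
    # X: (m, N) examples, errors: (m,) absolute system errors for these examples
    e = (errors - errors.min()) / (errors.max() - errors.min() + 1e-12)   # normalise to [0, 1]
    w = e / (e.sum() + 1e-12)                                             # weights from the errors
    c = (w[:, None] * X).sum(axis=0)                                      # weighted centres c_i
    s = np.sqrt((w[:, None] * (X - c) ** 2).sum(axis=0)) + 1e-12          # weighted spreads s_i
    return c, s   # parameters of the patch rule's Gaussian premise (cf. Eq. (2))
```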
4 Experiments

The experiments testing the effectiveness of the presented method were conducted on two datasets that require complicated models (with more than 30 rules) for the best knowledge generalisation. The datasets are:
1. NewSinCos dataset – a synthetic two-input, one-output dataset. One hundred points from the range x, y ∈ [0, 1] were randomly selected. These points created the tuples (x, y, z), where z = 5 + x + y + sin 5x − cos 5y.
(8)
All tuples where divided into 10-fold cross-validation sets. 2. Hang dataset. The dataset is used in [10]. It is a synthetic dataset with two input and one output value calculated with formula 2 z = 1 + x−2 + y −1.5 , (9) where x and y are evenly distributed grid points from interval [1, 5]. There were prepared 100 point pairs. The experiments were conducted as 10-fold cross-validation. The tables 1 and 2 present the results for the dataset for ANNBFIS and two neuro-fuzzy systems with hierarchical input domain partition HS-47 and HS-65 with data noise removal and without one. The optimal models (with the lowest generalisation error) for the datasets in question have more than Table 1. The results elaborated by 10-fold cross-validation for the Hang dataset. The last row presents the mean values of results. ANNBFIS ANNBFIS RSME dncoeff RSME 0.082599 0.083361 0.098852 0.089380 0.089990 0.099293 0.063718 0.061759 0.073474 0.256514 0.099894
20 10 200 20 50 10 10 500 200 10
0.070915 0.083364 0.089181 0.089303 0.089686 0.073230 0.056346 0.061514 0.050410 0.095706
HS-47 HS-47 RSME dncoeff RSME 0.017467 0.036993 0.048746 0.164113 0.046801 0.042741 0.019733 0.022229 0.037441 0.105427
0.075966 0,054169
500 5 10 10 10 50 5 5 5 10
0.017029 0.026088 0.039788 0.017206 0.026555 0.018938 0.018227 0.022130 0.022130 0.105378
HS-65 HS-65 RSME dncoeff RSME 0.023859 0.018312 0.030323 0.022939 0.036942 0.026608 0.025664 0.013409 0.027542 0.104205
0.031347 0.032980
100 500 100 10 500 100 25 10 17 17
0.016749 0.014907 0.018323 0.019339 0.014085 0.032827 0.013120 0.031737 0.012445 0.009997 0.026291
Table 2. The results elaborated by 10-fold cross-validation for the NewSinCos dataset. The last row presents the mean values of results. ANNBFIS ANNBFIS RSME dncoeff RSME 0.035441 0.009875 0.019227 0.003585 0.004250 0.031313 0.008697 0.007317 0.005993 0.012381
100 100 500 200 500 100 200 200 250 500
0.013808
0.014913 0.009867 0.018600 0.003496 0.001734 0.029255 0.003972 0.005388 0.004891 0.008291
HS-47 HS-47 RSME dncoeff RSME 0.006554 0.015469 0.007479 0.003548 0.000544 0.003175 0.001010 0.000740 0.001127 0.002187
0.010041 0.004183
500 10 500 50 100 500 50 20 200 20
0.005517 0.003333 0.007478 0.001546 0.000505 0.003171 0.000852 0.000447 0.001092 0.002291
HS-65 HS-65 RSME dncoeff RSME 0.001567 0.001751 0.001427 0.000930 0.001316 0.005763 0.000657 0.002220 0.003199 0.004950
0.002623 0.002378
50 150 100 150 150 250 100 200 100 50
0.001161 0.000714 0.001095 0.000465 0.000857 0.001954 0.000559 0.000728 0.001824 0.003355 0.001271
30 rules. The number of rules has been cut down to ten to make the models more interpretable. The significance of the elaborated results was tested with the Wilcoxon rank-sum test for two independent samples [14]. On the significance level 1 − α = 0.95 there is a statistically significant difference in the medians of the results obtained without and with data noise removal.
5 Summary The noise present in data may lead to generation of very complicated model to achieve optimal generalisation ability. The complicated model mean poor intelligibility for humans and more calculations for machines. Thus often the model are artificially simplified to save the interpretability. The simplified model (with less rules) have higher error for knowledge generalisation. In order to reduce knowledge generalisation error in simplified models it is convenient to remove noise from the data set. The paper presents the algorithm for noise removal based on modified discrete convolution is crisp domain. The experiments reveal that the algorithm can improve the generalisation ability for simplified model of highly complicated data.
References 1. Roberto, M., Almeida, A.: Sistema h´ıbrido neuro-fuzzy-gen´etico para minera¸ca ˜o autom´ atica de dados. Master’s thesis, Pontif´ıca Universidade Cat´ olica do Rio de Janeiro (2004) 2. Anastasio, M.A., Pan, X., Kao, C.-M.: A general technique for smoothing multidimensional datasets utilizing orthogonal expansions and lower dimensional smoothers. In: Proceedings of International Conference on Image Processing, ICIP 1998, October 1998, vol. 2, pp. 718–721 (1998)
3. Czogala, E., L ¸eski, J.: Fuzzy and Neuro-Fuzzy Intelligent Systems. Series in Fuzziness and Soft Computing. Physica-Verlag, A Springer-Verlag Company (2000) 4. Jang, J.S.R., Sun, C.T., Mizutani, E.: Neuro-Fuzzy and Soft Computing. Matlab Curriculum Series. Prentice Hall, Englewood Cliffs (1997) 5. Kantz, H.: Noise reduction for experimental data 6. L ¸eski, J., Czogala, E.: A new artificial neural network based fuzzy inference system with moving consequents in if-then rules and selected applications. Busefal 71, 72–81 (1997) 7. L ¸eski, J., Czogala, E.: A new artificial neural network based fuzzy inference system with moving consequents in if-then rules and selected applications. Fuzzy Sets and Systems 108(3), 289–297 (1999) 8. Mamdani, E.H., Assilian, S.: An experiment in linguistic synthesis with a fuzzy logic controller. International Journal of Man-Machine Studies 7(1), 1–13 (1975) 9. Nelles, O., Isermann, R.: Basis function networks for interpolation of local linear models. In: Proceedings of the 35th IEEE Conference on Decision and Control, vol. 1, pp. 470–475 (1996) 10. Rutkowski, L., Cpalka, K.: Flexible neuro-fuzzy systems. IEEE Transactions on Neural Networks 14(3), 554–574 (2003) 11. Simi´ nski, K.: Neuro-fuzzy system with hierarchical partition of input domain. Studia Informatica 29(4A (80)) (2008) 12. Sugeno, M., Kang, G.T.: Structure identification of fuzzy model. Fuzzy Sets Syst. 28(1), 15–33 (1988) 13. Takagi, T., Sugeno, M.: Fuzzy identification of systems and its application to modeling and control. IEEE Trans. Systems, Man and Cybernetics 15(1), 116– 132 (1985) 14. Wilcoxon, F.: Individual comparisons by ranking methods. Biometrics Bulletin 1(6), 80–83 (1945) 15. Xiong, H., Pandey, G., Steinbach, M., Kumar, V.: Enhancing data analysis with noise removal. IEEE Transactions on Knowledge and Data Engineering 18(3), 304–319 (2006)
Enhanced Density Based Algorithm for Clustering Large Datasets

Yasser El-Sonbaty and Hany Said

Arab Academy for Science & Technology, Egypt
[email protected], [email protected]
Summary. Clustering is one of the data mining techniques that extracts knowledge from spatial datasets. DBSCAN algorithm was considered as well-founded algorithm as it discovers clusters in different shapes and handles noise effectively. There are several algorithms that improve DBSCAN as fast hybrid density algorithm (LDBSCAN) and fast density-based clustering algorithm. In this paper, an enhanced algorithm is proposed that improves fast density-based clustering algorithm in the ability to discover clusters with different densities and clustering large datasets.
1 Introduction

Data Mining is the science of extracting hidden and interesting patterns from large datasets. Clustering is one of the most significant techniques used for discovering important knowledge in patterns. The basic objective of clustering is to partition the given dataset into clusters so that objects within a cluster have high similarity to one another and are very dissimilar to objects in other clusters. There are several paradigms for clustering, such as the partitioning, hierarchical, grid-based and density-based paradigms. The Compose Density Between and Within Clusters method, CDbw [1], is one of several clustering validity methods used for measuring the compactness and separation degree of a clustering scheme. It has the ability to validate the quality of a clustering scheme that consists of several representative points in each cluster. The DBSCAN algorithm [2] discovers clusters of arbitrary shapes. DBSCAN requires two input parameters, the radius distance Eps and the minimum density MinPts. It retrieves the Eps-neighborhood of an unprocessed object P (it applies a region query using object P). If the Eps-neighborhood of object P contains at least MinPts objects, then a new cluster with P as a core object is created. It recursively applies region queries on P's neighborhood objects in order to expand the current cluster. These neighbors are directly density-reachable from object P. The algorithm terminates when no new objects can be added to any cluster. Although DBSCAN has the ability to discover clusters of different shapes, it
has several drawbacks such as, it cannot discover clusters of different densities because it uses single parameter setting (Eps and MinPts) and it is not suitable to cluster large dataset that does not fit in limited memory because it has to load the entire dataset before executing region query on object. Several algorithms enhance DBSCAN algorithm such as fast density-based clustering algorithm [3], fast hybrid density algorithm (L-DBSCAN) [4] and efficient density based clustering algorithm [5]. In this paper, a new algorithm is introduced that enhances fast densitybased clustering algorithm in the ability to discover clusters with different densities and clustering large datasets. First, the proposed algorithm derives the prototypes (leaders) using leader approach which performs single scan over the given dataset. After that, each leader will be replaced by a medoid object (the most centrally located object). Then, the dataset is distributed over these new leaders (medoids) in order to update the density map which will be used to discover clusters of different densities by following the density connectivity definition that presented in DBSCAN algorithm.
2 Related Works

In this section the related algorithms – the fast density-based clustering algorithm, the L-DBSCAN algorithm and the CDbw validity index – are discussed.

2.1 Fast Density-Based Clustering Algorithm
Fast density-based clustering algorithm [3] sorts the entire dataset according to one feature then sequentially applies region query on each unprocessed object in the sorted list. If a core object P is found then it searches its neighbors for any labeled ones as these labeled objects will lead to apply region queries on them to decide whether to merge two clusters or not. When each neighbor of object P is unprocessed then object P with its neighbors will be labeled by a new cluster ID. The algorithm accelerates DBSCAN algorithm by decreasing the number of region queries required to cluster the given dataset using density connectivity definition. 2.2
L-DBSCAN: Fast Hybrid Density Based Clustering Algorithm
L-DBSCAN algorithm [4] needs four input parameters, coarse distance τ c and fine distance τ f in addition to Eps and MinPts. L-DBSCAN algorithm uses the concept of leader approach which extracts the prototypes with their neighbors (leaders record) for the given dataset. Each leader record contains the leader with its neighborhood that located at distance equal or less than distance threshold τ . Leader approach processes as follows, it maintains a set of leader L, which is initialized to be empty and is incrementally built. For each pattern x in dataset D, if there is a leader l such that distance between x and l is equal or less than distance threshold τ , then x is assigned
to the cluster represented by l; otherwise x becomes a new leader. The first phase of the algorithm is to construct clusters at the coarse distance τc and internally construct sub-clusters inside each cluster using the fine distance τf. The second phase is to adapt the clusters using Eps and MinPts. L-DBSCAN accelerates DBSCAN by constructing clusters through merging leaders based on the distances between them.
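The leader scan described above is simple enough to sketch directly (a minimal version, ours; dist is any distance function):

```python
def leader_clustering(data, tau, dist):
    # Single-scan leader approach: each pattern joins the first leader
    # closer than tau, otherwise it becomes a new leader itself.
    leaders = []                       # list of (leader, followers) records
    for x in data:
        for record in leaders:
            if dist(x, record[0]) <= tau:
                record[1].append(x)
                break
        else:
            leaders.append((x, [x]))   # no leader within tau: x starts a new record
    return leaders
```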
2.3 CDbw: Compose Density between and within Clusters Validity Index
CDbw validity index [1] measures the average density within clusters Intra dens(c) with the separation of clusters Sep(c), where c is the number of clusters discovered in the dataset. CDbw is defined by: CDbw = Intra Dens(c) × Sep(c)
(1)
Where Intra Dens(c) is the percentage of the neighborhood that belong to the representative objects in clusters. Sep(c) is the separation within clusters taking into account the distances between closest clusters. CDbw increases when clusters are highly dense, well separated and the densities in the areas among them are low.
3 Proposed Algorithm

The proposed algorithm tries to achieve the following goals:
• Discovers clusters of different densities.
• Efficiently clusters large datasets.
• Obtains a good clustering scheme.
The main stages of the proposed algorithm are described in Fig. 1 and will be discussed in detail in the next subsections.

3.1 Find Density Map Using Leader Approach
Leader approach is applied on the given dataset as it performs single scan over the dataset in order to extract the prototypes (leaders) forming leaders record
Fig. 1. The stages of the proposed algorithm
Fig. 2. (a) Synthetic dataset and (b) Shows the sorted K -dist graph
(density map). Each leader record contains the leader with its neighborhood, i.e. the objects located at distances equal to or less than the distance threshold τ. The leader approach requires the distance threshold τ in advance. The distance threshold τ is determined using the K nearest neighbors (KNN). It is applied on multiple samples of objects with a suitable sample size [6]. For a given K, a function K-dist is defined, mapping each point in the sample to the distance from its K-th nearest neighbor. After that, the points of the sample are sorted in descending order according to their K-dist values, forming the sorted K-dist graph [2]. The distance threshold for a sample is K-dist(P), where P is the point in the first valley of the sorted K-dist graph. The distance threshold τ is the average of the distances obtained from all samples. After that, the leader approach is applied over the given dataset in order to obtain the prototypes with their neighbors (the initial density map). Fig. 2-b shows the K-dist graph (K = 4) for the dataset shown in Fig. 2-a. The arrow refers to the point (50), and the distance threshold for this sample equals 2.9063 (4-dist(50)).
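A small sketch of this threshold selection (ours). The detection of the "first valley" is approximated here by the point of maximum curvature of the sorted graph, which is an assumption – the paper reads the valley off the plot:

```python
import numpy as np

def distance_threshold(sample, k=4):
    # sample: (n, dims) array of points drawn from the dataset
    d = np.linalg.norm(sample[:, None, :] - sample[None, :, :], axis=2)
    kdist = np.sort(d, axis=1)[:, k]        # k-th nearest neighbour (column 0 is the point itself)
    g = np.sort(kdist)[::-1]                # sorted k-dist graph, descending
    curv = np.diff(g, 2)                    # discrete second difference as a curvature proxy
    valley = np.argmax(curv) + 1            # crude "first valley" estimate
    return g[valley]                        # threshold for this sample; tau = mean over samples
```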
3.2 Update Density Map
The leader approach has two drawbacks, it may consider an outlier as a leader and some objects are not associated to their nearest leaders. These drawbacks are fixed by updating the density map using two steps; first, a medoid object is extracted from each leader record. Secondly, the leaders are inserted into RTree and the dataset is distributed on them using τ as a distance threshold. After that, the leaders that have number of neighbors less than K are deleted and these objects are considered as noise. Then, the leaders record are sorted (according to their leader’s densities) in descending order; the first leader has the maximum number of associated neighbors and will be referred later as reference (Ref ). 3.3
Obtaining Clusters Using Modified Fast DBSCAN
In fast density-based clustering algorithm, it uses the following definitions: Definition 1: let P be an object in dataset and |NEps (P) | ≥ MinPts then all objects contained in the neighborhood of P are directly density reachable, P and all its neighborhood belong to the same cluster as object P.
Fig. 3. (a) Density connectivity, (b) Cluster i expansion and (c) Normal distribution curve
Definition 2: let P and Q be two objects, |NEps (P) | ≥ MinPts and |NEps (Q) | ≥ MinPts, if there is a core object O within the intersection of the two objects’ neighborhood, then all objects in the neighborhood of P and Q are density connected as demonstrated in Fig. 3-a. A modified fast DBSCAN is applied such that each cluster has its own parameters setting (Eps and MinPts), thus the proposed algorithm has the ability to discover clusters of different densities. Each cluster consists of several leaders as their boundaries are intersected with each other. Cluster i will be expanded using leader P (initiated by leader P ) using Epsi and MinPtsi as its parameters setting. It seeks only for other leaders located within a distance less than twice Epsi from it as in Fig. 3-b. It applies region queries on objects in the intersection area between the leaders’ boundary P and Q. When it finds object O as a core object [NEpsi (O) ≥ MinPtsi ], then leader Q is density connected to leader P and consequently the leader Q will be assigned to cluster i. The search space for the previous region query contains objects that belong to the cluster i in addition to objects associated to leader Q. The MinPtsi of cluster i is determined by the following equation: M inP tsi = 0.33 × Density(P )
(2)
where Density(P) is the size of the Epsi-neighborhood of leader P. In a normally distributed dataset, as in Fig. 3-c, 70% of the data lie within one standard deviation of the mean μ, so a cluster could have leaders with a density ratio equal to 0.33. The Epsi of cluster i is determined by:
EPSi = τ × ( 2 − (Density(P) − K) / (Density(Ref) − K) )
(3)
where τ is the distance threshold used previously in extracting the prototypes, P is the leader that initiates cluster i and K is the minimum number of neighbors associated with a leader. The cluster should start from a leader that has a density greater than Min_Density, which is defined by:
Min_Density = Min_Query_Starter × Density(Ref)
(4)
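Putting Eqs. (2)–(4) together, a small helper (ours; Eq. (3) is used in the reconstructed form given above):

```python
def cluster_parameters(density_p, density_ref, tau, k, min_query_starter=0.33):
    # Eq. (2): a cluster takes a third of its starting leader's density as MinPts
    min_pts = 0.33 * density_p
    # Eq. (3): sparse starting leaders get a radius near 2*tau, dense ones near tau
    eps = tau * (2 - (density_p - k) / (density_ref - k))
    # Eq. (4): leaders below this density are not allowed to start a cluster
    min_density = min_query_starter * density_ref
    return eps, min_pts, min_density
```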
When a cluster starts by a leader of density K (worst case) then it is expected that this cluster is considered as a group of noise objects. Therefore, there is a Min Query Starter which is a ratio of the Ref ’s density. The representative objects of a cluster are, the leaders which belong to that cluster and the core objects which have been discovered according to that cluster parameters setting. The remaining objects are labeled using their leaders ID.
4 Experimental Results Two measures are used in order to compare the performance of the proposed algorithm against the other algorithms. The first measure is the SpeedUp-Ratio; it is the running time ratio of the compared algorithm and the proposed algorithm. The second measure is the Validity-Index-Ratio; it is the CDbw ratio between the presented algorithm and the compared algorithm. Proposal algorithms needs two input parameters, K and Min query starter, these parameters have been tested using synthetic dataset (Fig. 2-a) that contains 12,222 objects with 10% noise. DBSCAN relevant algorithms and the proposed algorithm have discovered the right clusters. Figures 4, 5 and 6 show that the proposed algorithm gives the best clustering scheme when K = 11 and Min Query starter = 0.33. Fig. 4-a shows Speed-Up-Ratio between DBSCAN and the proposed algorithm is up to 5.1 times faster. Fig. 4-b indicates that the proposed algorithm discovers clusters of high CDbw validity index with comparison to DBSCAN with ratio up to 1.8 times. Fig. 5-a shows that the running time of the proposed algorithm is less than fast density-based with ratio up to 2.9 times and Fig. 5-b indicates that the presented algorithm produces clusters with higher CDbw index with ratio up to 3.2 times. Although the proposed algorithm takes more running time than L-DBSCAN algorithm as shown in Fig 6-a, it constructs clusters of higher CDbw validity index than those clusters found by L-DBSCAN as in Fig. 6-b. When clustering the dataset (Fig. 2-a), DBSCAN performs 11313 queries and fast density-based clustering algorithm executes 351 queries using the
Fig. 4. (a) Speed-Up-Ratio and (b) Validity-Index-Ratio between proposed algorithm and DBSCAN algorithm
Fig. 5. (a) Speed-Up-Ratio and (b) Validity-Index-Ratio between proposed algorithm and fast density-based clustering algorithm
Fig. 6. (a) Speed-Up-Ratio and (b) Validity-Index-Ratio between proposed algorithm and L-DBSCAN algorithm
Fig. 7. (a) Total number of region queries processed by the proposed algorithm and (b) Average search space required by the proposed algorithm
entire dataset as search space for each region query while, the proposed algorithm performs 71 to 132 queries (Fig. 7-a) and acquires average search space from 26% to 37% of the dataset (Fig. 7-b) using different values of K.
5 Discussions and Conclusions DBSCAN algorithm and its enhancement algorithms cannot discover clusters of different densities because there is single parameters setting (Eps and MinPts), however, the presented algorithm can discover clusters with different densities, from the cluster that initiated by the Ref leader till the cluster that started by leader of density K.
The proposed algorithm loads a subset of the given dataset when applying region query (reduces the search space), however in DBSCAN and fast density-based clustering algorithms, they load the entire dataset before executing region query. So, the proposed algorithm is more suitable than DBSCAN and fast density-based clustering algorithms for clustering large datasets. The presented algorithm builds good clustering scheme in comparison to DBSCAN relevant algorithms as the majority of the representative objects are the leaders. All leaders are the medoids according to their associated neighbors. Thus, it increases the quality of the clustering scheme (high CDbw validity index). The proposed algorithm exploits density map in reducing the number of region queries and decreasing the query time. As a result, the presented algorithm demands less running time than DBSCAN and fast density-based clustering algorithms. L-DBSCAN requires less running time in comparison with the presented algorithm because L-DBSCAN does not perform any region query. It expands the cluster using the distances between leaders, however, the proposed algorithm expands the cluster using density connectivity. The validity index CDbw of the clustering scheme produced by proposed algorithm is more than the validity index CDbw of the clusters that constructed by L-DBSCAN algorithm because L-DBSCAN can consider an outlier as a leader and eventually constructs clusters with low compactness, in addition to some objects are not associated to their nearest leaders. From algorithm analysis and experimental results, the following points can be concluded: • • • • •
The proposed algorithm needs less running time than DBSCAN and fast density-based clustering algorithms for a given dataset and discovers clusters of higher CDbw validity index than those clusters produced by them. Although the presented algorithm takes more running time than L-DBSCAN, it produces clusters of better cluster scheme than clusters found by L-DBSCAN. The proposed algorithm discovers clusters of different densities. The presented algorithm is more suitable than DBSCAN and fast density-based clustering algorithms for clustering large datasets. From the experimental results, the most suitable values for k are [5 - 12] and [0.15 - 0.33] for Min Query Starter.
References 1. Halkidi, M., Vazirgiannis, M.: Clustering validity assessment using multirepresentatives. Poster paper in the Proceedings of 2nd Hellenic Conference on Artificial Intelligence, Thessaloniki, Greece (2002)
2. Ester, M., Kriegel, H.-P., Sander, J., Xu, X.: A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proceedings of the International Conference of KDD 1996 on Knowledge Discovery and Data Mining, Portland, Oregon, USA (1996) 3. Liu, B.: A fast density-based clustering algorithm for large databases. In: Proceedings of the IEEE International Conference on Machine Learning and Cybernetics, Dalian (2006) 4. Viswanath, P., Pinkesh, R.: l-DBSCAN: A fast hybrid density based clustering method. In: Proceedings of the IEEE International Conference on Pattern Recognition, Hong Kong (2006) 5. El-Sonbaty, Y., Ismail, M.A., Farouk, M.: An efficient density based clustering algorithm for large databases. In: Proceedings of the 16th IEEE International Conference on Tools with Artificial Intelligence, FL, USA (2004) 6. Lind, D.A., Mason, R.D., Marchal, W.G.: Basic statistics for business and economics. McGraw-Hill Publishers, New York (2000)
New Variants of the SDF Classifier

Maciej Smiatacz and Witold Malina

Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Narutowicza 11/12, 80-952 Gdansk, Poland
{slowhand,malwit}@eti.pg.gda.pl
Summary. This paper is addressing problems related to the construction of classifiers in which the typical vector representation of a pattern is replaced with matrix data. We introduce some new variants of the method based on the Similarity Discriminant Function (SDF). The algorithms were tested on images of handwritten digits and on the photographs of human faces. On the basis of the experiments we can state that our modifications improved the performance of the SDF classifier.
1 Introduction Classical methods of classifier construction, based on the patterns represented by feature vectors, are well known and have been thoroughly described in literature. In the typical case their input vectors are created by a specialized feature extraction module. However, since the introduction of the method of eigenfaces [1] the concept of obtaining the feature vectors directly from concatenated rows (or columns) of bitmaps representing the images has gained a lot of interest. Still, the disadvantage of such approach is that it leads to large feature vectors even for small-sized images. This in turn causes difficulties with their further processing, particularly the problems with inverting the covariance matrices. Another common drawback of many pattern recognition methods is the fact that every time we want to add a new class to the system, the problem of training appears. The training is necessary because several decision regions must be reshaped in order to assign the fragment of the feature space that would correspond to the newly added class. For example, we have to deal with it in the case of a face recognition system identifying persons entitled to enter some protected building. If the database of the personnel was repeatedly updated, then the requirement of carrying out the time-consuming training after every modification could become quite problematic. Both problems were adressed by the authors of the so-called SDF method [2] in which each class was treated separately. The criterion called Similarity Discriminant Function (SDF) was used to calculate the vector maximizing the
similarity of patterns in each class; another interesting attribute of this method was that it used matrix patterns as input, instead of traditional feature vectors. The unusual properties of SDF algorithm motivated us to investigate its capabilities and possible disadvantages. In the following section we describe the method and propose some new variants of this approach. The results of experiments related to face identification and handwritten digit recognition are also presented.
2 Classifiers Based on the SDF Function

The authors of [2] defined the Similarity Discriminant Function (SDF), measuring the similarity of matrices Yi projected onto a vector d, in the following way:
F(d) = [ d^T ( Σ_{i=1}^{m} Σ_{j=1, j≠i}^{m} Yi Yj^T ) d ] / [ d^T ( Σ_{i=1}^{m} Yi Yi^T ) d ]
(1)
where Y1, Y2, ..., Ym (m ≥ 2) are the matrix patterns with dimensions n1 × n2. The vector d for which the function reaches the maximum value is the sought-for projection vector that we will denote as dopt. It can be proved that the higher the value of F(d), the greater the concentration of the images Y1, Y2, ..., Ym projected onto d. The authors of [2] propose to determine dopt by means of the algorithm presented in [3], which is capable of providing the solution even if the training set contains a small number of samples. In order to simplify the notation of our further discussion we introduce the new symbols:
B = Σ_{i=1}^{m} Σ_{j=1, j≠i}^{m} Yi Yj^T,    C = Σ_{i=1}^{m} Yi Yi^T
(2)
This leads to the simpler form of equation (1):
F(d) = (d^T B d) / (d^T C d)
(3)
The dopt vector must be calculated individually for each of the L classes. Then we can project the source matrices onto the projection axis of each class and obtain the feature vectors yi^(l) (l = 1, 2, ..., L):
yi^(l) = Yi^T dopt^(l)
(4)
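For illustration, a compact sketch (ours) of how dopt could be obtained for one class. The paper uses the rank-decomposition algorithm of [3]; here a generalised symmetric eigensolver is used as a stand-in, which assumes C is positive definite (i.e. enough training samples):

```python
import numpy as np
from scipy.linalg import eigh

def sdf_axis(Ys):
    # Ys: list of n1 x n2 matrices belonging to one class
    C = sum(Y @ Y.T for Y in Ys)      # C = sum_i Yi Yi^T, eq. (2)
    M = sum(Ys)
    B = M @ M.T - C                   # sum_{i != j} Yi Yj^T = (sum Yi)(sum Yi)^T - C
    _, V = eigh(B, C)                 # maximise d^T B d / d^T C d, eq. (3)
    return V[:, -1]                   # eigenvector of the largest eigenvalue = d_opt

# Feature vector of an image Y in this class's subspace, eq. (4):  y = Y.T @ d_opt
```

The other variants from Table 1 only change which of B, C, B+C or I appear in the two quadratic forms.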
In the system described in [2] the final classification is performed in the discriminant space created on the ground of the multi-class Fisher criterion. As said by the authors, the classification space is spanned by the discriminant vectors e1 , e2 , ..., es . The feature vectors in this space are obtained as a result of the following operation:
Table 1. Possible variants of the SDF function; I – identity matrix

Function   Numerator   Denominator
F (1)      B           C
F1         B           I
F2         C           I
F3         B           B+C
F4         C           B+C
F5         B+C         I
zi^(l) = T^T yi^(l)
(5)
where T = [e1 , e2 , ..., es ]. We can calculate the mean vector for each class in this space and then any classification algorithm can be used, for example the minimal distance method with Euclidean metric. In [2] only one form of the SDF was considered, i.e. (1). It seems, however, that some further research should be carried out in order to check and compare the properties of other variants of the function, obtained for different combinations of matrices used in the numerator and denominator of (3). The new variants of the SDF that we propose are listed in Table 1. The use of the identity matrix I significantly reduces computational complexity of the algorithm: if ml denotes the number of training samples from l -th class, then F requires ml (ml − 1) + ml matrix multiplications whereas F2 – only ml . The influence of this modification on the classifier efficiency, however, can only be evaluated experimentally. Another way to modify the SDF method is to introduce the changes to the structure of the source data. For example, if the images Yi were represented by matrices with dimensions n1 × n2 , where n1 > n2 , then the simplification of calculations could be accomplished by replacing the products Yi YiT with the alternative structure – YiT Yi . Moreover, we can create matrices of different sizes by concatenating their columns. This way the same information is contained within diverse data structures which influences the size of matrices (2) and therefore the computational complexity of the algorithm. As we already mentioned, the introduction of the classification space spanned by discriminant vectors, proposed by the authors of [2], may raise doubts. It increases the computational complexity of the algorithm and deprives the SDF method of its primary advantage: the possibility to ignore the between-class dependencies, which eliminates the necessity to reconstruct the whole classifier each time a new class is added. Moreover, we do not know how the transformation of the feature vectors to the new space influences the classification error if the unknown image Y is represented by L feature vectors maximizing its similarity to the respective classes. Therefore we think that it is reasonable to simplify the SDF method by eliminating its second stage, related to the creation of the discriminant space. In other words, we propose to simplify the transformation of the image Y
and to use only one operation (3), and then the minimal distance classifier. This way the SDF method would ignore the between-class dependencies. Another option worth taking into account is to compute not one but many projection axes for each class. As a result, the unknown image Y would be represented by several feature vectors in the subspace of each class. In order to calculate several vectors dopt(a)^(l) for each class (where a = 1, ..., na is the number of the projection axis) we can once again use the rank decomposition algorithm described in [3]. The construction of such a multi-axis SDF classifier could be carried out in the following way.

Algorithm 1 – construction of the multi-axis SDF classifier

1. Calculate the matrices B^(l) = Σ_{i=1}^{ml} Σ_{j=1, j≠i}^{ml} Yi Yj^T and C^(l) = Σ_{i=1}^{ml} Yi Yi^T using the images Yi from the corresponding class.
2. On the basis of the selected variant of the SDF (Table 1) create the projection axes dopt(a)^(l) of each class (optionally the Gram-Schmidt orthogonalization of the base can be performed).
3. For each image Yi (i = 1, 2, ..., ml) calculate the feature vectors yi^(a) using the axes dopt(a)^(l) of the class to which Yi belongs:
yi^(a) = Yi^T dopt(a)^(l)
(6)
4. Compute the mean vectors:
μ^(l,a) = (1 / ml) Σ_{i=1}^{ml} yi^(a)
(7)
5. Repeat steps 1-4 for each class cl.

Having calculated the parameters of the classes we are able to classify the unknown image Y with the help of Algorithm 2.

Algorithm 2 – multi-axis SDF method, classification

1. Calculate the feature vectors y^(l,a) corresponding to the subsequent axes dopt(a)^(l) of each class:
y^(l,a) = Y^T dopt(a)^(l)
(8)
2. Determine the level of similarity of the image Y to each class:
D^(l)(Y) = Σ_{a=1}^{na} w^(l,a) D^(l,a)(Y)
(9)
where
D^(l,a)(Y) = (y^(l,a) − μ^(l,a))^T (y^(l,a) − μ^(l,a))
(10)
and w^(l,a) is a weight that describes the contribution of the axis dopt(a)^(l) in representing the information about class cl. The coefficients w^(l,a) can be determined using the eigenvalues λk^(l) of the matrix C^(l):
w^(l,a) = λa^(l) / Σ_{k=1}^{na} λk^(l)
(11)
3. Classify the image according to the decision rule:
x*(Y) = cl ⇔ D^(l)(Y) < D^(k)(Y),   k = 1, ..., L, k ≠ l
(12)
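A direct transcription of Algorithm 2 follows (ours; the array shapes are assumptions):

```python
import numpy as np

def classify_multi_axis(Y, axes, means, eigvals):
    # Y: n1 x n2 test image.  For class l: axes[l] is n1 x na (columns d_opt(a)),
    # means[l] is na x n2 (mean vectors of eq. (7)), eigvals[l] holds the na largest
    # eigenvalues of C^(l) used for the weights of eq. (11).
    scores = []
    for D, M, lam in zip(axes, means, eigvals):
        feats = (Y.T @ D).T                        # eq. (8): na feature vectors of length n2
        dist = np.sum((feats - M) ** 2, axis=1)    # eq. (10): squared Euclidean distances
        w = lam / lam.sum()                        # eq. (11): contribution of each axis
        scores.append(np.sum(w * dist))            # eq. (9): weighted similarity level
    return int(np.argmin(scores))                  # eq. (12): the most similar class wins
```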
The last thing we have to consider is the value of the parameter na (the number of projection axes for each class). Intuitively na = 2 or na = 3 should be sufficient.
3 Experiments In order to carry out experiments we developed the SimPaRS system, a Microsoft Windows application written in C++. In most of our experiments we used two data sets: the database of handwritten characters (we will call it the DIGIT database), and the Olivetti Research Laboratory database (ORL), containing the photographs often used in the testing of face recognition systems [4]. The testing and training sets were built using the holdout method, i.e. the data were divided into two equal parts: the first half constituted a training set, the other played the role of a testing set. The DIGIT database is a subset of the database created by NIST [5]. It contains characters written manually by 39 persons. We used the digits from 0 to 9, that is 10 classes of patterns. The characters are represented by binary (black and white) images and are scaled down to the size of 32 × 32 pixels. With the help of the SimPaRS program two subsets of the images were prepared (DIGIT1 and DIGIT2), each containing 400 patterns (40 for each class). The subsets were interchangeably used for training and testing. The ORL database includes 400 photographs showing the faces of 40 persons (40 classes). The images have the dimensions of 92 × 112 points and are stored in the 8-bit format (256-level grayscale). These data are, in our opinion, not diverse enough to evaluate the real-life face recognition system, nevertheless they allowed us to test the algorithms on non-binary images. Two collections of the ORL photographs were prepared (ORL1 and ORL2) and then used interchangeably as the training and the testing set. Each set contained 40 classes, and each class in a set was represented by only 5 images. The resolution of the images was reduced to the dimensions of 46 × 56 pixels. Table 2 shows the experimental results obtained for the original SDF method tested on DIGIT and ORL databases. As we can see, the efficiency
Table 2. Correct classification rates for the original SDF algorithm
of the original algorithm is surprisingly low for the digits and significantly better in the case of the face recognition (especially on the training set). The inferior performance of the original method increased our doubts regarding the discriminant transformation (5). Therefore, we discarded this transformation completely: the minimal distance classifier was constructed directly from vectors yi (4), containing 32 components for DIGIT databases and 46 in the case of ORL. This way the average recognition rates raised to about 50% for digits and 80% for face images. The results, however, were still unsatisfactory. The aim of the next experiment was to test the effectiveness of the multiaxis SDF method and to find out what is the optimal number of axes na . Additionally, we wanted to check if the Gram-Schmidt orthogonalization of the axes [6] improves the results. We found out that: • •
in the case of handwritten digits increasing the number of axes improves the quality of classification while in the case of the faces introducing more than na > 2 axes does not lead to better results; the better recognition rates were obtained for non-orthogonal systems, which seems quite surprising.
Having analyzed the results of the experiment we decided to use the following configuration in the next set of tests: na = 5 non-orthogonal axes for DIGIT databases, na = 2 non-orthogonal axes for ORL databases. In the next experiment we wanted to check what kind of the source data transformation would give the best classification results. The other goal was to compare the simplified variants of the SDF method (Table 1). During the tests the source patterns were transformed in such a way that the matrices (2), appearing in the examined criterions, took the dimensions from 2 × 2 to 128 × 128. For 2 × 2 and 4 × 4 matrices, used in the experiments regarding handwritten digits, the number of axes na was reduced. The results obtained for selected SDF variants when DIGIT1 database was used as a training set are listed in Table 3; Table 4 presents the recognition rates achieved when the training was carried out on ORL1 database. The best results are emphasized with the grey background. Our experiment showed the surprising regularity: the efficiency of the classification (for the training and
Table 3. Handwritten digit recognition; ttr – training time, tte – testing time
Table 4. Face recognition results; ttr – training time, tte – testing time
the testing set) was, in most of the cases, higher when the dimensions of the matrices, used to calculate the axes, were smaller. This means that in this case the original image is not the optimal structure, at least for the data we used. As we can see from Table 3, such modifications typically lead to longer testing times – in comparison with the original method. Tables 3 and 4 show that for the modified data the classifier using F2 or F5 outperforms the one based on F (1). It is also worth noticing that our improvements increased the correct recognition rates for handwritten digits from less than 30% (Table 2) to 85% (Table 3).
4 Conclusion The authors of the original SDF method certainly tried to deal with the problems of pattern classification in a very innovative way: in their algorithm each class is treated individually and the projection axis maximizing the similarity of the patterns belonging to the given class is computed. Moreover, the method uses matrices – not vectors – as input structures. Despite the results reported in their work, however, our experiments showed that the original variant of the SDF classifier performs quite disappointingly. The modifications that we introduced, i.e. the new forms of the similarity criterion as well as the changes of the input matrix size, led us to the algorithm which is faster and far more accurate. In addition, thanks to the removal of the secondary discriminant transformation, the resultant method ignores the between-class dependencies so the cost of re-training after the addition of a new class is minimal. Undoubtedly there are methods than recognize faces or handwritten characters more accurately than the SDF algorithm. It was not our goal, however, to create a specialized application-oriented real-life system. Instead, we managed to improve the performance of the method that – thanks to its special advantages – can appear useful in particular cases.
Time Series Prediction Using New Adaptive Kernel Estimators Marcin Michalak Silesian University of Technology, Institute of Computer Science, ul. Akademicka 16, 44-100 Gliwice, Poland [email protected]
Summary. This short article describes two kernel algorithms of regression function estimation. The first of them, called HASKE, has its own heuristic for evaluating the h parameter. The second is a hybrid algorithm that combines SVM and HASKE in such a way that the definition of the local neighborhood is based on the h-neighborhood from HASKE. Both of them are used as predictors for time series.
1 Introduction
Estimation of the regression function is one of the basic problems of machine learning [27] [28]. The evaluation of a regression function consists of describing dependencies in the observed data set. Those relations can be overt (obvious for the expert) or hidden. Methods of regression function estimation can be divided into two groups: parametric ones, which consist of finding optimal values of a finite number of parameters, and nonparametric ones, which do not assume any class of the function estimator. Commonly used regression function estimators are spline functions [17] [12], radial basis functions [19], additive (and generalized additive) models [13], the LOWESS algorithm [30], and kernel estimators [25] [29] together with support vector machines [7] [32]. Methods of nonparametric regression can also be used for time series analysis, on the condition that the method of selecting the smoothing parameter h is modified [14]. This article describes two new methods of time series prediction that are based on regression function estimation. The first method is a kernel estimator (HASKE) with an adaptive definition of the h-neighborhood. The second one is a hybrid algorithm that connects the mentioned kernel estimator and the support vector machine (HKSVR). Both methods are based on a similar heuristic for evaluating the h parameter.
2 Estimation of the Regression Function
2.1 Kernel Estimators
Kernel estimators [25] [29] make it possible to evaluate the value of the regression function as an approximation of the conditional expected value
of the explained variable Y under the condition that the explanatory variable X took the value x [19]: f(x) = E(Y |X = x) (the one-dimensional case). Under the assumption that the joint distribution of the pairs (X, Y) is continuous, the Nadaraya–Watson estimator can be written as

\hat{f}(x) = \sum_{i=1}^{n} y_i K\left(\frac{x - x_i}{h}\right) \Big/ \sum_{i=1}^{n} K\left(\frac{x - x_i}{h}\right),

where n is the number of training pairs, K is a kernel function and h is the smoothing parameter. Similar kernel estimators are: Gasser–Muller [22] [18], Priestley–Chao [31] [18] and Stone–Fan [4] [18]. A function must meet certain criteria [20] to be used as a kernel function. One of the most popular is the Epanechnikov kernel [24] [20], K(x) = 3(1 - x^2)/4 \cdot 1_{[-1;1]}(x); another is the normal kernel K(x) = (\sqrt{2\pi})^{-1} \exp(-x^2/2). The second step of building a kernel estimator is the selection of the smoothing parameter h. As described in [23] and [6], the selection of h is more important than the selection of the kernel function. Small values of h make the estimator fit the data too closely, while large values of this parameter lead to an estimator that oversmooths the dependencies in the analysed set. The most popular method of evaluating the h parameter is the analysis of an approximation of the Mean Integrated Squared Error (MISE). Minimization of the MISE with respect to h gives h_0 = R(K)^{1/5} (\sigma_K^4 R(f''))^{-1/5} \cdot n^{-1/5}. The value of the expression \sigma_K^4 depends on the kernel function K: \sigma_K^2 = \int_{-\infty}^{\infty} x^2 K(x)\,dx. The expression R(g) = \int_{-\infty}^{\infty} g^2(x)\,dx, here applied to the second derivative of the unknown density, cannot be computed directly, so it is replaced by estimators, which leads to the following expression:

h_0 = 1.06 \min(\hat{\sigma}, R/1.34)\, n^{-1/5}    (1)
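As a quick illustration of the estimator defined above and of the rule-of-thumb bandwidth (1), the following Python sketch can be used; it is our own illustrative code written for this text (all function names are invented here), not an implementation taken from the cited works.

import numpy as np

def epanechnikov(u):
    # K(u) = 3(1 - u^2)/4 on [-1, 1] and zero outside
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def rule_of_thumb_h(x):
    # h_0 = 1.06 * min(std, IQR / 1.34) * n^(-1/5), cf. (1)
    x = np.asarray(x, dtype=float)
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    return 1.06 * min(np.std(x, ddof=1), iqr / 1.34) * len(x) ** (-0.2)

def nadaraya_watson(x_train, y_train, x_query, h):
    # weighted average of y with weights K((x - x_i) / h)
    w = epanechnikov((np.asarray(x_query)[:, None] - np.asarray(x_train)[None, :]) / h)
    denom = w.sum(axis=1)
    # if no training point falls into the h-neighbourhood, the estimate defaults to 0
    return np.divide(w @ np.asarray(y_train, dtype=float), denom,
                     out=np.zeros_like(denom), where=denom > 0)

The zero returned for an empty h-neighbourhood is exactly the degenerate case that the HASKE heuristic of Section 3 is designed to avoid.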
Details of derivations can be found in [23]. More advanced methods of h evaluation can be found in [6] [4] [2] [5] [3].
2.2 Support Vector Machines
Support Vector Machines (SVM) were defined in [7] and later in [26] [28]. Although they were invented as a classification tool, SVM are also used for the regression problem [32]. In this method the estimated function f_n(x) should minimize the objective function J(w, \xi) = ||w||^2/2 + C \sum_{i=1}^{n} (\xi_i + \xi_i^*) subject to the constraints

y_i - w x_i - b \le \varepsilon + \xi_i, \quad w x_i + b - y_i \le \varepsilon + \xi_i^*, \quad \xi_i, \xi_i^* \ge 0.    (2)

For each training object the pair of Lagrange multipliers \alpha_i, \alpha_i^* is obtained. They make it possible to evaluate the value of the regression function as f(x) = w x + w_0, where w = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*) x_i and w_0 = -w (x_r + x_s)/2, where x_r and x_s are support vectors (a vector is called a support vector when one of its Lagrange multipliers is not equal to zero). Detailed calculations can be found in [9]. The presented model of support vector regression (SVR) can be called global. There are also a number of its modifications that use a local learning
Fig. 1. The same time series in two spaces
paradigm. The algorithm presented in [8] uses the kNN neighbourhood of a query point as the local training set. Another algorithm makes the value of the parameter depend on the local covariance matrix (Σi), calculated on the basis of the training points from the neighborhood of the point xi [34].
3 HASKE Algorithm
Kernel estimators can be used as a prediction tool after a modification of the data space [14]. Instead of the set of (t, x_t) pairs, the space of (x_i, x_{i+pmax}) pairs is created, where pmax is the maximal prediction horizon. Fig. 1 shows the same time series (the G series from [21]) in the two spaces. HASKE (Heuristic Adaptive Smoothing parameter Kernel Estimator) belongs to the group of kernel estimators, so it needs the value of the h parameter. The aim of the heuristic that evaluates the h parameter is to find the value that gives the least estimation error on the tune set. The definitions of the train, tune and test sets and the definition of the heuristic are described in the following subsections.
3.1 Train, Tune and Test Sets
The process of supervised learning needs the definition of the train, test and sometimes tune set. For a typical time series prediction task the three mentioned sets are consecutive sequences of objects (e.g. train: {x_1, ..., x_f}; tune: {x_{f+1}, ..., x_{f+r}}; test: {x_{f+r+1}, ..., x_{f+r+pmax}}, f, r, pmax ∈ N). It can also be described as a simple implementation of the walk-forward routine [10] [11]. Moving the data set into the new space, on the assumption that the maximal prediction horizon is p_m, leads to the following sets: train {(x_1, x_{1+pm}), ..., (x_{n-3pm}, x_{n-2pm})}; tune {(x_{n-3pm+1}, x_{n-2pm+1}), ..., (x_{n-2pm}, x_{n-pm})}; test {(x_{n-2pm+1}, x_{n-pm+1}), ..., (x_{n-pm}, x_n)}.
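A small illustration of this preparation step is sketched below (our own code with invented names; pm denotes the maximal prediction horizon).

import numpy as np

def to_prediction_space(series, pm):
    # replace (t, x_t) pairs by (x_i, x_{i+pm}) pairs
    x = np.asarray(series, dtype=float)
    return x[:-pm], x[pm:]

def walk_forward_split(series, pm):
    # train / tune / test as three consecutive blocks, as described above
    x, y = to_prediction_space(series, pm)
    m = len(x)
    train = (x[: m - 2 * pm], y[: m - 2 * pm])
    tune = (x[m - 2 * pm : m - pm], y[m - 2 * pm : m - pm])
    test = (x[m - pm :], y[m - pm :])
    return train, tune, test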
3.2 HASKE — Kernel Estimator of the Time Series
It is common that for some extreme test points there are no training points that have a positive value of the kernel function. Their h-neighborhood (the h-neighborhood of a test point x_i is the set of points x for which K((x - x_i)/h) > 0) is an empty set, which usually implies that the estimator value is 0 and the regression error increases significantly. To avoid this effect the authors decided to increase the value of h adaptively. The incrementation should eliminate the unwanted effect but also prevent oversmoothing. For this purpose the regression error on the tune set is observed. The parameter μ modifies h in the following way: h′ = μ·h. Starting from μ = 1, this parameter is increased by a defined step as long as the regression error on the tune set decreases. The value μh that gives the smallest regression error on the tune set is then taken as the optimal value of h. This procedure is performed for every phase of the analysed time series period. A sample result for the G series is shown in Fig. 2. To make the μ value independent of the phase of the period, it is evaluated as the median of the μ_i values obtained for every phase of the series period. If we assume that rmse(t, p, h) is the estimation error of the p consecutive time series values from the time interval [t + 1, t + p], obtained with the smoothing parameter value h, the μ value becomes

\mu = \mathrm{med}\{\arg\min_{m} \mathrm{rmse}(t - i, p_{\max}, h \cdot m),\ i = 0, 1, \ldots, p_{\max} - 1\}    (3)
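A compact sketch of this adaptive search is given below (illustrative code written for this text; rmse_on_tune is assumed to be a user-supplied function returning the tune-set RMSE of the estimator run with the given bandwidth).

def find_mu(rmse_on_tune, h, mu_step=0.1):
    # increase mu from 1 as long as the tune-set error keeps decreasing
    mu = 1.0
    best_err = rmse_on_tune(h * mu)
    while True:
        err = rmse_on_tune(h * (mu + mu_step))
        if err >= best_err:
            return mu            # the last value that still improved the error
        mu += mu_step
        best_err = err

In the full method this search is repeated for every phase of the period and the median of the obtained values is taken, as in (3).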
The heuristic of h evaluation assures nonempty h-neighborhoods but introduces a slight oversmoothing effect. It means that some time series values can be underestimated. The underestimation α of a single value y can be expressed
Fig. 2. The dependence of the regression error (RMSE) on the period phase and the μ parameter
as the ratio of the estimation result to the real value, α = ŷ/y, and the underestimation on the tune set as the median of the individual values α_i: α = med α_i, i = 1, ..., k. Finally, the estimated value of the time series at t + p_m, with α as the underestimation on the tune set, can be presented as follows:

\hat{x}_{t+p_m} = \hat{f}(x_t) = \hat{\alpha}^{-1} \sum_{i=1}^{m-p_m} y_i K\left(\frac{x_i - x_t}{h\mu}\right) \Big/ \sum_{i=1}^{m-p_m} K\left(\frac{x_i - x_t}{h\mu}\right)    (4)
4 The HKSVR Estimator
Support Vector Machines have been used for time series prediction repeatedly [10] [33] [15] [16]. This motivated the authors to propose a hybrid method that joins the HASKE model (its heuristic of h and h-neighborhood evaluation) and SVM, called HKSVR (Hybrid Kernel and Support Vector Regression). The first step of the algorithm is the choice of the δ value, which defines the way the time series is transformed into the modified space. Then the heuristic algorithm evaluates the new value of the h parameter for the train set, on the basis of the smaller train set and the tune set. Finally, for every test point its h-neighborhood is treated as the training set for a local SVM.
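A rough sketch of this last step, assuming scikit-learn is available, could look as follows. This is our simplified reading of the description above, not the paper's code; the h-neighborhood test mirrors the support of the Epanechnikov kernel, i.e. K((x - x_i)/h) > 0 exactly when |x - x_i| < h, and the SVR hyper-parameters are arbitrary placeholders.

import numpy as np
from sklearn.svm import SVR

def hksvr_predict(x_train, y_train, x_test, h, C=1.0, eps=0.1):
    # for every test point fit an SVR only on its h-neighbourhood
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = np.empty(len(x_test))
    for j, xq in enumerate(x_test):
        mask = np.abs(x_train - xq) < h              # the h-neighbourhood of the test point
        if mask.sum() < 2:                           # fallback if the neighbourhood is too small
            preds[j] = y_train[np.argmin(np.abs(x_train - xq))]
        else:
            model = SVR(kernel="rbf", C=C, epsilon=eps)
            model.fit(x_train[mask].reshape(-1, 1), y_train[mask])
            preds[j] = model.predict([[xq]])[0]
    return preds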
5 Results
All regression experiments correspond to the rule of unbiased prediction. As the measure of the ex post error, the square root of the mean squared difference between the predicted time series value and the real one was used (Root Mean Squared Error, RMSE):

err = \left[ k^{-1} \sum_{i=1}^{k} (y_i - \hat{y}_i)^2 \right]^{1/2},

where k is the number of objects in the test set or the maximal prediction horizon.
5.1 HASKE Model Results
The HASKE model was used as a prediction tool for four different time series: M, N (synthetic) and G, E (from Box & Jenkins [21]). Table 1 compares the errors of predicting one period of each series with HASKE and with the standard kernel estimator.

Table 1. Comparison of the Nadaraya–Watson estimator and HASKE

series  period duration  kernel estimation  HASKE
M       17                42,77              8,74
N        8                49,09              4,6
G       12               275,26             17,18
E       11                33,37             36,76
Table 2. The dependence of the estimation accuracy increase on the prediction horizon and the harmonic

                           prediction horizon p
nth maximal harmonic     1     2     3     4     5     6     7     8     9    10
 2                     -91   -40  -124  -117   -53  -149   -25    25     3     1
 3                      -1    23     6    14     4    14    91    45    19    16
 4                    -298    13    40    25    11    69    48    19    -8   -41
 5                     -46   112   -19   -17   -11   -10   -24   -27   -20   -47
 6                     -44   -24     3    17    26    21    40     0  -146   -11
 7                     -13     4   -34   -28     6    -2    -5    -4     0     0
 8                      -2     3     0   -11   266   208   189   136   104   102
 9                     -33   -49   403   347    -6     0     0     0     0     0
10                     114   152  -143     0     0   -85     0     0     0     0
Table 3. Statistical description of the estimation improvement with the usage of the HKSVR model

horizon        1      2      3      4      5      6      7      8      9     10
avg       -45,94  21,74  14,60  25,55  26,94   7,38  34,97  21,66  -5,38   2,27
std. dev. 109,55  67,61 158,03 127,72  92,35  98,71  69,16  47,66  63,88  42,91
5.2 HKSVR Model Results
The analyzed time series is the Warsaw stock index WIG20 [1]. Closing values of quotations from 10.04.2003 to 23.07.2007 were used as the data sample. The value of the δ parameter was evaluated on the basis of the Fourier analysis of the time series. For each harmonic its amplitude was calculated, and the harmonics with the ten highest amplitudes were chosen to define the δ parameter values. The δ for the first harmonic (the highest amplitude) took the value 1078, so the period of 1078 days was rejected from further analysis. Table 2 shows the decreases of estimation errors resulting from using HKSVR instead of SVM (rounded to integers). Positive values thus speak in favour of HKSVR (an increase of prediction accuracy). It can be observed that the HKSVR model works better for short-term forecasts, excluding next-day forecasts. Cumulatively, in 39 cases HKSVR gave better results, in 13 cases the same and in 38 cases worse. Excluding the next-day forecast and the 9- and 10-day forecasts, the ratio becomes 32:7:24. The improvement obtained with the HKSVR model can also be described as the mean prediction improvement with reference to the SVM result. For each prediction horizon the mean improvement and its standard deviation were calculated and the results are shown in Table 3.
6 Conclusions
This article describes two new kernel estimators of the regression function. The presented estimation models were developed as results of research focused on time series prediction, especially on using kernel methods in time series prediction [14]. The main advantage of the described HASKE model is the decrease of the estimation error that was caused by empty h-neighborhoods of some test points. Experiments performed on synthetic and real time series show that the HASKE model gives better results than standard kernel estimators. The definition of the neighborhood introduced in HASKE also proved effective as an improvement of the standard SVM regression model. The results of predicting the WIG20 closing values were better for the HKSVR model of regression than for the SVM model.
References 1. WIG20 historical data, http://stooq.pl/q/d/?s=wig20 2. Gasser, T., Kneip, A., Kohler, W.: A Flexible and Fast Method for Automatic Smoothing. Annals of Statistics 86, 643–652 (1991) 3. Terrell, G.R.: The Maximal Smoothing Principle in Density Estimation. Annals of Statistics 85, 470–477 (1990) 4. Fan, J., Gijbels, I.: Variable Bandwidth and Local Linear Regression Smoothers. Annals of Statistics 20, 2008–2036 (1992) 5. Terrell, G.R., Scott, D.W.: Variable Kernel Density Estimation. Annals of Statistics 20, 1236–1265 (1992) 6. Turlach, B.A.: Bandwidth Selection in Kernel Density Estimation: A Review. Universite Catholique de Louvain, Technical report (1993) 7. Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. In: Proc. of the 5th annual workshop on Computational Learning Theory, Pittsburgh, pp. 144–152 (1992) 8. Fernandez, R.: Predicting Time Series with a Local Support Vector Regression Machine. In: Proc. of the ECCAI Advanced Course on Artificial Intelligence (1999) 9. Smola, A.J., Scholkopf, B.: A tutorial on support vector regression. Statistics and Computing 14, 199–222 (2004) 10. Cao, L.J., Tay, F.E.H.: Svm with adaptive parameters in financial time series forecasting. IEEE Trans. on Neural Networks 14, 1506–1518 (2003) 11. Kaastra, I., Boyd, M.: Designing a neural network for forecasting financial and economic time series. Neurocomputing 10, 215–236 (1996) 12. Friedman, J.H.: Multivariate Adaptive Regression Splines. Annals of Statistics 19, 1–141 (1991) 13. Hastie, T.J., Tibshirani, R.J.: Generalized Additive Models. Chapman and Hall, Boca Raton (1990) 14. Michalak, M., St¸apor, K.: Estymacja j¸adrowa w predykcji szereg´ ow czasowych. Studia Informatica 29 3A (78), 71–90 (2008) 15. Michalak, M.: Mo˙zliwo´sci poprawy jako´sci uslug w transporcie miejskim ´ aska, Katowpoprzez monitoring nat¸ez˙ enia potok´ ow pasa˙zerskich. ITS dla Sl¸ ice (2008)
16. Sikora, M., Kozielski, M., Michalak, M.: Innowacyjne narz¸edzia informatyczne analizy danych. Wydzial Transportu, Gliwice (2008) 17. de Boor, C.: A practical guide to splines. Springer, Heidelberg (2001) 18. Gajek, L., Kaluszka, M.: Wnioskowanie statystyczne, WNT, Warszawa (2000) ´ 19. Koronacki, J., Cwik, J.: Statystyczne systemy ucz¸ace si¸e. WNT, Warszawa (2005) 20. Kulczycki, P.: Estymatory j¸adrowe w analizie systemowej. WNT, Warszawa (2005) 21. Box, G.E.P., Jenkins, G.M.: Analiza szereg´ ow czasowych. PWN, Warszawa (1983) 22. Gasser, T., Muller, H.G.: Estimating Regression Function and Their Derivatives by the Kernel Method. Scandinavian Journal of Statistics 11, 171–185 (1984) 23. Silverman, B.W.: Density Estimation for Statistics and Data Analysis. Chapman & Hall, Boca Raton (1986) 24. Epanechnikov, V.A.: Nonparametric Estimation of a Multivariate Probability Density. Theory of Probability and Its Applications 14, 153–158 (1969) 25. Nadaraya, E.A.: On estimating regression. Theory of Probability and Its Applications 9, 141–142 (1964) 26. Scholkopf, B., Smola, A.: Learning with Kernels. MIT Press, Cambridge (2002) 27. Taylor, J.S., Cristianini, N.: Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge (2004) 28. Vapnik, V.N.: Statistical Learning Theory. Wiley, Chichester (1988) 29. Watson, G.S.: Smooth Regression Analysis. Sankhya - The Indian Journal of Statistics 26, 359–372 (1964) 30. Cleveland, W.S., Devlin, S.J.: Locally Weighted Regression. Jour. of the Am. Stat. Ass. 83, 596–610 (1988) 31. Wand, M.P., Jones, M.C.: Kernel Smoothing. Chapman and Hall, Boca Raton (1995) 32. Smola, A.J.: Regression Estimation with Support Vector Learning Machines. Technische Universit¨ at M¨ unchen (1996) 33. Muller, K.R., Smola, A.J., Ratsch, G., Scholkopf, B., Kohlmorgen, J., Vapnik, V.: Predicting Time Series with Support Vector Machines. In: Gerstner, W., Hasler, M., Germond, A., Nicoud, J.-D. (eds.) ICANN 1997. LNCS, vol. 1327, pp. 999–1004. Springer, Heidelberg (1997) 34. Huang, K., Yang, H., King, I., Lyu, M.: Local svr for Financial Time Series Prediction. In: Proc. of IJCNN 2006, pp. 1622–1627 (2006)
The Sequential Reduction Algorithm for Nearest Neighbor Rule Based on Double Sorting Marcin Raniszewski Technical University of Lodz, Computer Engineering Department 90-924 Lodz, ul. Stefanowskiego 18/22 [email protected]
Summary. An effective and strong reduction of large training sets is very important for the usefulness of the Nearest Neighbour Rule. In this paper, a reduction algorithm based on double sorting of a reference set is presented. The samples are sorted with the use of the representative measure and the Mutual Neighbourhood Value proposed by Gowda and Krishna. Then the reduced set is built by sequentially adding and removing samples according to the double sort order. The results of the proposed algorithm are compared with the results of well-known reduction procedures on nine real datasets and one artificial dataset.
1 Introduction
The k Nearest Neighbour Rule (k-NN) [4, 17] is one of the most popular classification methods in Pattern Recognition. It is very simple and intuitive: a new sample is classified to the predominant class of its k nearest neighbours. A special case of k-NN is 1-NN, called the Nearest Neighbor Rule. The 1-NN is faster than k-NN classifiers with k ≥ 2 and does not require a training phase. However, the main disadvantage of 1-NN (and of all k-NN rules) is the problem connected with the great number of samples in the reference set (training set). If the training set is large, the classification of a single sample can last long. If the time of the classification is crucial, the 1-NN becomes useless. The obvious solution to this problem is the reduction of the reference set: only the most representative samples from the training set should be selected to the new reference set. The concept of representativeness can be defined in different ways. A representative sample can be treated, e.g., as a sample near class boundaries or from the centre of a one-class cluster. The different approaches to this concept result in many reduction techniques. The main objective of all these techniques is to reduce the reference set as strongly as possible. However, it is also important to retain the classification quality or even improve it. The improvement is possible when atypical or noisy samples are discarded. In the second section of this paper, well-known reduction techniques are very briefly described. The new sequential reduction algorithm is presented
in the third section. The last three sections contain experimental results, a discussion and conclusions.
2 Well-Known Reduction Techniques
The historically first reduction algorithm is the Condensed Nearest Neighbor Rule (CNN) described by Hart in [8]. CNN creates a consistent reduced set, which means that all samples from the complete reference set are correctly classified using 1-NN with the reduced set. CNN is sensitive to reference set permutations: it returns different reduced sets for different permutations of the samples in the training set. Moreover, in the first phase of the algorithm, accidental samples are selected to the reduced set. The next two algorithms, RNN and Gowda and Krishna's, may be treated as CNN improvements. The Reduced Nearest Neighbor Rule (RNN) proposed by Gates in [6] removes from the CNN set the samples which do not spoil the consistency of this set. Gowda and Krishna described in [7] an initial sorting of the reference set. They used the Mutual Neighborhood Value (MNV). The MNV of a sample x is counted in the following way: the nearest sample y from the opposite class is found, and MNV(x) is the number of samples from a class other than y's which are closer to y than to x. Low values of MNV are characteristic for samples which lie near class boundaries. After the reference set is sorted by increasing values of MNV, CNN is applied. Tomek in [18] proposed a method which builds a subset S of the samples which lie near the class boundaries. After creating S, CNN is applied on S. Tomek believed that S is consistent, hence he claimed that the resultant reduced set will also be consistent. However, Toussaint in [19] presented a counter-example. Fortunately, in practice, the inconsistency of S happens very rarely. Dasarathy claimed that the Minimal Consistent Set (MCS) procedure proposed in [3] returns the consistent reduced set with the minimal possible number of samples. However, Kuncheva and Bezdek in [12] presented a counter-example: a reduction of the Iris dataset [5] stronger than that obtained by Dasarathy's MCS. Skalak in [15] proposed two heuristics, MC1 and RMHC-P, based on Monte Carlo sampling and Random Mutation Hill Climbing, respectively. In RMHC-P the parameter m denotes the cardinality of the reduced set (the reduced set is initially built from random samples) and the parameter n the number of sample replacements (called mutations); a replacement is accepted if it increases the classification quality of the reduced set. Kuncheva in [10, 11, 12] described heuristics based on Genetic Algorithms (GA). The reduced set is represented by a binary chromosome with the i-th bit set to 1 when the i-th sample is in the reduced set and 0 otherwise. The fitness function J proposed in [11] uses the number of correct classifications and the cardinality of the current reduced set (the penalty term):
J(Y) = \frac{n(Y)}{n} - \alpha \cdot \mathrm{card}(Y)    (1)
where n(Y) denotes the number of correctly classified samples from the training set using only the reduced reference set Y to find the k nearest neighbors (a sample s is classified using k-NN with Y\{s}), n is the number of all samples in the training set, α is a positive coefficient (the higher α is, the higher the penalty) and card(Y) is the cardinality of Y. The authors of [2] proposed a heuristic based on Tabu Search (TS). Similarly to the GA chromosome, a binary string represents the actual solution (the reduced set). The neighbourhood of the actual solution S is the set of binary strings that differ from S by one element. The objective function is the same as the GA fitness function (1). The initial solution can be created in two ways: condensed or constructive. The former uses CNN; the latter builds the initial subset in the following way: it starts with randomly selected samples (one sample from each class) and uses Tabu Search with sample deletion disabled until a consistent solution is obtained. Cerveron and Ferri also suggest in [2] a method called Random Restarting CNN (RRCNN). The CNN procedure is run many times on different permutations of the reference set and the reduced set with the minimal cardinality is chosen as the result. CNN, RNN, the Gowda and Krishna algorithm, Tomek's procedure, MCS and RRCNN produce consistent reduced sets. The consistency criterion has an important disadvantage when it is used for datasets with atypical or noisy samples (most real datasets): such samples are added to the reduced set together with the samples from their neighborhood, which decreases both the reduction level and the classification quality of the reduced set. The resultant reduced sets of MC1, RMHC-P, GA and TS are not unique and usually inconsistent. Many other reduction techniques are described in [20].
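Several of the procedures above build on Hart's CNN condensation; to make that baseline concrete, here is a minimal Python sketch of it (our own illustrative code using 1-NN with Euclidean distance, with X and y assumed to be NumPy arrays; it is not any of the cited implementations).

import numpy as np

def cnn_reduce(X, y):
    # Hart's CNN: add a sample whenever the current reduced set misclassifies it; repeat until stable
    keep = [0]                                    # start from an arbitrary (here: the first) sample
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in keep:
                continue
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep][np.argmin(d)] != y[i]:     # 1-NN on the reduced set misclassifies x_i
                keep.append(i)
                changed = True
    return np.array(keep)

The resulting set is consistent in the sense used above but, as the text notes, its content depends on the order of the samples.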
3 The Sequential Double Sort Algorithm
The proposed method uses an initial sorting of the reference set, similarly to the Gowda and Krishna algorithm. However, the samples are sorted with the use of two keys: the representative measure and the Gowda and Krishna MNV [7]. The representative measure (rm) was defined in [14]: for a sample x it is the number of samples (the voters) which are from the same class as x and for which x is their nearest neighbour (Fig. 1). Hence, a sample with a high value of the representative measure is more representative for its class and should be added to the reduced set. The samples in the reference set are sorted firstly by decreasing values of their representative measures. If two samples have the same value of the representative measure, the sample with the lower value of MNV (which lies nearer the class boundaries) has the priority. Hence, the points with the same values of
Fig. 1. The representative measure (rm) of the sample x from a circle class; in the depicted example rm(x) = 3
representative measure are additionally arranged by increasing values of MNV. Ties of the representative measure and MNV are broken randomly. After the double sorting of the reference set, the main procedure of building the reduced set is applied: the samples, taken according to the double sort sequence, are sequentially added to and removed from the reduced set as long as the adding or removing increases the classification quality of the reduced set. The sequential algorithm based on double sorting (the Sequential Double Sort Algorithm, SeqDSA) can be described in the following steps (Xred denotes the reduced set, initially empty, X the complete training set, and f(Xred) the fraction of samples from X correctly classified using the 1-NN rule operating with Xred):
1. Sort the samples from X according to decreasing values of rm. Sort the samples with the same rm by increasing values of the Gowda and Krishna MNV.
2. Add to Xred the first sample from X.
3. Count f(Xred). If f(Xred) = 100%, go to step 8; otherwise let fmax = f(Xred).
4. Mark all samples as "unchecked".
5. For each sample x from X\Xred (according to the double sort order):
   a) Add x to Xred.
   b) Count f(Xred). If f(Xred) = 100%, go to step 8; if f(Xred) ≤ fmax, remove x from Xred and mark it as "checked"; otherwise let fmax = f(Xred), mark all samples as "unchecked" and go to step 6.
6. For each sample x from Xred (according to the inverted double sort order):
   a) Remove x from Xred.
   b) Count f(Xred). If f(Xred) = 100%, go to step 8; if f(Xred) ≤ fmax, add x back to Xred and mark it as "checked"; otherwise let fmax = f(Xred), mark all samples as "unchecked" and go to step 7.
7. Repeat steps 5 and 6 as long as there are "unchecked" samples.
8. Xred is the resultant reduced set.
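The double-sort keys and the forward (adding) pass can be sketched in Python as follows. This is a simplified illustration written for this text, not the author's implementation: X and y are NumPy arrays, mnv is assumed to be precomputed, f is assumed to map a list of sample indices to the 1-NN classification quality, and the alternating removal pass of steps 6-7 is omitted.

import numpy as np

def representative_measure(X, y):
    # rm(x): the number of same-class samples whose nearest neighbour is x
    rm = np.zeros(len(X), dtype=int)
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf
        nn = int(np.argmin(d))
        if y[nn] == y[i]:
            rm[nn] += 1
    return rm

def double_sort_order(X, y, mnv):
    # decreasing rm; ties broken by increasing MNV
    rm = representative_measure(X, y)
    return np.lexsort((mnv, -rm))        # the last key is the primary one

def forward_pass(order, f):
    # simplified version of steps 2-5: keep a sample only if it improves f(Xred)
    reduced = [int(order[0])]
    f_max = f(reduced)
    for i in order[1:]:
        if f(reduced + [int(i)]) > f_max:
            reduced.append(int(i))
            f_max = f(reduced)
    return reduced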
After each confirmed addition or elimination of a sample, the new Xred offers a better classification quality. Since f(Xred) can take only a finite number of values and strictly increases with every accepted change, only a finite number of such changes is possible; thus the algorithm is finite.
4 Experimental Results
Nine real datasets and one synthetic dataset were used in the tests: Liver Disorders (BUPA) [1], GLASS Identification [1], IRIS [5], PARKINSONS Disease Data Set [1], PHONEME [16], PIMA Indians Diabetes [1], SATIMAGE [16], Wisconsin Diagnostic Breast Cancer (WDBC) [1], YEAST [13] and WAVEFORM (version 1) [1]. Stratified ten-fold cross-validation [9] (with 1-NN) was used for each experiment. The datasets were reduced by ten algorithms: CNN, RNN, RRCNN, Gowda and Krishna (G-K), Tomek's, MCS, RMHC-P, GA, TS and the proposed SeqDSA. The number of permutations in RRCNN was set to 100. The implementation of GA and the values of its parameters were the same as those
Table 1. The test results: the classification qualities and the reduction levels. All fractions are presented in percentages. The number under a mean value for a specific dataset is a standard deviation. The last row "avg" presents the average values. compl.
CNN
RNN
RRCNN
G-K
Tomek’s
class. red. class. red. class. red. class. red. class. red. class. red. qual. level qual. level qual. level qual. level qual. level qual. level BUPA
GLASS
IRIS
PARKINSONS
PHONEME
PIMA
SATIMAGE
WAVEFORM
WDBC
YEAST
avg
62,61 0,00 7,30
0,00
71,56 0,00 9,86
0,00
96,00 0,00 4,66
0,00
84,49 0,00 6,49
0,00
90,82 0,00 1,50
0,00
67,20 0,00 3,99
0,00
90,51 0,00 0,99
0,00
77,40 0,00 2,13
0,00
91,19 0,00 4,16
0,00
53,04 0,00 4,70
0,00
60,01 40,97 58,26 48,24 55,63 45,70 55,92 43,51 60,02 41,35 8,62
2,16
11,37 2,16
7,10
2,02
10,91 1,85
9,40
2,13
69,36 51,87 66,12 57,89 70,79 56,49 66,14 55,76 69,82 52,23 9,67
1,34
8,85
1,05
9,38
1,05
9,08
1,40
8,75
1,14
93,33 87,70 92,00 88,59 92,00 90,96 94,00 88,52 93,33 87,26 5,44
1,12
4,22
1,72
5,26
1,39
5,84
1,36
5,44
1,30
83,01 66,44 81,46 73,61 83,40 73,27 84,99 71,79 84,10 68,94 7,51
1,47
9,28
1,47
9,83
1,66
7,72
1,57
8,23
1,12
88,79 76,11 88,16 80,28 88,34 77,27 88,34 78,69 88,78 76,85 1,52
0,33
1,64
0,32
1,71
0,29
1,56
0,41
1,83
0,29
63,95 46,66 62,91 55,22 65,38 51,04 63,69 53,30 63,82 47,93 4,68
1,27
5,79
0,88
4,23
0,72
5,03
1,17
4,12
1,19
88,25 80,15 87,37 83,90 88,97 81,26 88,30 82,25 88,31 80,22 1,27
0,29
1,45
0,28
1,55
0,29
1,23
0,24
1,32
0,27
74,40 61,34 73,42 68,99 74,00 62,42 73,70 65,59 74,40 61,34 2,47
0,41
2,58
0,33
2,22
0,56
1,90
0,48
2,47
0,41
90,84 82,97 91,19 86,70 89,95 85,31 91,55 85,98 90,67 85,12 5,47
0,91
5,90
0,59
4,73
0,73
3,15
0,53
4,47
0,77
50,90 33,54 49,88 39,53 50,27 35,13 50,29 37,20 50,90 33,59 5,75
0,61
5,47
0,60
4,27
0,45
5,30
0,60
5,75
0,60
78,48 0,00
76,28 62,77 75,08 68,30 75,87 65,89 75,69 66,26 76,41 63,48
4,58 0,00
5,24 0,99
5,66 0,94
5,03 0,92
5,17 0,96
5,18 0,92
Table 2. The test results: the classification qualities and the reduction levels. All fractions are presented in percentages. The number under a mean value for a specific dataset is a standard deviation. The last row ”avg” presents the average values. compl.
MCS
RMHC-P
GA
TS
SeqDSA
class. red. class. red. class. red. class. red. class. red. class. red. qual. level qual. level qual. level qual. level qual. level qual. level BUPA
GLASS
IRIS
PARKINSONS
PHONEME
PIMA
SATIMAGE
WAVEFORM
WDBC
YEAST
avg
62,61 0,00 7,30
0,00
71,56 0,00 9,86
0,00
96,00 0,00 4,66
0,00
84,49 0,00 6,49
0,00
90,82 0,00 1,50
0,00
67,20 0,00 3,99
0,00
90,51 0,00 0,99
0,00
77,40 0,00 2,13
0,00
91,19 0,00 4,16
0,00
53,04 0,00 4,70
0,00
55,05 49,50 60,88 94,56 66,62 94,62 68,13 87,57 67,21 94,56 12,40 2,00
7,05
1,45
9,36
0,79
6,49
1,21
6,49
1,45
65,44 60,69 67,49 85,84 67,40 92,36 68,83 77,05 69,55 85,84 9,10
1,31
11,51 3,90
8,57
0,80
8,91
2,26
7,25
3,90
92,67 89,93 96,67 94,07 96,00 96,89 96,67 93,70 95,33 94,07 7,98
1,32
4,71
1,16
5,62
0,98
4,71
1,12
6,32
1,16
83,99 74,59 81,99 95,38 80,99 98,23 81,35 82,79 82,54 95,38 8,51
1,21
6,20
1,06
6,97
0,57
7,17
2,67
5,55
1,06
87,45 81,67 85,34 96,98 81,92 90,76 84,53 99,37 85,14 96,98 1,48
0,38
1,15
0,64
1,17
0,13
1,57
0,06
1,41
0,64
64,73 57,47 72,14 95,76 68,50 92,51 69,54 94,21 73,97 95,76 6,13
0,59
4,45
2,04
3,82
0,33
3,09
0,73
5,24
2,04
88,09 85,61 89,12 94,89 86,42 90,73 87,37 99,56 88,91 94,89 1,29
0,17
1,07
0,85
1,88
0,22
1,44
0,04
1,30
0,85
74,66 71,80 81,98 97,14 77,42 90,92 82,76 99,55 84,38 97,14 2,21
0,44
2,17
0,28
2,51
0,17
1,75
0,03
1,39
0,28
91,36 86,27 91,74 98,73 91,03 94,06 93,14 98,52 91,37 98,73 5,50
0,65
3,10
0,47
4,63
0,41
2,47
0,23
3,90
0,47
49,13 40,90 55,92 90,07 48,27 91,55 57,36 96,89 56,78 90,07 6,09
0,64
2,22
0,73
3,72
0,49
3,44
0,36
3,72
0,73
78,48 0,00
75,26 69,84 78,33 94,34 76,46 93,26 78,97 92,92 79,52 94,34
4,58 0,00
6,07 0,87
4,37 1,26
4,82 0,49
4,10 0,87
4,26 1,26
proposed in [12]: the number of nearest neighbors k = 1, the number of iterations was set to 200, the reduction rate (the probability of initially setting a chromosome bit to 1) to 0,1, the number of chromosomes to 20, the crossover rate (the probability of swapping chromosome bits in the crossover phase) to 0,5, the mutation rate (the probability of flipping a chromosome bit in the mutation phase) to 0,025, and the fitness function (1) with α = 0,01. The TS parameters are set to the values proposed in [2] with one difference: the condensed initialization was used for datasets like SATIMAGE, PHONEME and WAVEFORM due to the very long constructive initialization phase for those training sets. In RMHC-P: k = 1, m is equal to the reduced set size obtained by SeqDSA (for a better comparison of the algorithms) and n was set to 200 for IRIS, 300 for GLASS and PARKINSONS, 400 for BUPA, 600 for WDBC, 700 for PIMA, 1 000 for YEAST, 2 000 for PHONEME and 3 000 for SATIMAGE and WAVEFORM.
The results obtained for the reduced reference sets are compared with the results of classification obtained with the use of the complete reference set (column "compl." in Tables 1 and 2). All algorithms were implemented in Java and the tests were executed on an Intel Core 2 Duo T7250 2.00 GHz processor with 2 GB RAM.
5 Discussion
In Fig. 2 the points represent the results of the tested algorithms. The dashed line indicates the average fraction of correct classifications on the complete reference set. In the bottom left part of Fig. 2 we can see the results of the algorithms based on the consistency criterion: CNN, RRCNN, RNN, G-K, Tomek's and MCS. The resultant reduced sets offer an average reduction level between 63% (CNN) and 70% (MCS) and a lower classification quality (from 75,1% for RNN to 76,4% for Tomek's) than the average fraction of correct classifications with the use of the complete reference set (78,5%). The other well-known algorithms, GA, RMHC-P and TS, result in a very high average reduction level (from 92,9% for TS to 94,3% for RMHC-P). However, GA results in an average fraction of correct classifications similar to
Fig. 2. Experimental results
that obtained by the algorithms based on the consistency criterion (approx. 76,5%). RMHC-P achieves a classification quality close to that obtained with the complete reference set (approx. 78,3%). TS is the only well-known reduction technique, among all tested, with an average fraction of correct classifications higher than that obtained on the complete reference set (79%). The proposed SeqDSA offers the best results of all tested reference set reduction procedures: its reduction level equals 94,3% (a very strong reduction) and its average classification quality is 79,5%. The reduction phase on medium-sized datasets like PHONEME, SATIMAGE and WAVEFORM was long for the GA and TS algorithms (several hours). SeqDSA reduced SATIMAGE in approx. 50 minutes. In the remaining cases the reduction time was insignificant for all the reduction procedures. It is possible that GA and TS with other values of the parameters, matched to the given dataset, may result in a better reduced set. In that case, however, the reduction phase becomes much longer, due to the validation tests for different values of the parameters.
6 Conclusions
The presented Sequential Double Sort Algorithm is based on two measures: the representative measure and the Gowda and Krishna MNV. The reference set is initially sorted with the use of these measures and the reduced set is built by sequentially adding and removing samples according to the double sort order. The proposed method has the following advantages:
1. A high fraction of correct classifications for the resultant reduced training set (on average higher than the fractions obtained on the complete training set).
2. A very high reduction level of the training set (over 94%).
3. A unique solution (no randomization).
4. No parameters.
However, it must be taken into consideration that for large datasets the running time of the presented method may be a serious limitation (tens of minutes for medium-sized datasets like SATIMAGE).
References 1. Asuncion, A., Newman, D.J.: UCI Machine Learning Repository. University of California, School of Information and Computer Science, Irvine, CA (2007), http://www.ics.uci.edu/~ mlearn/MLRepository.html 2. Cerveron, V., Ferri, F.J.: Another move towards the minimum consistent subset: A tabu search approach to the condensed nearest neighbor rule. IEEE Trans. on Systems, Man and Cybernetics, Part B: Cybernetics 31(3), 408–413 (2001)
3. Dasarathy, B.V.: Minimal consistent set (MCS) identification for optimal nearest neighbor decision systems design. IEEE Transactions on Systems, Man, and Cybernetics 24(3), 511–517 (1994) 4. Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification, 2nd edn. John Wiley & Sons, Inc., Chichester (2001) 5. Fisher, R.A.: The use of multiple measures in taxonomic problems. Ann. Eugenics 7, 179–188 (1936) 6. Gates, G.W.: The reduced nearest neighbor rule. IEEE Transactions on Information Theory IT 18(5), 431–433 (1972) 7. Gowda, K.C., Krishna, G.: The condensed nearest neighbor rule using the concept of mutual nearest neighborhood. IEEE Transaction on Information Theory IT-25(4), 488–490 (1979) 8. Hart, P.E.: The condensed nearest neighbor rule. IEEE Transactions on Information Theory IT-14(3), 515–516 (1968) 9. Kohavi, R.: A study of cross-validation and bootstrap for accuracy estimation and model selection. In: Proc. 14th Int. Joint Conf. Artificial Intelligence, pp. 338–345 (1995) 10. Kuncheva, L.I.: Editing for the k-nearest neighbors rule by a genetic algorithm. Pattern Recognition Letters 16, 809–814 (1995) 11. Kuncheva, L.I.: Fitness functions in editing k-NN reference set by genetic algorithms. Pattern Recognition 30(6), 1041–1049 (1997) 12. Kuncheva, L.I., Bezdek, J.C.: Nearest prototype classification: clustering, genetic algorithms, or random search? IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 28(1), 160–164 (1998) 13. Nakai, K., Kanehisa, M.: Expert System for Predicting Protein Localization Sites in Gram-Negative Bacteria. PROTEINS: Structure, Function, and Genetics 11, 95–110 (1991) 14. Raniszewski, M.: Reference set reduction algorithms based on double sorting. In: Computer Recognition Systems 2: the 5th International Conference on Computer Recognition Systems CORES 2007, pp. 258–265. Springer, Heidelberg (2007) 15. Skalak, D.B.: Prototype and feature selection by sampling and random mutation hill climbing algorithms. In: 11th International Conference on Machine Learning, New Brunswick, NJ, USA, pp. 293–301 (1994) 16. The ELENA Project Real Databases, http://www.dice.ucl.ac.be/ neural-nets/Research/Projects/ELENA/databases/REAL/ 17. Theodoridis, S., Koutroumbas, K.: Pattern Recognition, 3rd edn. Academic Press, Elsevier (2006) 18. Tomek, I.: Two modifications of CNN. IEEE Transactions on Systems, Man and Cybernetics SMC-6(11), 769–772 (1976) 19. Toussaint, G.T.: A counter-example to Tomek’s consistency theorem for a condensed nearest neighbor decision rule. Pattern Recognition Letters 15(8), 797– 801 (1994) 20. Wilson, D.R., Martinez, T.R.: Reduction techniques for instance-based learning algorithms. Machine Learning 38(3), 257–286 (2000)
Recognition of Solid Objects in Images Invariant to Conformal Transformations Boguslaw Cyganek AGH University of Science and Technology Al. Mickiewicza 30, 30-059 Kraków, Poland [email protected]
Summary. This paper extends the technique of object recognition by matching of histograms of local orientations obtained with the structural tensor. The novel modification relies on building phase histograms in the morphological scale-space, which allows inference of the intrinsic structure of the classified objects. Additionally, the phase histograms are nonlinearly filtered to remove noise and improve accuracy. A matching measure has been devised to allow classification of rotated or scaled objects. It is endowed with a rotation penalty factor which allows control of the preferable positions of objects. Additionally, the rate of false positive responses was lowered: a match is accepted only if it is significantly above the other best matches. The method was tested and verified in the classification of road signs and static hand gestures. It showed high accuracy and a very fast response.
1 Introduction
This paper presents an extension of the recognition technique based on matching of the phase (orientation) histograms computed with the structural tensor [3]. The idea of object recognition from phase histograms relies on using the precise structural tensor (ST) to find image areas with sufficient structure and then computing the phase values of local orientations in the places found. With sufficiently precise gradient filters these parameters can be obtained with sufficient accuracy. Moreover, the scale parameters of the ST can be controlled to allow detection and processing at different scale levels. Classification is done by matching of the modulo-shifted phase histograms. This allows detection of objects even if they are rotated or in a different scale. Thus, the proposed method is resistant to shifts, rotations and proportional changes of scale, as well as to different lighting conditions and small occlusions. Additionally, the method is accurate and fast. Computation of orientation histograms in edge points of objects was already proposed by McConnell [6] and then by Freeman [4]. However, there are many objects which have solid, complex or fiber-like structures and for which a description solely by their edges is not sufficient, since they are not merely wire-frame-like. Therefore, instead of processing edges at one scale, we propose
Fig. 1. Perception of a black arrow object (a). Detection based solely on edges disregards interior of an object (b). Edges computed at a different (dilated) scale (c). The same object perceived at different morphological scales (d).
to take the structural places determined by the stick component of the ST computed at different levels of the morphological scale-space (MSS). This idea is explained in Fig. 1. Additional information on the type of an object, such as whether its interior is filled (Fig. 1a) or the object is only framed (Fig. 1b), can be advantageous. However, if only edges are taken into consideration, then the two will have the same phase histograms and, in consequence, will be perceived as the same object by the detector. When looking for a solid rigid 2D object, like the one presented in Fig. 1a, a histogram of the phases of its edges provides important information on the mutual relation and support of the phases. The mutual relations will be the same regardless of the object position, its rotation or scaling. Thus these invariant properties provide important information on the type of an object, and the invariant will be valid under any conformal transformation. However, the support of different phases depends on the cumulative lengths of the segments with the same phase. These segments are constructed from the border of an object. For instance, the two objects in Fig. 1a and Fig. 1b cannot be told apart in this way. Unfortunately, due to noise and distortions different objects can show very similar phase histograms. Therefore, if the type of an object is known, such as the one in Fig. 1a, then a much stronger support can be gathered if edges are collected at different scales. This concept is depicted in Fig. 1c for a coarse-scale edge, and in Fig. 1d for a series of images of the same object viewed at different scales. To take advantage of solid objects, the MSS is constructed first and then, at each of its levels, the phase support is gathered into the common phase histogram characterizing an object. The MSS can be obtained by consecutive application of the morphological dilation and erosion operators with a certain structuring element (usually set arbitrarily or guided by knowledge of the shape of an object) [5]. For a signal f(x) the MSS is built by the multiscale dilation-erosion operator, which is defined as follows [5]:

m(f, s_\sigma) = \begin{cases} d(f(x), s_\sigma) & \text{if } \sigma > 0 \\ f(x) & \text{if } \sigma = 0 \\ e(f(x), s_\sigma) & \text{if } \sigma < 0 \end{cases}    (1)
where σ is a scale, and d(f(x), s_σ) and e(f(x), s_σ) are the dilation and erosion operators, respectively. As pointed out by Jackway et al., the structuring element also has to comply with additional conditions to avoid unwanted shifts of the features in the processed image. These are fulfilled by the following zero-shift structuring element:

\sup_{y \in S} [s(y)] = 0 \quad \text{and} \quad s(0) = 0    (2)
Let us observe that in the above we have a notion of a positive scale (σ > 0), which corresponds to the morphological dilation, whereas for negative scales (σ < 0) the operator boils down to the erosion. As a consequence, positive scales pertain to the local maxima of an image, while negative ones to the local minima. Obviously, for larger absolute values of σ the filtered image contains fewer details. Due to the outlined properties of the MSS, this leads to an enhancement of the representation of the characteristic local features and, as a consequence, to boosting of the phases of these features which are stored in the histograms. Compared with the method in [3], the matching measure was also changed. It now relies on the Bhattacharyya distance and contains a penalty factor for rotations, which can be used if some orientations are known to be preferred, such as an up-right position for the road signs. There is yet another modification, which allows control of the strength of the winning histogram in the matching process. For this purpose a method adopted from the stereo matching domain is proposed. It consists in checking not the absolute value of the best match alone but rather the relation between the values of the two best matches. If these two are sufficiently spread, then the response is accepted since the extremum is 'unique'. Otherwise, the classifier cannot unambiguously provide the class of a test pattern. Such a strategy decreases the rate of false positive responses, thus increasing the precision of the system.
2 Object Recognition Process
Fig. 2 lists the steps of the histogram building function. The input of the algorithm consists of reference patterns or test images, each denoted by I, and the filtering level α. The nonlinear filtering of the histograms consists in suppressing to 0 all bins which are below the level set by α. Fig. 6 depicts this process. Computation of the orientation histograms (steps 2, 7 and 13) follows the algorithm presented in [3]. All information about the classified patterns comes in the form of the phase histograms. The number of their bins #K is set by the user; thus the quantization factor is 360°/#K. In all our experiments #K was 360. To find the best prototype, the histogram t of an input image is checked against all S reference histograms s which were precomputed and stored in the database. Since we allow rotation of the test objects, the matches are also computed for the modulo-shifted versions of the histogram t. Thus, our recognition process can be stated as the minimization of the functional
1. Acquire an image I, set up the filtering level α and initialize data structures.
2. Compute the orientation histogram H0(I) from the image I.
3. Filter H0: H0 = F(H0, α).
4. Create a copy J of the original image I: J = I.
5. For each level i of the erosion scale do:
   a) Compute the morphological erosion J = Ei(J).
   b) Compute the orientation histogram Hi(J).
   c) Filter Hi: Hi = F(Hi, α).
   d) Increment the histogram H0: H0 += Hi.
6. Create a copy J of the original image I: J = I.
7. For each level i of the dilation scale do:
   a) Compute the morphological dilation J = Di(J).
   b) Compute the orientation histogram Hi(J).
   c) Filter Hi: Hi = F(Hi, α).
   d) Increment the histogram H0: H0 += Hi.
Fig. 2. The histogram building procedure: the input consists of an image I and the filtering level α; the output is the orientation histogram H0 of an object in the morphological scale-space.
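For illustration only, the procedure of Fig. 2 can be sketched in Python with SciPy's grey-scale morphology as below. This is not the author's C++ implementation: a simple gradient-orientation histogram stands in for the structural-tensor phase computation of [3], and the relative α threshold is our own simplification of the nonlinear filtering step.

import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def phase_histogram(img, bins=360, alpha=0.0):
    # gradient-orientation histogram weighted by gradient magnitude
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    mag = np.hypot(gx, gy)
    ang = (np.degrees(np.arctan2(gy, gx)) + 360.0) % 360.0
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 360.0), weights=mag)
    if hist.max() > 0:
        hist[hist < alpha * hist.max()] = 0.0    # nonlinear noise squeezing
    return hist

def accumulated_histogram(img, n_scales=2, bins=360, alpha=0.05, size=3):
    # accumulate histograms of the image and of its eroded and dilated versions
    h = phase_histogram(img, bins, alpha)
    eroded = dilated = np.asarray(img, dtype=float)
    for _ in range(n_scales):
        eroded = grey_erosion(eroded, size=(size, size))
        dilated = grey_dilation(dilated, size=(size, size))
        h += phase_histogram(eroded, bins, alpha)
        h += phase_histogram(dilated, bins, alpha)
    return h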
E(s, t, r) = D_B(s, t\%r) + \Theta(s, t, r),    (3)

where s is a histogram of the s-th object in the database, D_B is a histogram match measure, t%r denotes an r-modulo-shifted histogram of the test object, r is in the range ±(0..r_max), and Θ is a penalty factor. In our system it favors objects in the up-right position; Θ is given as follows:

\Theta(s, t, r) = \begin{cases} \gamma \frac{r}{r_{max}} D_B(s, t\%r) & \text{for } r_{max} \neq 0 \\ 0 & \text{for } r_{max} = 0 \end{cases}    (4)

where γ denotes a constant which in our experiments was set in the range from 0 to 10%. Certainly, setting γ to 0 turns the rotation penalty factor off in the optimization (3), i.e. all rotations are equally preferred. Thus, the classifier answers with the class s of the test object and its rotation r with respect to the best fit, as follows:

\arg\min_{1 \le s \le S,\ -\Delta r \le r \le +\Delta r} E(s, t, r) = \arg\min_{1 \le s \le S,\ -\Delta r \le r \le +\Delta r} \left(1 + \gamma \frac{r}{r_{max}}\right) D_B(s, t\%r)    (5)
The above optimization problem is simple to solve since histograms are one-dimensional structures. Thus, it is not an overwhelming computational task to match all pattern histograms with all possible rotations. For instance, for 40 patterns and 45 quantized rotation values (for a total rotation of ±22°) this results in 1800 histogram matches, which requires only a few milliseconds on modern PCs. The recognition stage is summarized in Fig. 3. For matching of the orientation histograms in (3) the Bhattacharyya measure is proposed [2]. It
1. Compute the database of pattern histograms Hs: s ∈ S. Compute the histogram Ht of the test pattern t (see Fig. 2). Set up the parameter γ of the rotation penalty and the separation parameter ε of the best match.
2. For each object s ∈ S do:
   a) For each rotation -Δr ≤ r ≤ Δr do:
      i. Compute E(s, t, r) from (3); store the two best values EB1 and EB2 (EB1 > EB2) along with their parameters s and r.
3. If (EB1 - EB2) > ε then return the class s and rotation r found for EB1; otherwise respond: 'Unknown pattern'.
Fig. 3. Recognition process based on accumulated morphological phase histograms. The input consists of the database of pattern histograms Hs and the histogram of the test object H0. The output is the class s of the pattern (histogram) which fits the test object, together with its rotation, or the 'Unknown pattern' message in case of insufficient separation of the best match values.
gave the best experimental results compared with the measures cited in [3]. The Bhattacharyya measure follows from the definition of the scalar product of two vectors a and b representing probabilities of multinomial populations [1]:

D_B = \cos\angle(a, b) = \sum_{i=1}^{N} \sqrt{p_i}\,\sqrt{q_i} \Big/ \sqrt{\left(\sum_{i=1}^{N} p_i\right)\left(\sum_{i=1}^{N} q_i\right)} = \sum_{i=1}^{N} \sqrt{p_i q_i}    (6)

where N denotes the dimensionality of a population and

a_i = \sqrt{p_i} \quad \text{and} \quad b_i = \sqrt{q_i},    (7)
in which p_i and q_i are values of probability, so Σ_i p_i = Σ_i q_i = 1. For instance, it is easy to observe that a value p_i can be seen as a squared directional cosine of a population vector a with respect to the i-th axis x_i of the coordinate system of the population, p_i = cos^2 ∠(a, x_i) (8), which results in values from 0 to 1. D_B also takes values from 0 to 1; the higher the value, the closer the two distributions. Thus, if D_B = 1, then the two populations are the same. As shown in [1], D_B exhibits many advantages over other measures when applied to the comparison of different distributions, especially if they are not Gaussian.
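The matching stage of (3)-(6) can then be sketched as follows. This is an illustrative reimplementation written for this text, not the system's code: histograms are assumed to be normalised to unit sum (so D_B reduces to Σ√(p_i q_i)), |r| is used in the penalty, and the penalised dissimilarity 1 - D_B is minimised, which is one consistent reading of (3)-(5); delta_r, gamma and eps are placeholder parameter values.

import numpy as np

def bhattacharyya(p, q):
    # D_B of two normalised histograms, cf. (6)
    return float(np.sum(np.sqrt(p * q)))

def classify(test_hist, ref_hists, delta_r=22, gamma=0.05, eps=0.05):
    # minimise (1 + gamma*|r|/delta_r) * (1 - D_B) over patterns s and shifts r
    scores = []
    for s, ref in enumerate(ref_hists):
        for r in range(-delta_r, delta_r + 1):
            db = bhattacharyya(ref, np.roll(test_hist, r))
            penalty = 1.0 + (gamma * abs(r) / delta_r if delta_r else 0.0)
            scores.append((penalty * (1.0 - db), s, r))
    scores.sort()
    (e1, s1, r1), (e2, _, _) = scores[0], scores[1]
    # accept only if the winner is separated enough from the runner-up (cf. Fig. 3)
    return (s1, r1) if e2 - e1 > eps else ('Unknown pattern', None)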
3 Experimental Results
The system was implemented in C++. Tests were run on a PC equipped with a Pentium IV 3.4 GHz processor and 2 GB of RAM. Two types of objects were used to test the performance of the method: road signs, which are presented in [3], and static images of hand gestures.
Fig. 4. An object (top row) and its stick tensor components (bottom row) at different morphological scales (dilated versions to the left, eroded to the right). Input image in the centre.
Fig. 4 depicts the process of building the MSS from an image of a hand. It was observed that application of the MSS in the stage of building phase histograms increases the discriminative abilities of the classifier. This can be observed in the plots of the match values of the histograms (Fig. 5a), which always increased after applying some levels of the scale-space in the computation of the phase histograms. The value 0 of the scale in Fig. 5 means that the scale-space was not used (the same as in [3]). The question is, however, how many levels of the scale should be employed. The plot in Fig. 5a provides some hints. For some objects, after a certain value of the scale level there is no further improvement in classification. This is caused by reaching the internal scale level of an object, which usually happens sooner for the erosion levels since some details cease to exist after a few erosions. More scale levels also increase the computation. In practice we found that two levels of erosion and up to three or four levels of dilation are sufficient. For example, in the case of the "No pass" road sign, the match value fell after reaching two scale levels (the plot with
Fig. 5. An influence of the level of the morphological scales on match value (a) and match separation value (b) for the winning patterns (road signs). The dashed line denotes a threshold value below which the classifier responds Unknown pattern to lower the false positive ratio.
Fig. 6. Effect of nonlinear histogram filtering (noise squeezing) on response of the system. Shown two test patterns (upper row) and two responses of the system (bottom row). Nonlinear filtering of the histograms (right column) allows correct response (bottom right).
circles in Fig. 5a). This is caused by fine details which disappear at higher levels of the scale. The overall improvement in the recognition ratio, expressed in terms of the true positive rate, is in the order of 92-97% for the road signs under daytime lighting conditions and 75-82% for static hand gestures. This constitutes an improvement of about 5-10% over the method which does not utilize the MSS [3]. To lower the level of false positives, the ratio of the two best matches is checked and if it is less than a certain threshold (in practice 5-10%), then the classifier responds "Unknown pattern". The influence of the level of the scale-space on this separation ratio is different for different objects and does not always increase with a higher scale level, as shown in the plot in Fig. 5b. For example, for the sign "40 km/h speed limit" this ratio is below the set threshold of 10%. This is caused by the many signs which contain "0" in their pictograms, e.g. "30", "50", "70" and so on; because of this their histograms are very similar due to a common sub-pattern. A further improvement was achieved thanks to the elimination of noise in the histograms at each level of scale processing (Fig. 2). Fig. 6 depicts the effect of nonlinear histogram filtering (noise squeezing) on the response of the system. Two test patterns are shown (upper row) and two responses of the system (bottom row). Nonlinear filtering of the histograms (right) allows a correct response (bottom right), whereas unfiltered histograms result in a false positive.
4 Conclusions The paper presents an improved classifier which operates on phase histograms [3]. The main novelty is the computation of the accumulated histograms in the nonlinear morphological scale-space, which takes into account the shape of the whole object rather than only its silhouette. This allows amplification of the dominating directions of the local phases and increases the discriminative power. As a result, an improvement in classification accuracy was achieved compared to [3]. Further improvement of accuracy was achieved thanks to the method which checks the separation between the best match values of the histograms, as well as thanks to the nonlinear filtering of the histograms and the superior performance of the Bhattacharyya match measure. The latter can be explained by the non-Gaussian nature of our histograms. These features were positively verified by experiments with the classification of road signs and static hand gestures.
Acknowledgement The work was supported by the Polish funds for scientific research in the year 2009.
References
1. Aherne, F.J., Thacker, N.A., Rockett, P.I.: The Bhattacharyya Metric as an Absolute Similarity Measure for Frequency Coded Data. Kybernetika 34(4), 363–368 (1998)
2. Bhattacharyya, A.: On a Measure of Divergence Between Two Statistical Populations Defined by their Probability. Bull. Calcutta Mathematic Society 35, 99–110 (1943)
3. Cyganek, B.: A Real-Time Vision System for Traffic Signs Recognition Invariant to Translation, Rotation and Scale. In: Blanc-Talon, J., Bourennane, S., Philips, W., Popescu, D., Scheunders, P. (eds.) ACIVS 2008. LNCS, vol. 5259, pp. 278–289. Springer, Heidelberg (2008)
4. Freeman, W.T., Roth, M.: Orientation Histograms for Hand Gesture Recognition. Mitsubishi Electric Research Laboratories, TR-94-03a (1994)
5. Jackway, P.T., Deriche, M.: Scale-Space Properties of the Multiscale Morphological Dilation-Erosion. IEEE PAMI 18(1), 38–51 (1996)
6. McConnell, R.: Method of and apparatus for patt. recogn. US Patent No. 4,567,610 (1986)
A New Notion of Weakness in Classification Theory
Igor T. Podolak and Adam Roman
Institute of Computer Science, Faculty of Mathematics and Computer Science, Jagiellonian University, Łojasiewicza 6, Kraków, Poland
[email protected], [email protected]
Summary. The notion of a weak classifier, as one which is "a little better" than a random one, was first introduced for 2-class problems [1]. Extensions to K-class problems are known. All are based on relative activations for the correct and incorrect classes and do not take into account the final choice of the answer. A new understanding and definition is proposed here. It takes into account only the final choice of classification that must be made. It is shown that for a K-class classifier to be called "weak", it needs to achieve a risk value lower than 1/K. This approach considers only the probability of the final answer choice, not the actual activations.
1 Introduction The classification for a given problem may be solved using various machine learning approaches. One of the most effective is when several simple classifiers are trained and their responses combined [2, 3, 4, 5]. It is possible that the training of subsequent classifiers depends on how well the already trained classifiers perform. This is the background of the boosting approach introduced by Schapire and further developed in [6, 7, 8]. Boosting algorithms train a sequence h1, h2, ... of simple classifiers, each performing slightly better than a random one. After training hi, a boosting algorithm, e.g. AdaBoost, changes the probability distribution function with which examples are chosen for further training, paying more attention to incorrectly recognized examples. The final classifier is a weighted sum of all individual classifiers hi. The algorithm is well defined for a 2-class problem, with an extension to K-class (K > 2) problems through a so-called pseudo-loss function [7]. Using the pseudo-loss approach, a classifier is defined to be weak if the expected activation for the true class is higher than the mean activation for the incorrect classes. The authors have worked on a so-called Hierarchical Classifier (HC) [9]. The HC is built of several simple classifiers which divide the input space recursively into several overlapping subproblems (i.e. the HC is also a boosting approach), which are then solved on their own, and the results are combined with a Bayesian approach.
It became apparent that the pseudo-loss is not satisfactory. This is because it does not take into account that, after a classifier finds the class distribution vector, the HC has to select the output class, using e.g. a soft-max approach, to form sub-problems. Therefore, the correct question to ask is not what the expected activation for the true class is (which is satisfactory for the AdaBoost kind of boosting), but what the minimal activation value is so that the class is chosen with a given minimal probability (needed by an HC-type complex classifier). In this paper we introduce a classifier weakness notion for use in complex architectures where the individual classifiers (like in the HC) have to make the decision. The notion is more demanding on simple classifiers, but takes into account their whole structure.
2 A Difference between 2- and K-Class Classifiers The problem of classification can be defined as follows:

Definition 1. Given a finite set of training examples D = {(x_i, C_i), i = 1, ..., N}, such that x_i ∈ X, the space of input attributes, and C = {C_1, ..., C_K} is a finite set of classes, a classifier Cl is a rule

Cl : x \in X \longrightarrow C,   (1)

which assigns a class (labeling) to an attribute vector.

We have deliberately excluded here the multi-label case, where an example can be labeled with more than one class. A loss function L(C, Cl(X)) measures errors between the set of target values C and the classifications Cl(X). It can be defined in different ways, but for the classification task we shall use the 0-1 loss function L(C, Cl(x)) = I(C \neq Cl(x)), and \frac{1}{N}\sum_i L(C_i, Cl(x_i)) for a set of N examples, where C_i is the true class of example x_i. A random classifier would achieve a loss value of 1/2 over a set of examples, and a classifier to be called weak has to achieve a slightly better loss value. This is quite straightforward for 2-class problems, where the class chosen as the answer is the one with the higher activation value. In the case of K-class classifiers, K > 2, a classifier that attained an error lower than 1/2 would actually be much stronger than a random one, since a random classifier would have an error value of (K-1)/K. A pseudo-loss function was proposed by Freund and Schapire [8]:

l(x_i) = \frac{1}{2}\left(1 - Cl(x_i, C_i) + \frac{1}{K-1}\sum_{C \neq C_i} Cl(x_i, C)\right),   (2)
where Cl(x_i, C_i) is the activation for the true class C_i of example x_i. The pseudo-loss has the value 1/2 if the activation for the true class is equal to the mean activation for the incorrect classes. The pseudo-loss function forms the basis of the AdaBoost.M2 classifier.
A K-class classifier is now called weak if the expected value of the pseudo-loss function is lower than 1/2. For some true class C_i, this definition only requires that E[Pr(x|C_i)] > E[Pr(x|C \neq C_i)], i.e. the activation of the true class to be higher than the mean activation for the other classes. This does not take into account that, after Cl(x) returns a K-element probability answer vector Y = [y_1, y_2, ..., y_K], we have to use some rule to select the output class, e.g. softmax. The most natural approach would be to select i = \arg\max_k Cl_k(x).
It is easily seen that y_i > 1/K by some small margin ε for some class C_i is not enough for that class to be chosen.

Example 1. For a 3-class problem, let X_1 be the subset of examples with true class of index 1. If E[Y | x ∈ C_1] = [0.35, 0.36, 0.29], then the mean activation for the true class is certainly higher than 1/K, but this class would usually not be selected! Again, for 3 classes, let the expected activation for some true class be 0.39. The probability that this class would actually be selected, as will be shown in the next section, is only 0.278689, much lower than 1/K = 1/3! Using a pseudo-loss based definition, such a classifier would be called weak.

This comes from the nature of the selection mechanism, which chooses the class with the highest activation. For the 2-class case, if the activation is lower than 1/K = 1/2, this is always the other class. For K > 2 the choice is not obvious, as it can be any of the K − 1 classes. It is therefore necessary to reformulate the weakness notion in order to take this selection mechanism into account.

Definition 2. A K-class classifier Cl is weak if the probability that the activation for the true class is higher than for any other class is higher than 1/K, i.e. for y = Cl(x)

\Pr(y_i > y_j \ \forall j \neq i \mid x \in C_i) > 1/K.   (3)

We also define

\alpha : \mathbb{N} \longrightarrow (0, 1],   (4)

which returns the minimal E[y_i | C_i] such that (3) holds. We define α(1) = 1. The definition based on the pseudo-loss notion required only that the mean activations for true classes were higher than 1/K; now we have the requirement that over 1/K of the examples from the true classes are selected.
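The numbers in Example 1 can be checked with a short simulation under the same assumption used later in Theorem 1, namely that the remaining activations are uniformly distributed on the simplex summing to 1 − α. The sketch below is illustrative only (plain Python, hypothetical function name), not part of the original paper.

```python
import random

def pr_true_class_selected(alpha, K, trials=200_000):
    """Estimate the probability that a fixed true-class activation `alpha`
    beats the other K-1 activations, drawn uniformly on the simplex of sum 1-alpha."""
    wins = 0
    for _ in range(trials):
        cuts = sorted(random.uniform(0.0, 1.0 - alpha) for _ in range(K - 2))
        points = [0.0] + cuts + [1.0 - alpha]
        others = [points[i + 1] - points[i] for i in range(K - 1)]
        if all(alpha > y for y in others):
            wins += 1
    return wins / trials

print(pr_true_class_selected(0.39, 3))   # roughly 0.279, well below 1/3
```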
3 Main Result In this section we describe our main result - a theorem which gives the necessary and sufficient conditions for a classifier to be weak in the sense of Def. 2. The principal assumption here is a uniform distribution of activations on all but the true class.
Theorem 1. Let Cl be a classifier with a uniform activation distribution on all but the true classes. Then Cl is weak if and only if \frac{1}{K}\sum_{i=1}^{K} E[p_i \mid i] > \alpha(K), where

\alpha(K) = \min\left\{ \alpha : \sum_{i=0}^{K-2} (-1)^i \binom{K-1}{i} \left(1 - \frac{i\alpha}{1-\alpha}\right)_+^{K-2} > \frac{1}{K} \right\},   (5)

where (x)_+ = x if x > 0 and 0 if x ≤ 0.

Proof (sketch). Let Cl be a K-class weak classifier and Y = [y_1, ..., y_K] its probability answer vector. Let t denote the true class. From the assumption we know that the random vector [y_1, y_2, ..., y_{t-1}, y_{t+1}, ..., y_K] is uniformly distributed and \sum_{i \neq t} y_i = 1 − α, where α = y_t. Its consecutive coordinates y_i, i ≠ t, are the lengths of the intervals related to the partition of the interval I = [0, 1 − y_t], determined as follows. Let ξ_1, ..., ξ_{K-2} be K − 2 random variables uniformly distributed on I. Put ξ_0 := 0, ξ_{K-1} := 1 − y_t. Let ξ_{(1)}, ..., ξ_{(K-2)} be the permuted ξ_1, ..., ξ_{K-2}, such that ξ_{(1)} < ··· < ξ_{(K-2)}. Then {η_{(i)} = ξ_{(i)} − ξ_{(i-1)}}_{i=1}^{K-1} is the family of order statistics. Put η = (η_{(1)}, ..., η_{(K-1)}). Let us consider the case where the ξ_i are random variables on I = [0, 1]. It is then easy to transform any length l of a subinterval of [0, 1] into [0, 1 − y_t] by use of the scaling function s(l) = l/(1 − y_t). It is a well-known fact that

(\eta_{(1)}, \eta_{(2)}, \ldots, \eta_{(K-2)}) \sim \mathrm{Dir}(\underbrace{1, \ldots, 1}_{K-2}; 1),   (6)

where Dir(α_1, ..., α_{K-2}; α_{K-1}) is a Dirichlet distribution parametrized by a_1, ..., a_{K-1} (naturally η_{(K-1)} = 1 − \sum_{i=1}^{K-2} η_i). In our case we have a_i = 1 for all i, so the density function of the probability vector (η_{(1)}, ..., η_{(K-2)}) is f(x_1, ..., x_{K-2}; 1, ..., 1) = (K − 2)!. Let x ∈ X and let t be the index of the true class. We have

\Pr\{\arg\max_i y_i^{Cl}(x) = t \mid t\} = \Pr\{y_t > y_i \ \forall i \neq t \mid t\}.   (7)

We only need to find the minimal y_t (denoted by α) for which this probability is greater than 1/K. Notice that if we want to find α for the case I = [0, 1 − α], then in the case I = [0, 1] we need to find the corresponding value α/(1 − α), computed with the scaling function s(l) for l = α. So, let us fix α. We have

\Pr\left\{\frac{\alpha}{1-\alpha} > y_i \ \forall i \neq t \mid t\right\} = \int_A (K-2)!\, dx_1 \cdots dx_{t-1}\, dx_{t+1} \cdots dx_{K-1},   (8)

where

A = \left\{(x_1, \ldots, x_{t-1}, x_{t+1}, \ldots, x_{K-1}) : \forall i \neq t \ \ x_i < \frac{\alpha}{1-\alpha} \ \wedge \ \sum_{i \neq t} x_i = 1 - \frac{\alpha}{1-\alpha}\right\}.
In order to compute this probability, we will use the marginal distributions of η. It is known that the marginal distribution η_{(i)} of η is a Beta distribution B(1, K − 2). Recall that the PDF of a random variable X ∼ B(a_1, a_2) is

f(x; a_1, a_2) = \frac{x^{a_1-1}(1-x)^{a_2-1}}{\int_0^1 u^{a_1-1}(1-u)^{a_2-1}\,du}.   (9)

Putting a_1 = 1, a_2 = K − 2 we obtain, for each i = 1, 2, ..., K − 1,

\Pr\{\eta_{(i)} < x\} = \int_0^x \frac{(1-t)^{K-3}}{\int_0^1 (1-u)^{K-3}\,du}\,dt = 1 - (1-x)^{K-2}.

Hence, \Pr\{\eta_{(i)} > x\} = (1-x)^{K-2}. Now, it is easy to observe that

\Pr\{\eta_{(1)} > x_1, \ldots, \eta_{(K-1)} > x_{K-1}\} = \left(1 - x_1 - x_2 - \cdots - x_{K-1}\right)_+^{K-2}.   (10)

From the inclusion-exclusion principle we finally obtain

\Pr\left\{\eta_{(1)} < \frac{\alpha}{1-\alpha}, \ldots, \eta_{(K-1)} < \frac{\alpha}{1-\alpha}\right\} = \sum_{i=0}^{K-2} (-1)^i \binom{K-1}{i} \left(1 - \frac{i\alpha}{1-\alpha}\right)_+^{K-2}.   (11)

Now it is enough to say that Cl is weak if this probability is greater than 1/K. This ends the proof.
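Since (5) has no closed form for general K, a numerical scan over α is the simplest way to evaluate it. The sketch below (plain Python with hypothetical function names, not part of the original paper) implements the probability (11) and searches for the smallest α for which it exceeds 1/K; for K = 3 it returns a value just above 0.4, and for K = 5 about 0.289, consistent with the margins quoted in the Discussion.

```python
import math

def selection_probability(alpha, K):
    """Probability (11) that a true-class activation alpha beats the remaining
    K-1 activations, uniformly distributed on the simplex of sum 1-alpha."""
    r = alpha / (1.0 - alpha)
    return sum((-1) ** i * math.comb(K - 1, i) * max(1.0 - i * r, 0.0) ** (K - 2)
               for i in range(K - 1))

def alpha_of_K(K, step=1e-5):
    """Smallest alpha for which the selection probability exceeds 1/K, Eq. (5); K >= 3."""
    alpha = step
    while alpha < 1.0 and selection_probability(alpha, K) <= 1.0 / K:
        alpha += step
    return alpha

for K in (3, 5, 10, 20):
    print(K, round(alpha_of_K(K), 4), round(0.842723 * K ** -0.68566, 4))
```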
4 Discussion Figure 1 presents the α values resulting from Theorem 1, together with comparisons. It can easily be seen that, except for the case of K = 2, α(K) > 1/K. This is a result of the requirement that Cl has to select the output class by choosing the one with the highest activation. The value α(2) = 1/2; the margin α(K) − 1/K then grows for K up to 5 (α(5) − 1/5 ≈ 0.0889) and then starts to slowly decline (α(3) − 1/3 ≈ α(15) − 1/15). This shows that the training of classifiers in the above range is more complex in relation to the pseudo-loss defined weakness notion. The proposed definition can be used for complex architectures in which individual classifiers need to select the output class, as in the HC [9]. The authors have proposed lemmas which show that such an architecture can achieve strong learning with weak classifiers at the nodes, provided that they follow the definition proposed here. For higher K values, the evaluation of α(K) directly from (5) may be cumbersome. On the other hand, it appears that α(K) can be well fitted using a least squares minimisation with an a x^b model:

\alpha(K) \approx 0.842723\, K^{-0.68566},   (12)
Fig. 1. The α value for different numbers of classes. From top to bottom: alpha values against the 1/K from a pseudo-loss type definition, the α − 1/K margin value, the α/(1/K) ratio, and the approximation of α with f(K) = 0.842723 · K^{−0.68566}. Note that the class number is logarithmically scaled.
although, naturally, higher values of K are rarely found in classification problems. The proposed notion would have no effect on the AdaBoost type algorithm if used in place of Schapire's pseudo-loss notion. On the other hand, it is methodologically proper and complete, since it describes the whole classifier together with the eventual selection of the output class. Moreover, it seems to us that this notion of weakness is more intuitive than the pseudo-loss definition, and is elegant. We adopted this notion in a theorem which states the sufficient conditions for an HC classifier with n levels to have lower risk than one with n − 1 levels [12].
References
1. Kearns, M., Valiant, L.: Cryptographic limitations on learning Boolean formulae and finite automata. Journal of the Association for Computing Machinery 41(1), 67–95
2. Bax, E.: Validation of Voting Committees. Neural Computation 4(10), 975–986 (1998)
3. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning. Springer, New York (2001)
4. Tresp, V.: Committee Machines. In: Hu, Y.H., Hwang, J.-N. (eds.) Handbook for Neural Network Signal Processing. CRC Press, Boca Raton (2001)
5. Kittler, J., Hojjatoleslami, A., Windeatt, T.: Strategies for combining classifiers employing shared and distinct pattern representations. Pattern Recognition Letters 18, 1373–1377 (1997)
6. Schapire, R.E.: The strength of weak learnability. Machine Learning 5, 197–227 (1990)
7. Eibl, G., Pfeiffer, K.-P.: Multiclass boosting for weak classifiers. Journal of Machine Learning 6, 189–210 (2005)
8. Freund, Y., Schapire, R.E.: A decision theoretic generalization of online learning and an application to boosting. Journal of Computer and System Sciences 55, 119–139 (1997)
9. Podolak, I.T.: Hierarchical Classifier with Overlapping Class Groups. Expert Systems with Applications 34(1), 673–682 (2008)
10. Podolak, I.T., Biel, S.: Hierarchical classifier. In: Wyrzykowski, R., Dongarra, J., Meyer, N., Waśniewski, J. (eds.) PPAM 2005. LNCS, vol. 3911, pp. 591–598. Springer, Heidelberg (2006)
11. Podolak, I.T.: Hierarchical rules for a hierarchical classifier. In: Beliczynski, B., Dzielinski, A., Iwanowski, M., Ribeiro, B. (eds.) ICANNGA 2007. LNCS, vol. 4431, pp. 749–757. Springer, Heidelberg (2007)
12. Podolak, I.T., Roman, A.: Improving the accuracy of a hierarchical classifier (in preparation)
The Adaptive Fuzzy Meridian and Its Application to Fuzzy Clustering
Tomasz Przybyla 1, Janusz Jezewski 2, and Krzysztof Horoba 2
1 Silesian University of Technology, Institute of Electronics, ul. Akademicka 16, 44-101 Gliwice, Poland, [email protected]
2 Institute of Medical Technology and Equipment ITAM, Department of Biomedical Informatics, Roosvelta Str. 118, 41-800 Zabrze, Poland, [email protected], [email protected]
Summary. Fuzzy clustering methods are useful in the data mining field of applications. In this paper a new clustering method that deals with data described by the meridian distribution is presented. The fuzzy meridian is used as the cluster prototype. A simple computation method is given for the fuzzy meridian as well as for the meridian medianity parameter. A numerical example illustrates the performance of the proposed method.
1 Introduction Robust statistics are designed to be resistant to outliers. The location estimators belong to the class of maximum likelihood (ML) estimators, which have been developed in the theory of robust statistics [1]. Robust nonlinear estimators are critical for applications involving impulsive processes (e.g. ocean acoustic noise), where heavy-tailed non-Gaussian distributions model such signals. Robustness means that the performance of an algorithm should not be affected significantly by small deviations from the assumed model. The algorithm should not deteriorate drastically due to noise and outliers. Methods that are able to tolerate noise and outliers in the data have become very popular [2], [3], [4]. Traditional clustering algorithms can be divided into two main categories [6], [7], [8]: hierarchical and partitional. In hierarchical clustering, the number of clusters need not be specified a priori. Moreover, the problems due to initialization and local minima do not arise. However, hierarchical methods cannot incorporate a priori knowledge about the global shape or size of clusters, since they consider only local neighbors in each step [5]. Prototype-based partitional clustering methods can be classified into two classes: hard (or crisp) methods and fuzzy methods. In the hard clustering methods, every data case belongs to only one cluster. In the fuzzy
clustering methods every data point belongs to every cluster. Fuzzy clustering algorithms can deal with overlapping cluster boundaries. The meridian distribution has been proposed in [9]. The proposed distribution describes a random variable formed as the ratio of two independent zero-mean Laplacian distributed random variables. The ML estimate of the location parameter for the meridian distribution is called the sample meridian. An assignment of nonnegative weights to the data set generalizes the sample meridian to the weighted meridian. The paper is organized as follows. Section 2 contains the definition of the meridian as well as its extension to fuzzy sets. The proposed aFCMer clustering algorithm is presented in Section 3. Section 4 shows the results obtained in a numerical experiment. Conclusions complete the paper.
2 Fuzzy Meridian The random variable formed as the ratio of two independent zero-mean Laplacian distributed random variables is referred to as the meridian distribution [9]. The form of the proposed distribution is given by

f(x; \delta) = \frac{1}{2}\,\frac{\delta}{(\delta + |x|)^2}.   (1)
For a given set of N independent samples x_1, x_2, ..., x_N, each obeying the meridian distribution with the common scale parameter δ, the sample meridian \hat{\beta} is given by

\hat{\beta} = \arg\min_{\beta \in \mathbb{R}} \sum_{i=1}^{N} \log\left[\delta + |x_i - \beta|\right] = \mathrm{meridian}\left\{ x_i |_{i=1}^{N} ; \delta \right\},   (2)

where δ is the medianity parameter. The sample meridian \hat{\beta} is an ML estimate of location for the meridian distribution. The sample meridian can be generalized to the weighted meridian by assigning nonnegative weights to the input samples. The weights associated with the data points may be interpreted as membership degrees. Hence, in such an interpretation of the weights, the weighted meridian becomes the fuzzy meridian. So, the fuzzy meridian is given by

\hat{\beta} = \arg\min_{\beta \in \mathbb{R}} \sum_{i=1}^{N} \log\left[\delta + u_i |x_i - \beta|\right] = \mathrm{meridian}\left\{ u_i \ast x_i |_{i=1}^{N} ; \delta \right\}.   (3)

In the case of the meridian, the terms weighted and fuzzy will be used interchangeably in the rest of this paper.
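As a minimal illustration of (3) (not taken from the paper), note that the cost Σ_i log[δ + u_i|x_i − β|] is concave between consecutive sample points, so its minimum is attained at one of the samples; under that assumption the weighted meridian can be computed by evaluating the cost at each sample, as in the Python sketch below (function names are hypothetical).

```python
import math

def fuzzy_meridian(x, u, delta):
    """Weighted (fuzzy) meridian of samples x with membership weights u, Eq. (3).
    Assumes the minimizer lies on one of the samples (piecewise-concave cost)."""
    def cost(beta):
        return sum(math.log(delta + ui * abs(xi - beta)) for xi, ui in zip(x, u))
    return min(x, key=cost)

# Small check of the median property (4): for a large delta the fuzzy meridian
# behaves like a median and stays robust to the single outlier below.
samples = [0.9, 1.1, 1.0, 8.0]
weights = [1.0, 1.0, 1.0, 1.0]
print(fuzzy_meridian(samples, weights, delta=100.0))   # close to 1.0
```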
Fuzzy Meridian Properties. The behavior of the fuzzy meridian depends significantly on the value of its medianity parameter δ. It can be shown that for large values of δ the fuzzy meridian is equivalent to the fuzzy median [11]. For a given data set of N independent samples x_1, x_2, ..., x_N and the assigned membership degrees u_1, u_2, ..., u_N, the following equation holds true:

\lim_{\delta \to \infty} \hat{\beta} = \lim_{\delta \to \infty} \mathrm{meridian}\left\{ u_i \ast x_i |_{i=1}^{N} ; \delta \right\} = \mathrm{median}\left\{ u_i \ast x_i |_{i=1}^{N} ; \delta \right\}.   (4)

This property is called the median property. The second case, when δ tends to zero, is called the weighted-mode (fuzzy-mode) meridian. In this case the weighted meridian \hat{\beta} is equal to one of the most repeated values in the sample set. Furthermore,

\lim_{\delta \to 0} \hat{\beta} = \arg\min_{x_j \in M} \left[ \frac{1}{(u_j)^r} \prod_{\substack{i=1 \\ x_i \neq x_j}}^{N} u_i |x_i - x_j| \right],   (5)

where M is the set of the frequently repeated values and r is the number of occurrences of a member of M in the sample set. Proofs of these properties can be found in [9].
3 Fuzzy c-Meridian Clustering Method Let us consider a clustering category in which partitions of the data set are built on the basis of some performance index, also known as an objective function [8]. The minimization of a certain objective function can be considered as an optimisation approach leading to a suboptimal configuration of the clusters. The main design challenge is in formulating an objective function that is capable of reflecting the nature of the problem so that its minimization reveals a meaningful structure in the data set. The proposed method is an objective functional based on fuzzy c-partitions of the finite data set [7]. The proposed objective function can be seen as an extension of the classical functional of the within-group sum of absolute error. The objective function of the proposed method can be described in the following way:

J_m(\mathbf{U}, \mathbf{V}) = \sum_{i=1}^{c} \sum_{k=1}^{N} \sum_{l=1}^{p} \log\left[\delta + u_{ik}^{m} |x_k(l) - v_i(l)|\right],   (6)

where c is the number of clusters, N is the number of the data, and p is the number of features. δ is the medianity parameter, u_ik ∈ U is the fuzzy partition matrix, x_k(l) represents the l-th feature of the k-th input data from the data set, 1 ≤ l ≤ p, and m is the fuzzifying exponent called the fuzzifier.
A constant value of the δ parameter means that the behavior (i.e. the influence of the outliers) of the fuzzy meridian is exactly the same for each feature and each cluster. When a medianity parameter for each class and for each feature is introduced, the cost function of the proposed clustering method becomes

J_m(\mathbf{U}, \mathbf{V}) = \sum_{i=1}^{c} \sum_{k=1}^{N} \sum_{l=1}^{p} \log\left[\delta_{il} + u_{ik}^{m} |x_k(l) - v_i(l)|\right].   (7)
The different values of the medianity parameter allow a more accurate estimation of the cluster prototypes. The optimization of the objective function J_m is carried out with respect to the partition matrix and the prototypes of the clusters. The first step is a constraint-based optimization, which involves Lagrange multipliers to accommodate the constraints on the membership grades [7]. The columns of the partition matrix U are independent, so the minimization of the objective function (6) can be described as

J_m(\mathbf{U}, \mathbf{V}) = \sum_{k=1}^{N} \sum_{i=1}^{c} \sum_{l=1}^{p} \log\left[\delta + u_{ik}^{m} |x_k(l) - v_i(l)|\right] = \sum_{k=1}^{N} J_k.   (8)
The minimization of (7) can be reduced to the minimization of the independent components J_k, 1 ≤ k ≤ N. When a linear transformation L(·) is applied to the expression δ_{il} + u_{ik}^m |x_k(l) − v_i(l)|, its variability range is changed to (0, 2], i.e.:

0 < L\left(\delta_{il} + u_{ik}^{m} |x_k(l) - v_i(l)|\right) \leq 2.   (9)

By means of the above equation, and representing the logarithm function by its power series, the minimization of the objective function can be reduced to the following expression:

J_k = \sum_{i=1}^{c} \sum_{l=1}^{p} \left[\gamma_{il} + u_{ik}^{m} d_{ik}(l) - 1\right],   (10)

where

\gamma_{il} + u_{ik}^{m} d_{ik} = L\left(\delta_{il} + u_{ik}^{m} |x_k(l) - v_i(l)|\right), \quad d_{ik} = |x_k - v_i|.

When the Lagrange multiplier optimization method is applied to equation (10), we obtain:

J_k(\lambda, \mathbf{u}_k) = \sum_{i=1}^{c} \sum_{l=1}^{p} \left[\gamma_{il} + u_{ik}^{m} d_{ik} - 1\right] - \lambda\left(\sum_{i=1}^{c} u_{ik} - 1\right),   (11)
where λ is the Lagrange multiplier, u_k is the k-th column of the partition matrix, and the term λ(\sum_{i=1}^{c} u_{ik} − 1) comes from the definition of the partition matrix U [8]. When the gradient of (11) is equal to zero, then, for the sets defined for every 1 ≤ k ≤ N as

I_k = \{ i \mid 1 \leq i \leq c;\ \|x_k - v_i\|_1 = 0 \}, \qquad \tilde{I}_k = \{1, 2, \ldots, c\} - I_k,

the values of the partition matrix are described, for all 1 ≤ i ≤ c and 1 ≤ k ≤ N, by:

u_{ik} = \begin{cases} \left[ \sum_{j=1}^{c} \left( \frac{\|x_k - v_i\|_1}{\|x_k - v_j\|_1} \right)^{1/(m-1)} \right]^{-1} & \text{if } I_k = \emptyset \\ 0 & \text{if } i \in \tilde{I}_k \\ 1 & \text{if } i \in I_k \neq \emptyset \end{cases}   (12)
β∈IR
N
log [δil + |xi − β|] ,
(13)
i=1
where: i is the cluster number 1 ≤ i ≤ c and l is the component (feature) number 1 ≤ l ≤ p. Estimation of the Medianity Parameter One of the results obtained from the clustering procedure is the partition matrix. The membership grades can be used during the determination of the influence of the input data samples on the estimation of the data distribution in the clusters. In the method of the density estimation proposed by Parzen [10], the influence of the each sample on the estimated distribution is the same and is equal to 1/N , where N is the number of samples. The values of the estimated density function can be computed as follows N 1 1 x − xi ˆ f(x) = , K N i=1 h h
(14)
where N is the number of samples, h is the smooth parameter, and K(·) is the kernel function. The equation (14) is changed when the real, nonnegative weights ui (1 ≤ i ≤ N ) are introduced into the following form:
\hat{f}_w(x) = \left[ \sum_{i=1}^{N} \frac{u_i}{h} K\left(\frac{x - x_i}{h}\right) \right] \Big/ \sum_{i=1}^{N} u_i,   (15)

where \hat{f}_w(x) is the weighted estimate of the density function. A cost function can be built for the input data samples of the meridian distribution with the medianity parameter δ_0:

\Psi_{\delta}(x) = \left\| \hat{f}_w(x) - f(x; \delta_0) \right\|_L,   (16)

where \hat{f}_w(x) is the weighted estimate of the density function, f(x; δ_0) is the meridian density function, and \|\cdot\|_L is the L-norm. The value of the medianity parameter can be computed using the following equation:

\hat{\delta}_0 = \arg\min_{\delta \in \mathbb{R}} \sum_{i=1}^{N} \Psi_{\delta}(x_i) = \arg\min_{\delta \in \mathbb{R}} \sum_{i=1}^{N} \left\| \hat{f}_w(x_i) - f(x_i; \delta) \right\|_L.   (17)

Let the norm \|\cdot\|_L be the L1 norm; then the solution of (17) is the least median solution [1]. The least median solution is less sensitive to outliers than the least squares solution.

The Estimation of the Medianity Parameter
1. For the input data samples x_1, x_2, ..., x_N and the assigned weights u_1, u_2, ..., u_N, fix the initial value δ, the kernel function K(·), the smoothing parameter h, the threshold ε and the iteration counter l = 1,
2. compute the weighted meridian minimizing (3),
3. calculate the medianity parameter δ minimizing (17),
4. if |δ^(l) − δ^(l−1)| < ε then STOP, otherwise l = l + 1 and go to (2).

Clustering Data with the Adaptive Fuzzy c-Meridian Method
The proposed clustering method (aFCMer) can be described as follows:
1. For the given data set X = {x_1, ..., x_N}, where x_i ∈ R^p, fix the number of clusters c ∈ {2, ..., N}, the fuzzifying exponent m ∈ [1, ∞) and assume the tolerance limit ε. Initialize randomly the partition matrix U and fix the value of the medianity parameter δ, fix l = 0,
2. calculate the prototype values V as the fuzzy meridians. The fuzzy meridian has to be calculated for each feature of v_i based on (13) and the method of medianity parameter estimation (17),
3. update the partition matrix U using (12),
4. if \|U^{(l+1)} - U^{(l)}\| < ε then STOP the clustering algorithm, otherwise l = l + 1 and go to (2).
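For illustration only, a compact sketch of the aFCMer loop is given below; it is not the authors' implementation. The medianity parameter is kept fixed per cluster and feature (the adaptive estimation via (17) is omitted for brevity), the prototype search weights the samples by u_ik^m in line with the objective (7), and all names are hypothetical.

```python
import math
import random

def weighted_meridian(values, weights, delta):
    """Fuzzy meridian of a single feature column (cf. Eqs. (3) and (13))."""
    cost = lambda b: sum(math.log(delta + w * abs(v - b)) for v, w in zip(values, weights))
    return min(values, key=cost)

def afcmer(X, c, m=2.0, delta=0.5, eps=1e-5, max_iter=100):
    """Sketch of the c-meridian clustering loop with a fixed medianity parameter."""
    N, p = len(X), len(X[0])
    U = [[random.random() for _ in range(N)] for _ in range(c)]
    for k in range(N):                                    # normalise columns of U
        s = sum(U[i][k] for i in range(c))
        for i in range(c):
            U[i][k] /= s
    V = [[0.0] * p for _ in range(c)]
    for _ in range(max_iter):
        for i in range(c):                                # prototypes: per-feature meridians
            w = [U[i][k] ** m for k in range(N)]
            for l in range(p):
                V[i][l] = weighted_meridian([x[l] for x in X], w, delta)
        U_new = [[0.0] * N for _ in range(c)]
        for k in range(N):                                # membership update, Eq. (12), L1 distance
            d = [sum(abs(X[k][l] - V[i][l]) for l in range(p)) for i in range(c)]
            if min(d) == 0.0:
                for i in range(c):
                    U_new[i][k] = 1.0 if d[i] == 0.0 else 0.0
            else:
                for i in range(c):
                    U_new[i][k] = 1.0 / sum((d[i] / d[j]) ** (1.0 / (m - 1.0)) for j in range(c))
        change = max(abs(U_new[i][k] - U[i][k]) for i in range(c) for k in range(N))
        U = U_new
        if change < eps:
            break
    return U, V
```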
4 Numerical Experiments In the numerical experiments m = 2 and the tolerance limit ε = 10^{-5} have been chosen. The Laplace kernel function has been used as the kernel. The smoothing parameter has been fixed to h = 0.2. For a computed set of prototype vectors v, the clustering accuracy has been measured as the Frobenius norm distance between the true centers and the prototype vectors. The matrix A is created as A = μ − v, where the Frobenius norm \|A\|_F is

\|\mathbf{A}\|_F = \left( \sum_{j,k} A_{j,k}^2 \right)^{1/2}.

The familiar fuzzy c-means method (FCM) proposed by Bezdek [7] has been used as the reference clustering method.

Synthetic Data with Various Numbers of Outliers. Fig. 1 shows the data set of 39 samples used in the first experiment. It consists of three well-separated radial clusters centered at μ_1 = [1, 2]^T, μ_2 = [3, −2]^T and μ_3 = [5, 1.5]^T, and a varying number of outliers at the indicated position x = [20, 20]^T. The purpose of this experiment is to investigate sensitivity to outliers. The obtained results are presented in Table 1.

Heavy-Tailed Groups of Data. The second example involves three heavy-tailed and overlapping groups of data. All the data have been generated by a pseudo-random generator.
Fig. 1. “Three cluster data” scatterplot
Table 1. The differences among the computed cluster centers, \|[\mu_1\ \mu_2\ \mu_3] - [v_1\ v_2\ v_3]\|_F

Outliers  aFCMer  FCM       Outliers  aFCMer  FCM
0         0.0     0.0140    7         0.0     23.8387
1         0.0     0.5939    8         0.0     23.8393
2         0.0     1.1553    9         0.0     23.8396
3         0.0     23.8330   10        0.0     23.8399
4         0.0     23.8354   11        0.0     23.8402
5         0.0     23.8371   12        3.6742  23.8404
6         0.0     23.8381
Table 2. Prototypes of the clusters for the data with heavy tails

cluster   aFCMer v                aFCMer δ            FCM v
1st       [−2.4745  3.6653]^T     [0.0620  2.000]     [−2.2145  4.0286]^T
2nd       [1.5555  −0.0640]^T     [2.0000  31.2507]   [15.6536  112.5592]^T
3rd       [5.0967  −2.6998]^T     [2.000  0.1441]     [4.9634  −2.7977]^T
Fnorm     2.1403                                      113.5592
The first group has been generated with the Laplace distribution, the second with the Cauchy distribution and the third with the Student's t distribution. The true group centers are: [−3, 5]^T, [0, 0]^T, and [5, −3]^T. The prototypes obtained for the proposed method and the reference method are shown in Table 2. The bottommost row shows the Frobenius norm between the true centers and the obtained centers.
5 Conclusions In many cases, real data are corrupted by noise and outliers. Hence, clustering methods should be robust to noise and outliers. In this paper the adaptive fuzzy c-meridian method has been presented. The fuzzy meridian has been used as the cluster prototype. The value of the fuzzy meridian depends on the data samples and the assigned membership grades. Moreover, the value of the fuzzy meridian depends on the medianity parameter. Therefore, a method for estimating the medianity parameter using the data samples and the assigned membership grades has also been proposed. The word adaptive stands for the automated estimation of the medianity parameter. For the simple data set, the results obtained from the proposed method and the reference method are similar, but the results are quite different for corrupted data. The data set that includes samples of different distributions has been partitioned correctly according to our expectations. The updated
membership formula of the proposed method is very similar to the formula in the FCM method. Hence, the existing modifications of the FCM method (e.g. clustering with partial supervision or conditional clustering) can be directly applied to the adaptive fuzzy c-meridian method. Current work addresses the problem of the performance of the adaptive fuzzy meridian estimation for large data sets.
Acknowledgment This work was partially supported by the Ministry of Science and Higher Education resources in 2008–2010 under Research Project N N158 335935.
References
1. Huber, P.: Robust Statistics. Wiley, New York (1981)
2. Dave, R.N., Krishnapuram, R.: Robust Clustering Methods: A Unified View. IEEE Trans. on Fuzzy Systems 5, 270–293 (1997)
3. Łęski, J.: An ε-Insensitive Approach to Fuzzy Clustering. Int. J. Appl. Math. Comput. Sci. 11, 993–1007 (2001)
4. Chatzis, S., Varvarigou, T.: Robust Fuzzy Clustering Using Mixtures of Student's-t Distributions. Pattern Recognition Letters 29, 1901–1905 (2008)
5. Frigui, H., Krishnapuram, R.: A Robust Competitive Clustering Algorithm With Applications in Computer Vision. IEEE Trans. Pattern Analysis and Machine Intelligence 21, 450–465 (1999)
6. Kaufman, L., Rousseeuw, P.: Finding Groups in Data. Wiley-Interscience, Chichester (1990)
7. Bezdek, J.C.: Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum, New York (1981)
8. Pedrycz, W.: Knowledge-Based Clustering. Wiley-Interscience, Chichester (2005)
9. Aysal, T.C., Barner, K.E.: Meridian Filtering for Robust Signal Processing. IEEE Trans. on Signal Proc. 55, 3949–3962 (2007)
10. Parzen, E.: On Estimation of a Probability Density Function and Mode. Ann. Math. Stat. 33, 1065–1076 (1962)
11. Kersten, P.R.: Fuzzy Order Statistics and Their Application to Fuzzy Clustering. IEEE Trans. on Fuzzy Sys. 7, 708–712 (1999)
Comparison of Various Feature Selection Methods in Application to Prototype Best Rules
Marcin Blachnik
Silesian University of Technology, Electrotechnology Department, Krasinskiego 8, Katowice, Poland
[email protected]

Summary. Prototype-based rules are an interesting tool for data analysis. However, most prototype selection methods, like the CFCM+LVQ algorithm, do not have embedded feature selection and require feature selection as an initial preprocessing step. The problem that appears is which of the feature selection methods should be used with the CFCM+LVQ prototype selection method, and what advantages or disadvantages of particular solutions can be pointed out. The analysis of the above problems is based on empirical data analysis1.
1 Introduction In the field of computational intelligence there exist many methods that provide good classification or regression performance, like SVM; unfortunately, they do not allow us to understand the way they make their decisions. On the other hand, fuzzy modeling can be very helpful, providing flexible tools that can mimic the data. However, such models are restricted to continuous or ordinal attributes. An alternative to both these groups are similarity based methods [1], which on the one hand are based on various machine learning techniques like the NPC (nearest prototype classifier), and on the other hand can be seen as a generalization of fuzzy rule-based systems (F-rules), leading to prototype (similarity) based logical rules (P-rules) [2]. One of the aims of any rule-based system is the comprehensibility of the obtained rules, and in P-rules systems this leads to the problem of selecting a possibly small set of prototypes. An example of that kind of algorithm that provides very good quality results is the CFCM+LVQ algorithm [3]. However, this algorithm does not have any embedded feature selection. In any rule-based system feature selection is one of the very important issues, so in this paper the combination of P-rules and various feature selection techniques is considered. Usually feature selection methods can be divided into three groups: filters - which do the feature selection independently of the inductive algorithm,
Project partially sponsored by the grant No PBU - 47/RM3/07 from the Polish Ministry of Education and Science (MNiSZW).
wrappers - where the inductive algorithm is used as the evaluation function - and embedded methods, where feature selection is built into the inductive algorithm. In the present paper various methods belonging to the first two groups are compared in application to P-rules. The next section describes the CFCM+LVQ algorithm, in section 3 different approaches to feature selection are presented, and section 4 describes our experiments. The last section concludes the paper, pointing out advantages and disadvantages of the different feature selection techniques.
2 CFCM+LVQ Algorithm The simplest prototype selection methods are obtained via clustering the dataset and taking the cluster centers as prototypes. However, in this approach the information about the mutual relations between the class distributions is ignored. A possible solution are semi-supervised clustering methods, which can acquire external knowledge such as a description of the mutual class distributions. Generally, context dependent clustering methods get additional information from an external variable f_k defined for every k-th training vector. This variable determines the so-called clustering context, describing the importance of a certain training vector. A solution used for building a P-rules system was proposed by Blachnik et al. in [3], where the f_k variable is obtained by first calculating the w_k coefficient:

w_k = \sum_{j,\, C(x_j) = C(x_k)} \|x_k - x_j\|^2 \left[ \sum_{l,\, C(x_l) \neq C(x_k)} \|x_k - x_l\|^2 \right]^{-1}   (1)
where C(x) is a function returning the class label of vector x. The obtained w_k value is normalized to fit the range [0, 1]. The w_k parameter defines the mutual position of each vector and its neighbors and, after normalization, takes values around 0 for training vectors close to homogeneous class centers, and close to 1 for vectors x_k far from class centers and close to vectors from the opposite class. In other words, large values of w_k represent a position close to the border, and very large values, close to 1, indicate outliers. The coefficient w_k defined in this way requires further preprocessing to determine the area of interest, i.e. the grouping context for the CFCM algorithm. The preprocessing step can be interpreted as defining a linguistic value for the w_k variable in the fuzzy set sense. In [3] the Gaussian function has been used:

f_k = \exp\left(-\gamma (w_k - \mu)^2\right)   (2)

where μ defines the position of the clustering area, and γ is the spread of this area. The prototypes obtained via clustering can be further optimized using an LVQ based algorithm. This combination of two independent algorithms can be seen as a tandem which allows obtaining an accurate model with a small set of prototypes. Another problem facing P-rules systems is determining the number of
prototypes. It can be solved with appropriate computational complexity by observing that in prototype selection algorithms the overall system accuracy mostly depends on the classification accuracy of the least accurate class. On this basis we can conclude that an improvement in the overall classification accuracy can be obtained by improving the worst classified class. This assumption is the basis for the racing algorithm, where a new prototype is iteratively added to the class with the highest error rate (C_ierr) [3].
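To make the context construction of Eqs. (1)-(2) concrete, the sketch below computes w_k as the ratio of within-class to between-class squared distances, rescales the values to [0, 1] (min-max scaling is assumed here, since the paper does not specify the normalisation), and applies the Gaussian context function. Names and parameter values are illustrative only.

```python
import numpy as np

def clustering_context(X, y, mu=0.6, gamma=8.0):
    """Context weights f_k for CFCM; X is an (N, d) array, y holds integer class labels."""
    N = len(X)
    w = np.empty(N)
    for k in range(N):
        same = np.sum(np.sum((X[y == y[k]] - X[k]) ** 2, axis=1))    # within-class distances
        other = np.sum(np.sum((X[y != y[k]] - X[k]) ** 2, axis=1))   # between-class distances
        w[k] = same / other                                          # Eq. (1)
    w = (w - w.min()) / (w.max() - w.min() + 1e-12)                  # assumed [0, 1] scaling
    return np.exp(-gamma * (w - mu) ** 2)                            # Eq. (2)
```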
3 Feature Selection Methods
3.1 Ranking Methods
Ranking methods are among the fastest methods for feature selection problems. They are based on determining a coefficient J(·) describing the relation between each input variable f and the output variable C. This coefficient is computed for each attribute, and then the attributes are sorted according to the value of J(·) from the most to the least important variable. In the last step, according to some previously selected accuracy measure, the best n features are selected to form the final feature subset used for building the final model. Ranking methods use different coefficients for estimating the quality of each feature. One possible solution are criterion functions based on statistical and information theory coefficients. One example of such metrics is the normalized information gain, also known as the asymmetric dependency coefficient, ADC [4], described as

ADC(C, f) = \frac{MI(C, f)}{H(C)}   (3)
where H(C) and H(f) are the class and feature entropy, and MI(C, f) is the mutual information between class C and feature f, defined by Shannon [5] as:

H(C) = -\sum_{i=1}^{c} p(C_i) \log_2 p(C_i), \quad H(f) = -\sum_{x} p(f = x) \log_2 p(f = x), \quad MI(C, f) = -H(C, f) + H(C) + H(f)   (4)

In this formula the sum over x for feature f requires discrete values; numerically ordered features require replacing the sum with an integral, or initial discretization, to estimate the probabilities p(f = x). Another metric was proposed by Setiono [6] and is called the normalized gain ratio (5):

U_S(C, f) = \frac{MI(C, f)}{H(f)}   (5)

Another possible normalization of the information gain is a metric based on the feature-class entropy (6):

U_H(C, f) = \frac{MI(C, f)}{H(f, C)}   (6)

where H(f, C) is the joint entropy of variable f and class C.
260
M. Blachnik
Mantaras [7] suggested some criterion DML fulfilling distance metric axiom as (7). DML (fi , C) = H(fi |C) + H(C|fi ) (7) where H(fi |C) and H(C|fi ) are conditional entropies defined by Shanon [5] as H(X|Y ) = H(X, Y ) − H(Y ). The index of weighted joined entropy was proposed by Chi [8] as (8) Ch(f ) = −
N
p(f = xk )
K
p(f = xk , Ci ) lg2 p(f = xk , Ci )
(8)
i=1
k=1
An alternative for already proposed information theory methods is χ2 statistics, which measures relations between two random variables - here defined as between feature and class. χ2 statistic can be defined as (9). χ2 (f, C) =
(p(f = xj , Ci ) − p(f = xj ) ∗ p(Ci ))2 p(f = xj )p(Ci ) ij
(9)
where p(·) is the appropriate probability. High values of χ2 represent strong correlation between certain feature and class variable, which can be used for feature ranking. 3.2
Search Based Feature Selection
An advantage of search based feature selection methods over rankings are usually more accurate results. These methods are based on both stochastic and heuristic searching strategies, what implies higher computational complexity, which for very large datasets (ex. as provided during NIPS’2003 challenge) that have a few thousands of variables may limit some algorithms usability. Typical solutions of search based feature selection are forward /backward selection methods. Forward Selection Forward selection starts from an empty feature set and in each iteration adds one new attribute, form the set of remaining. One that is added is this feature which maximizes certain criterion usually classification accuracy. To ensure the proper outcome of adding a new feature to the feature subset, quality is measured in the crossvalidation process. Backward Elimination Backward elimination algorithm differs from forward selection by starting from the full feature set, and iteratively removes one by one feature. In each iteration only one feature is removed, which mostly affects overall model accuracy, as long as the accuracy stops increasing.
Comparison of Various Feature Selection Methods
3.3
261
Embedded Feature Selection Algorithms Used as Filters
Some data mining algorithms have built-in feature selection (embedded feature selection). An example of this kind of solutions are decision trees which automatically determine optimal feature subset while building the tree. These methods can be used also as external feature selection methods, called feature filters by extracting knowledge of selected attributes subset by these data mining models. In decision trees it is equivalent to visiting every node of the tree and acquiring attributes considered for testing. This approach used as an external tool has one important advantage over ranking methods - it considers not only relations between one input attribute and the output attribute, but also searches locally for attributes that allow good local discrimination.
4 Datasets and Results 4.1
Datasets Used in the Experiments
To verify quality of previously described feature selection techniques experiments have been performed on several datasets from UCI repository From this repository 4 classification datasets have been used for validating ranking methods and 10 datasets for search based methods and tree based feature selection. Selected datasets are: Appendicitis, Wine, Pima Indian diabetes, Ionosphere, Cleveland Heart Disease, Iris, Sonar, BUPA liver disorders, Wisconsin breast cancer, Lancet. 4.2
Experiments and Results
To compare the algorithms described above, empirical tests were performed on the datasets presented in the previous section. In all tests the Infosel++ library was used to perform feature selection, and as the induction algorithm the CFCM+LVQ algorithm, self-implemented in the Matlab Spider Toolbox, was utilized.

Ranking Methods. The results obtained for the various ranking methods displayed different behavior and different ranking orders. These differences appear not only for different ranking coefficients but also for the type of discretization performed, which was utilized to estimate all probabilities like p(f = x_j, C_i). As discretization, a simple equal-width method has been used with 5, 10 and 15 bins. Accordingly, for the final results comparison we have used the number of bins which maximizes classification accuracy, for each specific coefficient and each dataset. The obtained results for the CFCM+LVQ algorithm are presented in Fig. 1. Each of these figures presents the relation between the number n of best features and the obtained classification accuracy.
Fig. 1. Results of ranking algorithms and the CFCM+LVQ P-rules system: classification accuracy [%] against the number of best features for (a) Ionosphere, (b) Heart Disease, (c) Pima Indians and (d) Wisconsin Breast Cancer, comparing the ADC, U_S, U_H, D_ML, Chi, χ² and correlation ranking coefficients.
Search Based and Embedded Feature Selection. As for the presented results of the ranking methods, a two-stage testing process was also used here. In the first step, feature selection algorithms such as forward selection, backward elimination and selection based on decision trees were used to select the best feature subset from the whole dataset. In the second stage the classification algorithm was tested using a crossvalidation test. The collected results for the CFCM+LVQ P-rules system are presented in Table 1, which also provides the number of selected prototypes (l) and the number of selected features (f). As the accuracy measure optimized while selecting the best feature subset, only pure classification accuracy has been used.

Table 1. Comparison of feature selection algorithms for the P-rules system

                    Backward elimination       Forward selection          Tree based selection
Dataset             Accuracy        f   l      Accuracy        f   l      Accuracy        f   l
Appendicitis        85.70 ± 13.98   6   2      84.86 ± 14.38   4   2      84.86 ± 15.85   2   2
Wine                97.25 ± 2.90    11  3      95.06 ± 4.79    5   7      92.57 ± 6.05    6   3
Pima Indians        77.22 ± 4.36    7   3      76.96 ± 3.01    4   3      75.65 ± 2.43    2   4
Ionosphere          87.81 ± 6.90    32  6      92.10 ± 4.25    5   6      87.57 ± 7.11    2   4
Clev. Heart Dis.    85.49 ± 5.59    7   3      84.80 ± 5.23    3   2      84.80 ± 5.23    3   2
Iris                95.33 ± 6.32    3   5      97.33 ± 3.44    2   4      97.33 ± 3.44    2   4
Sonar               66.81 ± 20.66   56  4      75.62 ± 12.97   4   3      74.62 ± 12.40   1   2
Liver disorder      67.24 ± 8.79    5   2      68.45 ± 6.40    4   2      66.71 ± 9.05    2   2
Breast Cancer       97.82 ± 1.96    7   3      97.81 ± 2.19    5   3      97.23 ± 2.51    5   4
Lancet              95.66 ± 2.36    7   4      94.65 ± 2.64    5   4      93.33 ± 3.20    4   3
5 Conclusions In the problem of combining P-rules systems with feature selection techniques it is possible to observe a trend related to the simplicity and comprehensibility of rule systems. This problem is related to the model complexity versus accuracy dilemma. The analysis of the provided results shows that forward selection and backward elimination outperform the other methods, leading to the conclusion that search based feature selection is the most robust. However, these methods have a big drawback when considering computational complexity. The simplest search methods require invoking the evaluation function 0.5k(2n − (k + 1)) times, where n is the number of features and k is the number of iterations of adding or removing a feature. This is an important limitation on the application of these methods. Another problem of the simple search strategies like forward or backward selection is getting stuck in local minima (usually the first minimum). It can be observed on the Liver disorder dataset, where feature selection based on decision trees gives better results than both search methods. The local minima problem can be solved using more sophisticated search algorithms; however, this requires even more computational effort. The computational complexity problem does not appear in ranking methods, whose complexity is a linear function of the number of features. However, these methods may be unstable in terms of the obtained accuracy. This is shown in Fig. 1a and Fig. 1b, where the accuracy fluctuates with the number of selected features. Another important problem that appears in ranking based feature selection is redundant features, which are impossible to remove during training. A possible solution is the FCBF algorithm [9] or the algorithm proposed by Biesiada et al. in [10]. The results obtained for the ranking methods do not allow selecting the best ranking coefficient, so a possible solution is ranking committees, where the feature weight is related to the frequency of appearing at a certain position in different rankings. Another possibility is selecting ranking coefficients with the lowest computational complexity, like the correlation coefficient, which is very cheap to estimate and does not require prior discretization (discretization was used for all entropy based ranking coefficients). As a tool for P-rules systems, ranking algorithms should be used only for large datasets where the computational complexity of search based methods is significantly too high. A compromise solution between ranking and search based methods can be obtained with the filter approach realized as features
selected by decision trees. This approach provides good classification accuracy and low complexity, which is nm log(m), and is independent of the final classifier complexity.
Acknowledgment The author would like to thank prof. W. Duch for his help in developing P-rules based systems.
References
1. Duch, W.: Similarity based methods: a general framework for classification, approximation and association. Control and Cybernetics 29, 937–968 (2000)
2. Duch, W., Blachnik, M.: Fuzzy rule-based systems derived from similarity to prototypes. In: Pal, N., Kasabov, N., Mudi, R., Pal, S., Parui, S. (eds.) ICONIP 2004. LNCS, vol. 3316, pp. 912–917. Springer, Heidelberg (2004)
3. Blachnik, M., Duch, W., Wieczorek, T.: Selection of prototypes rules context searching via clustering. In: Rutkowski, L., Tadeusiewicz, R., Zadeh, L.A., Żurada, J.M. (eds.) ICAISC 2006. LNCS (LNAI), vol. 4029, pp. 573–582. Springer, Heidelberg (2006)
4. Shridhar, D., Bartlett, E., Seagrave, R.: Information theoretic subset selection. Computers in Chemical Engineering 22, 613–626 (1998)
5. Shannon, C., Weaver, W.: The Mathematical Theory of Communication. University of Illinois Press (1946)
6. Setiono, R., Liu, H.: Improving backpropagation learning with feature selection. The International Journal of Artificial Intelligence, Neural Networks, and Complex Problem-Solving Technologies 6, 129–139 (1996)
7. de Mantaras, R.L.: A distance-based attribute selecting measure for decision tree induction. Machine Learning 6, 81–92 (1991)
8. Chi, J.: Entropy based feature evaluation and selection technique. In: Proc. of the 4th Australian Conf. on Neural Networks (ACNN 1993) (1993)
9. Yu, L., Liu, H.: Feature selection for high-dimensional data: A fast correlation-based filter solution. In: Proceedings of the Twentieth International Conference on Machine Learning (2003)
10. Duch, W., Biesiada, J.: Feature selection for high-dimensional data: A Kolmogorov-Smirnov correlation-based filter solution. In: Advances in Soft Computing, pp. 95–104. Springer, Heidelberg (2005)
A Novel Ensemble of Scale-Invariant Feature Maps
Bruno Baruque 1 and Emilio Corchado 2
1 University of Burgos, [email protected]
2 University of Burgos, [email protected]
Summary. A novel method for improving the training of some topology preserving algorithms, such as the Scale Invariant Feature Map (SIM) and the Maximum Likelihood Hebbian Learning Scale Invariant Map (MAX-SIM), is presented and analyzed in this study. It is called Weighted Voting Superposition (WeVoS), providing two new versions, the WeVoS-SIM and the WeVoS-MAX-SIM. The method is based on the training of an ensemble of networks and their combination to obtain a single one, including the best features of each of the networks in the ensemble. To accomplish this combination, a weighted voting process takes place between the units of the maps in the ensemble in order to determine the characteristics of the units of the resulting map. For comparison purposes, these new models are compared with the original models, the SIM and MAX-SIM. The models are tested on an artificial data set. Three quality measures have been applied to each model in order to present a complete study of their capabilities. The results obtained confirm that the novel models presented in this study, based on the application of WeVoS, can outperform the classic models in terms of the organization of the presented information.
1 Introduction Among the great variety of tools for multi-dimensional data visualization, several of the most widely used are those belonging to the family of topology preserving maps [16]. Two interesting models are the Scale Invariant Map (SIM) [8] and the Maximum Likelihood Scale Invariant Map (MAX-SIM) [5]. Both are designed to perform at their best with radial data sets, since both create a mapping where each neuron captures a "pie slice" of the data according to the angular distribution of the input data. The main difference between this mapping and the SOM is that this mapping is scale invariant: when the SOM is trained, it approximates a Voronoi tessellation of the input space [17], whereas the scale invariant map partitions the data according to its angular distribution. The main problem of all these neural network algorithms is that they are rather unstable [11]. The use of ensembles is one of the most widespread techniques
266
B. Baruque and E. Corchado
for increasing the stability and performance of an analysis model [3,12]. This meta-algorithm consists on training several slightly different models over the same dataset and relying on their combined results, rather than in the results of a single model. There are many combination algorithms in classification ensembles literature; but few of them, up to the knowledge of the authors, is directly applicable to topology preserving algorithms. Several algorithms for topographic maps summarization have been previously proposed [19, 10, 22], although there are some characteristics of the topology preserving models that have not been taken into account. In this research we present and analyse a new fusion algorithm called Weighted Voting Superposition (WeVoS) applied for the first time to the SIM and MAX-SIM. The study reports the application of these algorithms under one artificial dataset, created accordingly to the main characteristic of the models under study.
2 Topology Preserving Maps
The main target of the family of topology preserving maps [15] is to produce low-dimensional representations of high-dimensional datasets while maintaining the topological features of the input space. The Scale Invariant Map (SIM) [8] is built on a simple network which uses negative feedback of activation and simple Hebbian learning to self-organize. By adding neighbourhood relations to its learning rule, it creates a feature map which retains the angular properties of the input data, i.e. vectors of similar directions are classified similarly regardless of their magnitude. A SIM is also a regular array of nodes arranged on a lattice. Competitive learning and a neighbourhood function are used in a similar way as in the SOM. The input data (x) is fed forward to the outputs yi in the usual way. After selection of a winner, the winner c is deemed to be firing (yc = 1) and all other outputs are suppressed (yi = 0 for i ≠ c). The winner's activation is then fed back through its weights and subtracted from the inputs to calculate the error or residual e, as shown in Eq. 1:

e = x − Wc · yc ,  (yc = 1)     (1)
The Maximum Likelihood Scale Invariant Map (MAX-SIM) [5] is an extension of the Scale Invariant Map (SIM) based on the application of the Maximum Likelihood Hebbian Learning (MLHL) [6, 9]. The main difference with the SIM is that the MLHL is used to update the weights of all nodes in the neighbourhood of the winner, once this has been updated as in Eq. 1. This can be expressed as in Eq. 2:

Wi = hci · η · sign(e − Wc) |e − Wc|^(p−1) ,  ∀i ∈ Nc     (2)
By giving different values to p, the learning rule is optimal for different probability density functions of the residuals. hci is the neighbourhood function
as in the case of the SOM, and Nc is the number of output neurons. Finally, η represents the learning rate. During the training of the SIM or the MAX-SIM, the weights of the winning node are fed back as inhibition at the inputs and then, in the case of the MAX-SIM, MLH learning is used to update the weights of all nodes in the neighbourhood of the winner, as explained above.
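To make the update rules above concrete, the following minimal Python sketch implements one training step in the spirit of Eqs. 1 and 2. It is only an illustration: the Gaussian neighbourhood, the winner-selection rule and all function and parameter names are assumptions of this sketch, not details taken from the original SIM/MAX-SIM papers.

```python
import numpy as np

def train_step(W, x, eta=0.1, sigma=1.0, p=1.5, max_sim=False):
    """One assumed training step of a 1-D SIM / MAX-SIM lattice W (units x features)."""
    # winner selection: unit whose weight vector responds most strongly to the input
    c = int(np.argmax(W @ x))
    # Eq. 1: feed the winner's weights back and compute the residual
    e = x - W[c]
    # Gaussian neighbourhood over lattice positions (an assumed choice)
    idx = np.arange(W.shape[0])
    h = np.exp(-((idx - c) ** 2) / (2.0 * sigma ** 2))
    if max_sim:
        # Eq. 2 (MLHL-style update): sign(e - Wc)|e - Wc|^(p-1), scaled by h and eta
        d = e - W[c]
        W += eta * h[:, None] * np.sign(d) * np.abs(d) ** (p - 1)
    else:
        # plain SIM: simple Hebbian-style update towards the residual
        W += eta * h[:, None] * e
    return W

# illustrative usage on random data
rng = np.random.default_rng(0)
W = train_step(rng.random((10, 2)), rng.random(2), max_sim=True)
```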
2.1 Features to Analyse
Several quality measures have been proposed in the literature to study how reliably topology preserving models represent the dataset they have been trained with [20, 21]. There is no global, unified measure, but rather a set of complementary ones, as each of them assesses a specific characteristic of the performance of the map in a different area of the visual representation. The three used in this study are briefly presented in the following paragraphs.
Topographic Error [14]. It consists of finding the first two best matching units (BMUs) for each entry of the dataset and testing whether or not the second is in the direct neighbourhood of the first.
Distortion [18]. When a constant radius is used for the neighbourhood function during the learning phase of a SOM, the algorithm optimizes a particular function. This function can be used to quantify, in a more reliable way than the previous measure, the overall topology preservation of a map; it is called the distortion measure in this work.
Goodness of Map [13]. This measure combines two different error measures: the square quantization error and the distortion. It takes into account both the distance between the input and its BMU and the distance between the first and the second BMU along the shortest path between them on the map grid, calculated solely over units that are direct neighbours in the map.
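As an illustration of the first of these measures, the sketch below computes the topographic error for a 1-D map, assuming that two units are direct neighbours when their lattice indices differ by one; the distortion and goodness of map would be computed analogously from quantization and path distances.

```python
import numpy as np

def topographic_error(W, X):
    """Fraction of samples whose two best matching units are not lattice neighbours.

    W: (n_units, n_features) weights of a 1-D map; X: (n_samples, n_features).
    """
    errors = 0
    for x in X:
        d = np.linalg.norm(W - x, axis=1)    # distance of x to every unit
        bmu1, bmu2 = np.argsort(d)[:2]       # first and second best matching units
        if abs(int(bmu1) - int(bmu2)) != 1:  # not direct neighbours on the 1-D lattice
            errors += 1
    return errors / len(X)
```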
3 Previous Work: Fusion of SOM
Several algorithms for the fusion of maps have been tested and reviewed recently by the authors of this work [2, 4]. In the present study, two of them are employed. The first one, called Fusion based on Voronoi Polygons Similarity [22], determines the units of different maps that are suitable to be fused by comparing the input space covered by each unit [1], that is, by comparing what are called the Voronoi polygons of each unit. This summary is very good at recognizing the structure of the dataset and adapting to it in the input space, but it is not really able to represent that same dataset in a 2-D map; it is therefore of no use for dimensionality reduction and visualization tasks.
268
B. Baruque and E. Corchado
The second one, called Fusion based on Euclidean Distance [10], instead uses the classic Euclidean distance between units to determine their suitability to be fused. In this case the model is able to represent the dataset as a 2-D map, but the way it selects the neurons to fuse is approximate, so it is prone to minor errors in the topology preservation of the map.
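The following toy sketch illustrates only the general idea of distance-based fusion; the greedy pairing of nearest free units and the averaging of paired weights are simplifying assumptions of this sketch and not a faithful reproduction of the algorithm in [10].

```python
import numpy as np

def fuse_by_distance(Wa, Wb):
    """Greedy Euclidean-distance fusion of two maps with the same number of units."""
    fused = np.empty_like(Wa)
    free = list(range(len(Wb)))              # units of map B not paired yet
    for i, wa in enumerate(Wa):
        d = [np.linalg.norm(wa - Wb[j]) for j in free]
        j = free.pop(int(np.argmin(d)))      # approximate nearest still-free unit
        fused[i] = 0.5 * (wa + Wb[j])        # fused unit = centroid of the pair
    return fused
```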
4 Weighted Voting Superposition (WeVoS)
The novel algorithm presented in this work tries to overcome the problems outlined for the previously described models. The principal idea is to obtain the final units of the map by a weighted voting among the units in the same position in the different maps, according to a quality measure. This measure can be any found in the literature, as long as it can be calculated on a unit-by-unit basis. The voting process used is the one described in Eq. 3:

Vp,m = ( bp,m / Σ_{i=1..M} bp,i ) · ( qp,m / Σ_{i=1..M} qp,i )     (3)

where Vp,m is the weight of the vote for the unit in position p of map m of the ensemble, M is the total number of maps in the ensemble, bp,m is the binary vector used for marking the dataset entries recognized by the unit in position p of map m, and qp,m is the value of the desired quality measure for the unit in position p of map m. A detailed description of the WeVoS algorithm is included below as Algorithm 1.
Algorithm 1. Weighted Voting Superposition
1: Train several maps by using the bagging (re-sampling with replacement) meta-algorithm.
2: Calculate the chosen quality/error measure for the unit in each position (p) of each map (m).
3: Calculate an accumulated total of the quality/error for each position, Q(p), over all maps.
4: Calculate an accumulated total of the number of data entries recognized by each position, D(p), over all maps.
5: Initialize the fused map (fus) by calculating the centroid (w') of the units of all maps in each position (p).
6: For each map in the ensemble (m), calculate the vote weight of each of its units (p) by using Eq. 3.
7: For each unit (p) in each map (m), feed the weights vector of the unit (p) to the fused map (fus) as if it were an input to the map, using the vote weight calculated in step 6 as the learning rate and the index of that same unit (p) as the index of the BMU.
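A compact Python sketch of steps 5–7 of Algorithm 1 and of the vote of Eq. 3 is given below. It assumes that the per-unit quality values q and the per-unit counts of recognized data entries b have already been computed for every map (steps 1–4) and that larger q means better quality; these assumptions, and all names used, belong to the sketch rather than to the original description.

```python
import numpy as np

def wevos(maps, b, q):
    """Weighted Voting Superposition of an ensemble of equally sized maps.

    maps: list of M arrays (n_units, n_features); b, q: arrays (n_units, M) holding the
    number of recognized entries and the quality value of every unit of every map.
    """
    M = len(maps)
    n_units = maps[0].shape[0]
    # step 5: initialise the fused map with the centroid of the units at each position
    fused = np.mean(np.stack(maps), axis=0)
    for p in range(n_units):
        # step 6 / Eq. 3: vote weight of unit p of every map m
        V = (b[p] / max(b[p].sum(), 1e-12)) * (q[p] / max(q[p].sum(), 1e-12))
        for m in range(M):
            # step 7: feed the unit's weights to the fused map, vote weight as learning rate
            fused[p] += V[m] * (maps[m][p] - fused[p])
    return fused
```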
5 Experiments and Results
An artificial 2-dimensional dataset was created for testing and comparing the different algorithms described in this study. It was generated using classic Gaussian distributions centred at five different points of the 2-D data space, with the centres arranged in a "circular shape". Tests were run using classic five-fold cross-validation in order to use the complete dataset for training and testing. The ensembles were trained using one of the simplest meta-algorithms for ensemble training: the bagging meta-algorithm [3]. When a 1-D topology preserving map grid has to adapt to a circular-shaped dataset (see Fig. 1), the most appropriate shape to use should intuitively be circular, so the application of a SIM and a MAX-SIM is very
Fig. 1. The single SIM and the three summarizations for the same 6-network ensemble, all trained over the circular dataset employing the SIM learning algorithm: (a) a single SIM; (b) Fusion by Euclidean Distance; (c) Fusion by Voronoi Polygon Similarity; (d) Weighted Voting Superposition. The resulting 1-D networks are shown embedded in the 2-D dataset.
Fig. 2. Measures for the MAX-SIM model, both for the single version and for the summarization algorithms, obtained for the circular dataset: (a) Topographic Error; (b) Distortion; (c) Goodness of Adaptation.
appropriate, as they tend to generate circular-shaped grids, which are scale invariant. Fig. 1(a) to Fig. 1(d) show the results obtained by training a single SIM and an ensemble of 6 different SIMs over the circular-shaped dataset. The figures show the results of calculating each of the summarizing algorithms discussed for the ensembles. In this case (Fig. 1), all models use a circular shape in their adaptation to the dataset. The single SIM (Fig. 1(a)) adapts to the dataset correctly, but with a slightly too open map. The Fusion by Distance (Fig. 1(b)) suffers from an expected problem: several twists appear in its structure. Finding the actual closest unit to another one using the Euclidean distance would be an NP-complete problem, which is why the algorithm works with an approximation to find the units to fuse. For classification problems that does not seem to be an important issue, but for visual inspection this approach has the problem of not always strictly preserving the topography of the network. Regarding the Fusion by Similarity (Fig. 1(c)), it can easily be seen that, although the shape is correct, there are too many unnecessary units. This is because, with such a big dataset, it is impossible for a 1-D network to cover the data space correctly, so very few units have overlapping Voronoi polygons, i.e. recognize mainly the same data entries. The WeVoS-SIM (Fig. 1(d)) obtains an oval shape without many twists or dead neurons, providing the best results.
Results yielded by the application of the WeVoS-MAX-SIM are very similar to those obtained by the WeVoS-SIM. The figures corresponding to this second experiment are not shown due to space limits. Regarding the different summarization models, it can be observed that the single models adapt to the dataset correctly, but that their results can be improved by the use of the ensemble meta-algorithms. Among these summarization methods, WeVoS obtains a simple network, without major twists, that is better adapted to the shape of the dataset than the single model. As stated in the introduction, the aim of the novel model presented (WeVoS) is to obtain a truly topology preserving representation of the dataset in a map. Thus, the most important features to evaluate in this case are the neighbouring relationships of the units of the map and the continuity of the map. These features are assessed by the topographic error, the distortion and, to some extent, the goodness of map. Fig. 2 shows the three measures described in Section 2.1 obtained for the single model and for the three different summarization algorithms described in this work. What is shown for each measure is a comparison of the quality obtained by each of the four algorithms calculated over the same ensemble of maps. The plots show how the measures vary when the number of maps included in the summary increases from 1 to 15: the X-axis represents the number of maps, while the Y-axis represents the measure obtained. All three measures are errors, so the closer to 0, the better the result. As expected, according to the measures of topographic ordering, WeVoS obtains better results than the other models both for the SIM and for the MAX-SIM. The exception to this is the Fusion by Similarity; its results are not directly comparable, as the number of units it contains is different from the rest, which alters the results. The other three models behave in a more consistent way, with WeVoS obtaining the lowest error rates. As stated before, the preservation of neighbourhood and topographic ordering is the main concern of the presented meta-algorithm. As a confirmation of these results, again with the exception of the Fusion by Similarity, the goodness of adaptation of the map for WeVoS is the best of all models.
6 Conclusions
An algorithm to summarize an ensemble of topology preserving maps has been presented in this work. This algorithm aims to obtain the best possible topology preserving summary, so that it can be used as a reliable tool in data visualization. In the present work it has been applied to the Scale Invariant Map and to an extension of it, the Maximum Likelihood SIM. Tests performed on an artificial dataset show the algorithm to be useful, obtaining better results than other similar ones. Future work includes the application of this algorithm to other topology preserving
models and its combination with other ensemble generation algorithms in order to boost its performance. Real-life datasets will also be used in order to assess its usefulness for real problems.
Acknowledgements
This research has been partially supported through project BU006A08 by the Junta de Castilla y León.
References
1. Barna, G., Chrisley, R., Kohonen, T.: Statistical pattern recognition with neural networks. Neural Networks 1, 1–7 (1988)
2. Baruque, B., Corchado, E., Yin, H.: ViSOM Ensembles for Visualization and Classification. In: Sandoval, F., Prieto, A.G., Cabestany, J., Graña, M. (eds.) IWANN 2007. LNCS, vol. 4507, pp. 235–243. Springer, Heidelberg (2007)
3. Breiman, L.: Bagging Predictors. Machine Learning 24, 123–140 (1996)
4. Corchado, E., Baruque, B., Yin, H.: Boosting Unsupervised Competitive Learning Ensembles. In: de Sá, J.M., Alexandre, L.A., Duch, W., Mandic, D.P. (eds.) ICANN 2007. LNCS, vol. 4668, pp. 339–348. Springer, Heidelberg (2007)
5. Corchado, E., Fyfe, C.: Maximum Likelihood Topology Preserving Algorithms. In: U.K. Workshop on Computational Intelligence (2002)
6. Corchado, E., Fyfe, C.: The Scale Invariant Map and Maximum Likelihood Hebbian Learning. In: International Conference on Knowledge-Based & Intelligent Information & Engineering System (2002)
7. Corchado, E., MacDonald, D., Fyfe, C.: Maximum and Minimum Likelihood Hebbian Learning for Exploratory Projection Pursuit. Data Mining and Knowledge Discovery 8, 203–225 (2004)
8. Fyfe, C.: A scale-invariant feature map. Network: Computation in Neural Systems 7, 269–275 (1996)
9. Fyfe, C., Corchado, E.: Maximum likelihood Hebbian rules. In: European Symposium on Artificial Neural Networks (ESANN) (2002)
10. Georgakis, A., Li, H., Gordan, M.: An ensemble of SOM networks for document organization and retrieval. In: International Conference on Adaptive Knowledge Representation and Reasoning (AKRR 2005), pp. 6–141 (2005)
11. Heskes, T.: Balancing between bagging and bumping. In: Advances in Neural Information Processing Systems, vol. 9, pp. 466–472 (1997)
12. Johansson, U., Lofstrom, T., Niklasson, L.: Obtaining accurate neural network ensembles. In: International Conference on Computational Intelligence for Modelling, Control & Automation Jointly with International Conference on Intelligent Agents, Web Technologies & Internet Commerce, Proceedings, vol. 2, pp. 103–108.
13. Kaski, S., Lagus, K.: Comparing Self-Organizing Maps. In: Vorbrüggen, J.C., von Seelen, W., Sendhoff, B. (eds.) ICANN 1996. LNCS, vol. 1112, pp. 809–814. Springer, Heidelberg (1996)
14. Kiviluoto, K.: Topology preservation in self-organizing maps. In: IEEE International Conference on Neural Networks (ICNN 1996), vol. 1, pp. 294–299 (1996)
15. Kohonen, T.: The self-organizing map. Neurocomputing 21, 1–6 (1998)
16. Kohonen, T.: Self-Organizing Maps. Springer, Berlin (1995)
17. Kohonen, T., Lehtio, P., Rovamo, J., Hyvarinen, J., Bry, K., Vainio, L.: A principle of neural associative memory. Neuroscience 2, 1065–1076 (1977)
18. Lampinen, J.: On Clustering Properties of Hierarchical Self-Organizing Maps. Artificial Neural Networks 2, 1219–1222 (1992)
19. Petrakieva, L., Fyfe, C.: Bagging and Bumping Self Organising Maps. Computing and Information Systems Journal, 1352–1404 (2003)
20. Polani, D.: Measures for the Organization of Self-Organizing Maps. In: Self-organizing Neural Networks: Recent Advances and Applications. Studies in Fuzziness and Soft Computing, pp. 13–44 (2003)
21. Polzlbauer, G.: Survey and Comparison of Quality Measures for Self-Organizing Maps. In: Fifth Workshop on Data Analysis (WDA 2004), pp. 67–82. Elfa Academic Press, London (2004)
22. Saavedra, C., Salas, R., Moreno, S., Allende, H.: Fusion of Self Organizing Maps. In: Sandoval, F., Prieto, A.G., Cabestany, J., Graña, M. (eds.) IWANN 2007. LNCS, vol. 4507, pp. 227–234. Springer, Heidelberg (2007)
23. Vesanto, J., Sulkava, M., Hollen, J.: On the Decomposition of the Self-Organizing Map Distortion Measure. In: Proceedings of the Workshop on Self-Organizing Maps (WSOM 2003), pp. 11–16 (2003)
24. Voronoi, G.: Nouvelles applications des parametres continus a la theorie des formes quadratiques. Math. 133, 97–178 (1907)
Multivariate Decision Trees vs. Univariate Ones
Mariusz Koziol and Michal Wozniak
Chair of Systems and Computer Networks, Wroclaw University of Technology, Wybrzeze Wyspianskiego 27, 50-370 Wroclaw, Poland {mariusz.koziol,michal.wozniak}@pwr.wroc.pl
Summary. There is much current research into developing ever more efficient and accurate recognition algorithms, and decision tree classifiers are currently the focus of intense research. In this work, methods of univariate and multivariate decision tree induction are presented and their qualities are compared via computer experiments. Additionally, the reasons for parallelizing decision tree induction are discussed.
1 Introduction
The problem of pattern recognition accompanies us throughout our lives. Therefore, methods of automatic pattern recognition are one of the main trends in Artificial Intelligence. The aim of such a task is to classify an object into one of the predefined categories on the basis of observations of its features [13]. Such methods are applied in many practical areas [7, 9, 29, 32]. Numerous approaches have been proposed to construct efficient, high-quality classifiers, such as neural networks, statistical methods and symbolic learning [21]. Among the different concepts and methods of machine learning, decision tree induction is both attractive and efficient. The basic idea involved in any multistage approach is to break up a complex decision into several simpler classifications [27]. This paper presents two main concepts of decision tree induction: the first one returns a univariate decision tree and the second one creates a multivariate one. Examples of the above-mentioned approaches are presented and compared in this paper. Additionally, the parallelization of the methods under consideration is discussed. The content of the paper is as follows: Section 2 introduces the pattern recognition problem, the next section describes the problems of decision tree induction and the reasons for developing their parallel versions, Section 4 presents experiments which evaluate methods of multivariate and univariate decision tree generation, and the last section concludes the paper.
2 Pattern Recognition Model
The main aim of each pattern recognition algorithm is to construct a classifier Ψ which is able to assign an object to the appropriate class i ∈ M = {1, ..., M} on the basis of the observed features x:
Ψ (x) = i.
(1)
There are many methods of learning a classifier. Such methods solve the optimization problem of how to train an efficient classifier on the basis of a learning set, which consists of elements representing feature values and the corresponding class. In this context, "efficient" means high quality (i.e. a classifier whose average cost of misclassification is low), cheap in exploitation (i.e. the cost of feature acquisition is small) and cheap in construction (i.e. the cost of its learning, such as time, is also small). From this point of view, decision tree induction methods are very attractive approaches.
3 Decision Trees
The decision tree classifier is a possible approach to multistage pattern recognition. The synthesis of such a classifier is a complex problem, which involves the design of the decision tree structure, the selection of features used at each non-terminal node of the decision tree, and the choice of decision rules for performing the classification [22]. Decision tree induction algorithms have been in use for many years [1, 21]. They provide a method of approximating discrete functions that is adopted for the classification task. Decision tree induction is one of the most important classification methods and achieves very good classification quality in many practical decision support systems. One of the main advantages of these methods is that the obtained tree can easily be converted into a set of rules (each path through the tree is a decision rule). This form of knowledge is the most popular form of knowledge representation in expert systems [19].
3.1 Univariate Decision Tree
Many decision-tree algorithms have been developed. The most famous are ID3 [23] and its modification C4.5 [24]. ID3 is a typical decision-tree algorithm. It introduces information entropy as the measure for choosing the splitting attribute and trains a tree from the root to the leaves. The central choice in the ID3 algorithm is selecting "the best" attribute (which attribute to test at each node of the tree). The algorithm uses the information gain, which measures how well a given attribute separates the training examples according to the target classification. This measure is based on the Shannon entropy of the learning set S [23]. The information gain of an attribute A relative to the collection of examples S is defined as

Gain(S, A) = Entropy(S) − Σ_{v∈values(A)} ( |Sv| / |S| ) · Entropy(Sv),     (2)
where values(A) is the set of all possible values of attribute A and Sv is the subset of S for which A = v. As mentioned above, the C4.5 algorithm
is an extended version of ID3. It improves the attribute selection measure, avoids data overfitting, reduces error pruning, handles attributes with different weights, improves computing efficiency, handles missing values and continuous attributes, and performs other functions. Instead of the information gain used in ID3, C4.5 uses an information gain ratio [24]. There exist many other measures which could be used instead of Quinlan's proposition, such as the Gini metric used e.g. by CART [5] or the χ2 statistic.
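For illustration, Eq. 2 can be transcribed directly into a few lines of Python for a discrete attribute; the function names used here are of course arbitrary.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(values, labels):
    """Gain(S, A) = Entropy(S) - sum_v |S_v|/|S| * Entropy(S_v) for a discrete attribute."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(values):
        subset = [l for a, l in zip(values, labels) if a == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# example: the attribute separates the classes perfectly, so the gain equals Entropy(S) = 1 bit
print(information_gain(["a", "a", "b", "b"], [0, 0, 1, 1]))  # -> 1.0
```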
3.2 Multivariate Decision Tree
Let us consider the exemplary learning problem depicted in Fig. 1. Note that the following linear classifier (depicted by the dashed line) separates the classes well:

if y > (y1 / x1) · x then "dot" else "star".     (3)

A univariate decision tree prefers the more complex separating curve depicted by the solid line. The structure of the univariate tree related to the above-mentioned curve is depicted in the left diagram of Fig. 1. Therefore, a multivariate decision tree offers a simpler structure than a univariate one, and we can suppose that a simpler structure is less susceptible to overfitting. Moreover, univariate decision tree induction uses greedy search methods, with a local discrimination power criterion (like the information gain presented in (2)). In [8] Cover proved that the best pair of features from a set of attributes may not consist of the two individually best features; it can consist of two other features. There are many propositions of how to construct a multivariate decision tree. Some of them suggest using a classifier in each node, e.g. LMDT uses a linear classifier [6], while in [17] the authors propose to use a Bayesian one. An interesting approach was presented in [18], where a proposition called LMT uses the ID3 algorithm for discrete features and then linear regression for the remaining features. Another
Fig. 1. Exemplary learning problem and respective univariate decision tree
approach to multivariate decision tree induction suggests using traditional or heuristic feature selection methods in each node [4, 10].
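The contrast between the two kinds of node tests can be illustrated with the following toy sketch; the point and slope values are purely illustrative, and the second function simply encodes the linear test of Eq. 3 as an oblique split.

```python
def univariate_split(x, feature, threshold):
    """Axis-parallel test used in a univariate tree node."""
    return x[feature] > threshold

def multivariate_split(x, weights, threshold):
    """Oblique (linear-combination) test used in a multivariate tree node, cf. Eq. 3."""
    return sum(w * xi for w, xi in zip(weights, x)) > threshold

# the rule of Eq. 3, y > (y1/x1) * x, written as an oblique split over the point (x, y)
point = (2.0, 3.0)                 # illustrative point
x1, y1 = 4.0, 5.0                  # illustrative slope parameters
print(multivariate_split(point, weights=(-y1 / x1, 1.0), threshold=0.0))  # True -> "dot"
```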
3.3 Parallelization of Decision Tree
In [23] Quinlan notes that the computational complexity of ID3 (for discrete attributes) at each node of the tree is O(NLS · NA), where NLS is the number of examples in the learning set and NA is the number of attributes in the node. For continuous attributes the computational complexity is more than quadratic in the size of the learning set [25]. In that case, to speed up the examination of the candidates, ID3 sorts the examples using the continuous attribute as the sort key. The computational complexity of this operation is O(NLS · log2 NLS), which requires a very long time for large datasets. Another time-consuming problem is decision tree pruning, which protects the decision tree from overtraining. Its computational complexity is hard to estimate because it depends on the decision tree size. To deal effectively with huge databases, we need time-efficient parallel decision tree induction methods which can use a distributed network environment. There are several propositions of parallel decision tree algorithms. SLIQ [20] and its extension SPRINT [28] use a pre-sorting technique in the tree-growing phase and propose new pruning procedures. In [16] a data-distributed parallel formulation of C4.5 was shown; the author used only frequency statistics from the data to choose the best attribute. The parallel decision tree induction algorithm SPIES, with almost linear speedup, was presented in [13]. In [30] synchronous and partitioned decision tree induction algorithms were shown; additionally, the authors compared these propositions and formulated a hybrid algorithm. Interesting research was presented in [34], where the authors proposed three parallel versions of a univariate decision tree algorithm; results of experiments evaluating the dependence of the speedup on the number of processors were also shown in the cited work. The algorithms mentioned above concentrate on constructing a decision tree for a given learning set. If new data arrive, the algorithms have to start from the beginning, because the structure of a decision tree is hard to modify. For many practical problems (where databases grow slowly) this is not a disadvantage, but for fast-growing databases it could be a problem. A very interesting proposition of a parallel decision tree algorithm for streaming data can be found in [3]. The proposed method builds a decision tree using horizontal parallelism on the basis of an on-line method for building histograms from the incoming data; these histograms are used to create new nodes of the tree. The authors of the cited paper showed that the classification error of their distributed version of decision tree induction was slightly worse than that of the original one, but the error bound was acceptable for practical implementation. Some interesting observations and useful tips for decision tree construction on the basis of streaming data can be found in [13]. When decision tree algorithms are applied to huge databases they require significantly more computation. If we want to use multivariate decision tree induction algorithms in this case, then
the number of computations grows dramatically. This observation caused the idea of parallel decision tree induction algorithms to be introduced early [28]. Another reason for the need for parallelization may be data distribution and the cost (usually time) of data relocation [16]. The main parallelization strategies for univariate decision trees were discussed in [34].
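One of the simplest forms of such parallelism is to evaluate the candidate attributes of a node concurrently. The sketch below, which uses only the Python standard library, is meant as an illustration of attribute-level parallelism and is not a reconstruction of SLIQ, SPRINT or SPIES; the splitting criterion is passed in as a function (e.g. an information gain routine defined at module level).

```python
from concurrent.futures import ProcessPoolExecutor

def best_attribute(data, labels, n_attributes, gain_fn):
    """Evaluate the splitting criterion of every attribute in parallel and pick the best one.

    data: list of example rows (sequences of attribute values); gain_fn(values, labels)
    must be a picklable, module-level function. A __main__ guard is needed on some platforms.
    """
    columns = [[row[a] for row in data] for a in range(n_attributes)]
    with ProcessPoolExecutor() as pool:
        gains = list(pool.map(gain_fn, columns, [labels] * n_attributes))
    best = max(range(n_attributes), key=lambda a: gains[a])
    return best, gains[best]
```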
4 Experimental Investigation
The aim of the experiments is to compare the performance of classifiers based on univariate decision trees obtained via the C4.5 [24] and CART [5] procedures with the quality of classifiers based on multivariate decision trees obtained via the LMT, FT and NBT methods. LMT (Logistic Model Tree) [18, 31] returns a decision tree with logistic regression functions at the leaves. A tree obtained via FT (Functional Tree) can have logistic regression functions in the inner nodes [12, 18]. NBT (Naive Bayes Tree) returns a tree with naive Bayes classifiers in the leaves [15].
4.1 Experiment's Set-Up
Experiments were carried out on 26 benchmark databases from the UCI Machine Learning Repository [2], which are described in Table 1. The set-up of the experiments was as follows:
• All experiments were carried out in the WEKA environment [33] and in our own software developed in Java.
• Classifiers' errors were estimated using the ten-fold cross-validation method [14].
• The tree generation time is the average time of 10 generations of the tree based on the whole learning set. Of course, we realize that it depends on the quality of the implementation, but we believe that the above-mentioned algorithms were implemented well and that the tree generation time can be used to compare the computational complexity of the discussed methods.
Results of the experiments are presented in Table 2.
4.2 Experimental Results Evaluation
Firstly, one has to note that we are aware of the fact that the scope of the computer experiments was limited. Therefore, making general conclusions based on them is very risky. The main observations are as follows:
• Multivariate decision trees have almost always given the best recognition results. Only for 5 databases have univariate decision trees been slightly better than multivariate ones.
• Univariate decision tree induction algorithms have always returned a more complex tree than multivariate ones. The smallest trees have been built by LMT (sometimes consisting of a single node), and the biggest ones have been created by C4.5, which confirms our observation presented in Section 3.2.
Table 1. Databases' description

 id  database         #attributes  #classes  #examples
  1  arrythmia                279        16        452
  2  audiology                 69        24        226
  3  autos                     24         7        205
  4  balance-scale              4         3        625
  5  breast-w                   9         2        699
  6  colic                     22         2        690
  7  credit-a                  15         2        690
  8  credit-g                  20         2       1000
  9  dermatology               34         6        366
 10  diabetes                   8         2        768
 11  ecoli                      7         8        336
 12  flags                     29         8        194
 13  glass                      9         7        214
 14  heart-c                   13         5        303
 15  heart-h                   13         5        294
 16  hepatitis                 19         2        155
 17  hypothyroid               29         4       3772
 18  ionosphere                34         2        351
 19  iris                       4         3        150
 20  liver-disorder             6         2        345
 21  lung-cancer               57         2         32
 22  sonar                     60         2        208
 23  soybean                   37        19        683
 24  tic-tac-toe                9         2        958
 25  vote                      16         2        435
 26  wine                      14         3        178
• Univariate decision tree induction methods are time-efficient. They have built the trees much faster than the multivariate ones (e.g. LMT has been more than 400 times slower than C4.5 on average).
• In the presented experiments FT has achieved quite good quality (only a little worse than LMT) and its tree generation time has been acceptable (it has been the fastest multivariate decision tree induction method, only ca. 3 times slower than CART and ca. 30 times slower than C4.5).
Table 2. Results of experiments. Identifier of database is compatible with identifier in table 1. For each database in each row accuracy [%], time of tree generation [s] and number of tree nodes are presented respectively. FT C4.5 LMT CART NB 67.48 1 34.69 1 84.51 2 5.66 1 76.10 3 1.19 7 90.72 4 0.22 5 96.99 5 0.28 3 79.62 6 0.52 9 85.51 7 0.80 11 68.30 8 2.06 29 95.90 9 2.81 1 77.34 10 0.42 1 85.42 11 0.42 1 67.37 12 2.91 3 64.49 13 0.52 3
64.38 1.17 52 77.88 0.06 54 81.95 0.05 69 76.64 0.02 103 94.56 0.06 27 85.33 0.03 6 86.09 0.06 42 70.50 0.13 140 93.99 0.02 40 73.83 0.05 39 84.23 0.03 43 59.28 0.02 69 66.82 0.03 21
69.47 247.86 1 84.07 121.70 1 77.56 13.94 3 89.76 3.52 9 95.99 2.14 1 82.61 5.92 3 84.78 5.98 1 75.90 31.25 1 97.81 17.70 9 77.47 3.05 1 87.20 2.13 1 62.89 22.44 3 68.69 4.36 1
70.58 5.14 43 72.57 1.08 43 74.15 0.33 59 79.04 0.22 25 94.85 0.22 15 86.14 0.66 7 85.22 0.70 3 73.90 1.77 13 93.99 0.92 17 75.13 0.30 5 83.93 0.14 15 35.57 1.34 1 70.56 0.17 13
78.32 35.03 29 78.32 44.47 29 81.46 1.34 25 76.64 0.49 19 96.28 0.30 1 82.06 4.53 35 84.64 4.53 30 75.60 0.69 1 96.45 8.47 26 74.35 0.33 1 82.14 0.55 7 57.73 6.05 23 70.56 0.42 7
FT 85.51 14 0.45 9 81.63 15 0.47 9 81.29 16 0.11 3 99.34 17 4.50 7 90.31 18 0.38 19 99.67 19 0.09 1 75.07 20 0.25 7 71.88 21 0.11 1 79.81 22 0.31 5 94.29 23 15.13 1 94.99 24 0.92 9 95.40 25 0.34 5 98.88 26 0.14 1
C4.5 LMT CART NB 77.56 0.0.2 51 80.95 0.02 10 83.87 0.02 21 99.59 0.16 29 91.45 0.14 35 96.00 0.0.02 9 68.70 0.03 51 78.13 0.01 7 71.15 0.05 35 91.51 0.06 93 85.28 0.06 146 96.32 0.02 11 93.82 0.01 9
83.17 2.64 1 85.03 7.08 1 83.23 1.05 1 99.50 170.36 11 93.16 6.72 3 94.00 0.05 1 66.38 1.22 3 81.25 0.44 1 78.37 7.45 1 93.56 253.97 1 98.23 33.27 1 96.78 1.08 1 97.19 0.70 1
80.86 0.25 19 78.57 0.19 15 78.71 0.11 13 99.55 3.55 15 89.74 0.31 9 95.33 0.01 9 67.54 0.09 5 87.50 0.14 3 71.15 0.38 19 91.21 3.74 129 92.90 0.38 61 95.40 0.23 9 89.33 0.11 9
81.52 0.97 20 82.31 0.84 20 80.00 0.70 7 99.65 10.98 21 89.74 4.81 7 94.00 0.20 7 66.09 0.38 11 78.13 1.20 4 75.96 3.89 11 91.51 44.13 53 82.15 1.86 88 95.63 1.28 9 96.63 0.50 9
5 Conclusions
The problems of univariate and multivariate decision tree induction have been presented in this work. The advantages of several decision tree induction methods have been shown and evaluated on the basis of computer experiments. Classifiers based on the decision tree schema are both attractive and efficient, but their computational complexity is high. There are many propositions of how to parallelize them [13, 16, 20, 25, 28] without loss of classification accuracy, and it is worth researching their modification for streaming data [3]. The results of the experiments and the literature review of works connected with parallel decision tree induction methods encourage us to continue work on distributed methods of multivariate decision tree induction, especially on:
• efficient distributed methods of multivariate decision tree induction for cluster, grid and P2P environments,
• methods of privacy assurance for such methods,
• evaluation of the classification accuracy and computational complexity of the proposed concepts.
Acknowledgement
This work is supported by The Polish State Committee for Scientific Research under a grant realized in the years 2006–2009.
References
1. Alpaydin, E.: Introduction to Machine Learning. MIT Press, London (2004)
2. Asuncion, A., Newman, D.J.: UCI Machine Learning Repository. University of California, School of Information and Computer Science, Irvine (2007), http://www.ics.uci.edu/~mlearn/MLRepository.html
3. Ben-Haim, Y., Yom-Tov, E.: A streaming parallel decision tree algorithm. In: Proc. of ICML 2008 Workshop PASCAL Large Scale Learning Challenge, Helsinki, Finland (2008)
4. Blum, A.L., Langley, P.: Selection of Relevant Features and Examples in Machine Learning. Artificial Intelligence 97(1-2), 245–271 (1997)
5. Breiman, L., Friedman, J., Olshen, R., Stone, C.: Classification and Regression Trees, Wadsworth (1984)
6. Brodley, C.E., Utgoff, P.E.: Multivariate Decision Trees. Machine Learning 19(1), 45–77 (1995)
7. Buckinx, W., et al.: Using Machine Learning Techniques to Predict Defection of Top Clients. In: Proc. 3rd International Conference on Data Mining Methods and Databases, Bologna, Italy, pp. 509–517 (2002)
8. Cover, T.M.: The Best Two Independent Measurements are Not the Two Best. IEEE Transactions on Systems, Man and Cybernetics SMC-4(1), 116–117 (1974)
9. Crook, J.N., Edelman, D.B., Thomas, L.C.: Recent developments in consumer credit risk assessment. European Journal of Operational Research 183, 1447–1465 (2007)
10. Dash, M., Liu, H.: Feature Selection for Classification. Intelligent Data Analysis 1(1-4), 131–156 (1997)
11. Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification. John Willey and Sons, New York (2001)
12. Gama, J.: Functional Trees. Machine Learning 55(3), 219–250 (2004)
13. Jin, R., Agrawal, G.: Communication and memory efficient parallel decision tree construction. In: Proc. of the 3rd SIAM Conference on Data Mining, San Francisco, CA, pp. 119–129 (2003)
14. Kohavi, R.: A study of cross-validation and bootstrap for accuracy estimation and model selection. In: Proc. of the 14th Int. Joint Conf. on Artificial Intelligence, San Mateo, pp. 1137–1143 (1995)
15. Kohavi, R.: Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid. In: Proc. of the Second International Conference on Knowledge Discovery and Data Mining, pp. 202–207 (1996)
16. Kufrin, R.: Decision trees on parallel processors. In: Geller, J., Kitano, H., Suttner, C.B. (eds.) Parallel Processing for Artificial Intelligence, vol. 3, pp. 279–306. Elsevier Science, Amsterdam (1997)
17. Kurzynski, M.: The optimal strategy of a tree classifier. Pattern Recognition 16(1), 81–87 (1983)
18. Landwehr, N., Hall, M., Frank, E.: Logistic Model Trees. Machine Learning 95(1-2), 161–205 (2005)
19. Liebowitz, J. (ed.): The Handbook of Applied Expert Systems. CRC Press, Boca Raton (1998)
20. Mehta, M., et al.: SLIQ: A fast scalable classifier for data mining. In: Proc. of the 5th International Conference on Extending Database Technology, Avignon, France, pp. 18–32 (1996)
21. Mitchell, T.M.: Machine Learning. McGraw-Hill Comp., Inc., New York (1997)
22. Mui, J., Fu, K.S.: Automated classification of nucleated blood cells using a binary tree classifier. IEEE Trans. Pattern Anal. Mach. Intell., PAMI 2, 429–443 (1980)
23. Quinlan, J.R.: Induction on Decision Tree. Machine Learning 1, 81–106 (1986)
24. Quinlan, J.R.: C4.5: Program for Machine Learning. Morgan Kaufman, San Mateo (1993)
25. Paliouras, G., Bree, D.S.: The effect of numeric features on the scalability of inductive learning programs. In: Lavrač, N., Wrobel, S. (eds.) ECML 1995. LNCS, vol. 912, pp. 218–231. Springer, Heidelberg (1995)
26. Pearson, R.A.: A coarse grained parallel induction heuristic. In: Kitano, H., Kumar, V., Suttner, C.B. (eds.) Parallel Processing for Artificial Intelligence, vol. 2, pp. 207–226. Elsevier Science, Amsterdam (1994)
27. Safavian, S.R., Landgrebe, D.: A survey of decision tree classifier methodology. IEEE Trans. Systems, Man Cyber. 21(3), 660–674 (1991)
28. Shafer, J., et al.: SPRINT: A scalable parallel classifier for data mining. In: Proc. of the 22nd VLDB Conference, pp. 544–555 (1996)
29. Shen, A., Tong, R., Deng, Y.: Application of Classification Models on Credit Card Fraud Detection. In: Proc. of 2007 International Conference on Service Systems and Service Management, Chengdu, China, June 9-11, 2007, pp. 1–4 (2007)
30. Srivastava, A., et al.: Parallel formulations of decision tree classification algorithms. Data Mining and Knowledge Discovery 3(3), 237–261 (1999)
31. Sumner, M., Frank, E., Hall, M.: Speeding up Logistic Model Tree Induction. In: Jorge, A.M., Torgo, L., Brazdil, P.B., Camacho, R., Gama, J. (eds.) PKDD 2005. LNCS, vol. 3721, pp. 675–683. Springer, Heidelberg (2005)
32. Tang, T.-I., et al.: A Comparative Study of Medical Data Classification Methods Based on Decision Tree and System Reconstruction Analysis. Industrial Engineering and Management Systems 4(1), 102–108 (2005)
33. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann Pub., San Francisco (2000)
34. Yildiz, O.T., Dikmen, O.: Parallel univariate decision trees. Pattern Recognition Letters 28, 825–832 (2007)
On a New Measure of Classifier Competence in the Feature Space
Tomasz Woloszynski and Marek Kurzynski
Technical University of Wroclaw, Faculty of Electronics, Chair of Systems and Computer Networks, Wyb. Wyspianskiego 27, 50-370 Wroclaw, Poland [email protected]
Summary. This paper presents a new method for calculating the competence of a classifier in the feature space. The idea is based on relating the response of the classifier to the response obtained by random guessing. The measure of competence reflects this relation and rates the classifier with respect to random guessing in a continuous manner. Two multiclassifier systems representing the fusion and selection strategies were developed using the proposed measure of competence. The performance of the multiclassifiers was evaluated using five benchmark databases from the UCI Machine Learning Repository and the Ludmila Kuncheva Collection. Classification results obtained for three simple fusion methods and one multiclassifier system with a selection strategy were used for comparison. The experimental results showed that, regardless of the strategy used by the multiclassifier system, the classification accuracy increased when the measure of competence was employed.
1 Introduction
The measure of competence has been widely used in multiclassifier systems based on the selection strategy. The main idea of this strategy is based on the assumption that there exists a classifier in a given pool of trained classifiers which can provide the correct classification of a given object. Although such an assumption may not be true for all classification problems, the classifier selection approach has received much attention in the last decade [4, 8]. This is mainly due to the increasing popularity of multiclassifier systems and their ability to produce a strong classifier from a pool of relatively weak classification methods. Currently developed measures of competence evaluate the classifier by calculating its performance in some neighbourhood of the object being classified. As a result, at any point in the feature space the classifiers can be compared against each other and the most competent classifier(s) can be selected. There are two types of competence measures which can be employed [5]. The first type evaluates the classifier in a class-dependent fashion, i.e. the value of the competence depends on the class label assigned by the classifier to the object. The second type of competence measure acts in a
class-independent way, i.e. each classifier is evaluated regardless of the class label assigned to the object. In this paper we present a class-independent measure of competence. Unlike the methods briefly described above, the value of the competence is calculated with respect to the response obtained by random guessing. In this way it is possible to evaluate a group of classifiers against a common reference point. With such an approach, competent (incompetent) classifiers gain a meaningful interpretation, i.e. they are more (less) accurate than the random classifier.
2 The Measure of Classifier Competence
Consider an n-dimensional feature space X ⊆ Rn and a finite set of class numbers M = {1, 2, . . . , M}. Let

ψ : X → M     (1)

be a classifier which produces a set of discriminant functions (d1(x), d2(x), . . . , dM(x)) for a given object described by a feature vector x. The value of the discriminant function di(x), i = 1, 2, . . . , M, represents the support given by the classifier ψ to the i-th class. Without loss of generality we assume that di(x) > 0 and Σ_{i∈M} di(x) = 1. Classification is made according to the maximum rule, i.e.

ψ(x) = i ⇔ di(x) = max_{k∈M} dk(x).     (2)
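In code form, the maximum rule of Eq. 2 is simply an argmax over the supports (assumed here to be already normalized); class numbers are returned as 1-based indices to match the notation above.

```python
import numpy as np

def classify(supports):
    """Maximum rule (Eq. 2): return the class with the largest discriminant value (1-based)."""
    supports = np.asarray(supports, dtype=float)
    return int(np.argmax(supports)) + 1

print(classify([0.2, 0.5, 0.3]))  # -> 2
```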
We assume that, apart from a training and testing datasets, a validation dataset is also available. The validation dataset is given as VN = {(x1 , i1 ), (x2 , i2 ), . . . , (xN , iN )},
(3)
where xk ∈ X, k = 1, 2, . . . , N, denotes the feature vector representing the k-th object in the dataset and ik ∈ M denotes the object's class label. We define the source competence Kψ(xk) of the classifier ψ at a point xk ∈ X from the set (3) as:

Kψ(xk) = 2 dik(xk) − 1                                   for M = 2,
Kψ(xk) = log[ M(M−2) dik(xk) + 1 ] / log(M−1) − 1        for M > 2.     (4)

The values of the function Kψ(xk) lie within the interval [−1, 1], where the interval limits −1 and 1 describe an absolutely incompetent and an absolutely competent classifier, respectively. The function Kψ(xk) was defined in this way because it satisfies the following criteria:
• it is strictly increasing, i.e. when the support dik(xk) for the correct class increases, the competence Kψ(xk) also increases,
• it is equal to −1 (evaluates the classifier as absolutely incompetent) in the case of zero support for the correct class, i.e. dik(xk) = 0 ⇒ Kψ(xk) = −1,
• it is negative (evaluates the classifier as incompetent) in the case where the support for the correct class is lower than the probability of random guessing, i.e. dik(xk) ∈ [0, 1/M) ⇒ Kψ(xk) < 0,
• it is equal to 0 (evaluates the classifier as neutral or random) in the case where the support for the correct class is equal to the probability of random guessing, i.e. dik(xk) = 1/M ⇒ Kψ(xk) = 0,
• it is positive (evaluates the classifier as competent) in the case where the support for the correct class is greater than the probability of random guessing, i.e. dik(xk) ∈ (1/M, 1] ⇒ Kψ(xk) > 0,
• it is equal to 1 (evaluates the classifier as absolutely competent) in the case of maximum support for the correct class, i.e. dik(xk) = 1 ⇒ Kψ(xk) = 1.
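The source competence of Eq. 4 can be written as a short function; the check below reproduces the boundary values discussed in the criteria (−1, 0 and 1 for supports of 0, 1/M and 1).

```python
import math

def source_competence(support, M):
    """Source competence K (Eq. 4) for the support given to the correct class."""
    if M == 2:
        return 2.0 * support - 1.0
    return math.log(M * (M - 2) * support + 1.0) / math.log(M - 1) - 1.0

for s in (0.0, 1.0 / 5, 1.0):
    print(round(source_competence(s, M=5), 3))  # -> -1.0, 0.0, 1.0
```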
The source competence also depends on the number of classes in the classification problem. This dependence is shown in Fig. 1, where the value of the source competence is plotted against the support given by the classifier for the correct class. The competence of the classifier ψ at any given point x is defined as the weighted sum of the source competences Kψ(xk), k = 1, 2, . . . , N, with weights exponentially dependent on the distance ‖x − xk‖ between the points xk and x, namely:

Cψ(x) = Σ_{k=1..N} Kψ(xk) exp[ −‖x − xk‖ ].     (5)
Although any given metric can be used in the definition of the distance ‖x − xk‖, we propose a modified Euclidean distance of the following form:

‖x − xk‖ = ( 0.5 / hopt² ) (x − xk)ᵀ(x − xk),     (6)
Fig. 1. The source competence plotted against the support given by the classifier for the correct class, for different numbers of classes (M = 2, 3, 5, 10, 20, 50)
where the parameter hopt is the optimal smoothing parameter obtained for the validation dataset VN by the Parzen kernel estimation. The value of hopt ensures that the source competences calculated at points xk located outside the neighbourhood of the point x will be significantly suppressed.
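Eqs. 5 and 6 combine into the following sketch; the source competences of the validation points are assumed to be precomputed (e.g. with Eq. 4) and h_opt is treated here as a given constant rather than estimated by Parzen kernel estimation.

```python
import numpy as np

def competence(x, X_val, K_val, h_opt):
    """Competence C(x) (Eq. 5) using the modified Euclidean distance of Eq. 6.

    X_val: (N, n) validation feature vectors, K_val: (N,) their source competences.
    """
    diff = X_val - x
    dist = 0.5 / (h_opt ** 2) * np.sum(diff * diff, axis=1)  # Eq. 6
    return float(np.sum(K_val * np.exp(-dist)))              # Eq. 5
```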
3 Application to Multiclassifier Systems
The measure of competence can be incorporated in virtually any multiclassifier system, provided that X is a metric space. In this chapter we describe two multiclassifier systems based on the proposed measure of competence, each one employing a different strategy. Let us assume that we are given a set (pool) of trained base classifiers L = {ψ1, ψ2, . . . , ψL} and the validation dataset VN. We define the multiclassifier F1(x) to be the classifier with the highest positive competence value at the point x:

F1(x) = ψl(x) ⇔ Cψl(x) > 0 ∧ Cψl(x) = max_{k=1,2,...,L} Cψk(x).     (7)
The multiclassifier F1 uses a selection strategy, i.e. for each object described by a feature vector x it selects a single classifier to be used for classification. In the case where all classifiers have negative values of competence, classification is made according to the random classifier. The random classifier draws a class label using a discrete uniform distribution with probability value 1/M for each class. The multiclassifier F2 represents a fusion approach, where the final classification is based on the responses given by all competent classifiers:

F2(x) = Σ_{l∈Lpos} Cψl(x) ψl(x),     (8)
where the set Lpos contains indices of classifiers with positive values of competence. Again, the random classifier is used in the case where all classifiers have negative values of competence. The classification is made according to the maximum rule given in (2).
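A minimal sketch of the two systems is given below; base classifiers are represented only by their support vectors for a given x and by their competence values, and the uniform random classifier is used as the fallback, as described above. The concrete numbers in the usage lines are purely illustrative.

```python
import numpy as np

def f1_select(supports, competences, rng):
    """Selection strategy (Eq. 7): use the most competent classifier, if any is competent."""
    competences = np.asarray(competences, dtype=float)
    if np.all(competences <= 0):
        return int(rng.integers(supports.shape[1])) + 1     # random classifier
    best = int(np.argmax(competences))
    return int(np.argmax(supports[best])) + 1

def f2_fuse(supports, competences, rng):
    """Fusion strategy (Eq. 8): competence-weighted combination of the competent classifiers."""
    competences = np.asarray(competences, dtype=float)
    pos = competences > 0
    if not np.any(pos):
        return int(rng.integers(supports.shape[1])) + 1     # random classifier
    combined = competences[pos] @ supports[pos]             # weighted sum of supports
    return int(np.argmax(combined)) + 1

rng = np.random.default_rng(0)
supports = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])     # two classifiers, three classes
print(f1_select(supports, [0.4, 0.1], rng), f2_fuse(supports, [0.4, 0.1], rng))  # -> 1 1
```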
4 Experiments
4.1 Benchmark Data and Experimental Setup
Benchmark databases used in the experiments were obtained from the UCI Machine Learning Repository [1] (Glass, Image segmentation, Wine) and the Ludmila Kuncheva Collection [6] (Laryngeal3 and Thyroid). The selected databases represent classification problems with objects described by continuous feature vectors. For each database, feature vectors were normalized to zero mean and unit standard deviation (SD). Three datasets were generated
from each database, i.e. a training, a validation and a testing dataset. The training dataset was used to train the base classifiers. The values of the competence for each base classifier were calculated using the validation dataset. The testing dataset was used to evaluate the accuracy of the tested classification methods. A brief description of each database is given in Table 1. For each database, 30 trials with the same settings were conducted, and the accuracy of each classifier and multiclassifier was calculated as the mean (SD) value over these 30 trials. In this way it was possible to evaluate both the accuracy and the stability of the examined multiclassifier systems.
4.2 Classifiers
The following set of base classifiers was used in the experiments [2]:
1. LDC — linear discriminant classifier based on normal distributions with the same covariance matrix for each class
2. QDC — quadratic discriminant classifier based on normal distributions with a different covariance matrix for each class
3. NMC — nearest mean classifier
4. 1-NN — nearest neighbour classifier
5. 5-NN — k-nearest neighbours classifier with k = 5
6. 15-NN — k-nearest neighbours classifier with k = 15
7. PARZEN1 — Parzen classifier with the Gaussian kernel and optimal smoothing parameter hopt
8. PARZEN2 — Parzen classifier with the Gaussian kernel and smoothing parameter hopt/2
9. TREE — tree classifier with the Gini splitting criterion and pruning level set to 3
10. BPNN1 — feed-forward backpropagation neural network classifier with two hidden layers (2 neurons in each layer) and the maximum number of epochs set to 50
11. BPNN2 — feed-forward backpropagation neural network classifier with one hidden layer (5 neurons) and the maximum number of epochs set to 50

Table 1. A brief description of each database; %training, %validation and %testing indicate the percentage of objects used for generation of the training, validation and testing dataset, respectively

 Database      #classes  #objects  #features  %training  %validation  %testing
 Glass                6       214          9         30           40       100
 Image segm.          7      2310         19         20           30       100
 Wine                 3       178         13         20           40       100
 Laryngeal3           3       353         16         20           40       100
 Thyroid              3       215          5         20           40       100
The proposed multiclassifier systems were compared against a classifier selection method with class-independent competence estimation of each classifier (CS-DEC) [7] and against two groups of simple fusion methods. Group A contained the sum, product and majority vote fusion methods used with all base classifiers. Group B contained the same three fusion methods, with the exception that only the base classifiers with a positive value of competence Cψ(x) were used. If no competent base classifier was available for a given object x, the classification was made using the random classifier.
4.3 Results and Discussion
The results obtained for the base classifiers are shown in Table 2. It can be seen from the table that the set of base classifiers provided the diversity needed in the multiclassifier systems, i.e. there was no single superior classifier and the range of classification scores was large (high values of the SD). The best overall accuracy averaged over all databases was achieved by the nearest neighbour classifier 1-NN (85.8%), followed closely by the two Parzen classifiers: PARZEN2 (85.6%) and PARZEN1 (84.9%). The lowest averaged classification scores were obtained for the quadratic discriminant classifier QDC (47.6%) and the neural network classifier BPNN1 (62.6%). The relatively low accuracy of the neural network classifier can be explained by the fact that the learning process was stopped after just 50 epochs. The results obtained for the multiclassifier systems are presented in Table 3. It can be noticed from the table that the group of fusion methods which used all base classifiers (group A) achieved the lowest classification accuracies. This indicates that weak classifiers from the pool can noticeably affect the sum, product and majority vote fusion methods. However, the same fusion methods combined with the measure of competence (group B) produced classification scores which were, on average, 10% higher (e.g. in the case of the product multiclassifier and the Glass database the improvement was over 50%).

Table 2. The results obtained for the base classifiers. The best score for each database is highlighted.
Mean (SD) accuracy [%]:
 Classifier  Glass       Image seg.  Laryngeal3  Thyroid     Wine
 LDC         60.1(4.1)   91.0(0.7)   70.2(2.9)   90.9(2.8)   95.5(1.7)
 QDC         20.5(13.7)  79.2(6.5)   34.6(15.8)  55.3(34.3)  48.5(19.1)
 NMC         45.8(4.6)   83.6(1.8)   67.4(3.6)   91.9(2.8)   95.7(1.1)
 1-NN        75.5(2.7)   93.4(0.7)   72.5(2.9)   92.7(2.5)   94.8(1.9)
 5-NN        65.1(2.3)   90.0(0.8)   71.4(1.8)   86.7(3.5)   94.5(2.3)
 15-NN       52.2(5.6)   85.9(0.9)   70.1(3.3)   71.8(2.9)   87.6(7.5)
 PARZEN1     70.3(2.7)   93.2(0.7)   74.9(2.4)   90.9(2.1)   95.1(2.0)
 PARZEN2     74.7(2.7)   93.0(0.7)   72.6(2.8)   92.9(2.1)   94.8(1.9)
 TREE        61.1(6.8)   81.5(4.8)   66.6(4.6)   84.7(5.5)   81.5(6.9)
 BPNN1       46.3(8.1)   40.9(6.9)   64.8(4.6)   84.8(8.4)   76.3(16.5)
 BPNN2       48.0(8.3)   41.1(6.7)   64.5(5.0)   86.4(8.5)   81.7(13.0)
Table 3. The results obtained for the multiclassifier systems; fusion methods marked with A used all base classifiers, fusion methods marked with B used only the set of base classifiers with a positive value of competence Cψ(x). The best score for each database is highlighted.
Mean (SD) accuracy [%]:
 Multiclassifier  Glass       Image seg.  Laryngeal3  Thyroid    Wine
 SumA             50.7(19.6)  93.0(1.9)   73.9(9.4)   91.9(2.9)  95.9(2.0)
 ProductA         23.2(11.0)  91.0(2.9)   45.1(16.4)  89.5(9.0)  81.0(11.6)
 Majority voteA   68.8(3.9)   93.5(0.8)   74.9(1.7)   91.3(2.5)  95.6(2.0)
 SumB             75.2(2.5)   95.6(0.7)   81.3(1.7)   94.7(1.4)  97.7(1.0)
 ProductB         75.2(3.0)   95.7(0.8)   80.2(1.8)   95.3(1.3)  96.4(1.7)
 Majority voteB   71.9(3.7)   94.0(0.7)   79.3(1.5)   93.3(1.6)  97.0(1.3)
 CS-DEC           77.9(2.1)   94.4(0.6)   80.6(1.6)   94.9(1.9)  97.0(1.3)
 F1               78.4(2.7)   95.7(0.5)   80.6(1.3)   95.6(1.1)  97.1(1.2)
 F2               76.9(2.7)   95.9(0.6)   81.5(1.7)   95.5(1.3)  97.7(1.1)
This can be explained by the fact that for each input object x only classifiers that are presumably more accurate than random guessing were used in the classification process. It can be shown that a set of weak base classifiers, where each classifier performs just slightly better than the random classifier, can be turned into a powerful classification method; such an approach has been successfully used in boosting algorithms [3]. The multiclassifier CS-DEC used for comparison produced better classification scores averaged over all databases (89%) than the fusion methods from groups A and B. However, it was outperformed by the two developed multiclassifier systems F1 and F2 (both 89.5%). The multiclassifier F1 based on the selection strategy produced the best stability (an SD value of 1.4% averaged over all databases), followed by F2, CS-DEC and the group B sum multiclassifier (all 1.5%). The results obtained indicate that the proposed measure of competence produced accurate and reliable evaluations of the base classifiers over the whole feature space. This in turn enabled the multiclassifier systems to perform equally well for all input objects x. This can be explained by the fact that the set of base classifiers was diversified, i.e. for each database and each object x to be classified, at least one competent classifier was always available (the random classifier in the F1 and F2 methods was never used).
5 Conclusions
A new method for calculating the competence of a classifier in the feature space was presented. Two multiclassifier systems incorporating the competence measure were evaluated using five databases from the UCI Machine Learning Repository and the Ludmila Kuncheva Collection. The results obtained indicate that the proposed measure of competence can eliminate weak (worse
than random guessing) classifiers from the classification process. At the same time, strong (competent) classifiers were selected in such a way that the final classification accuracy was always better than that of the single best classifier from the pool. Simple fusion methods (sum, product and majority vote) displayed the greatest improvement when combined with the measure of competence. The two developed multiclassifier systems, based on the selection strategy (F1) and the fusion strategy (F2), achieved both the best classification scores and the best stability, and outperformed the other classification methods used for comparison. The experimental results showed that the idea of calculating the competence of a classifier by relating its response to the response obtained by random guessing is correct, i.e. a group of competent classifiers provided better classification accuracy than any of the base classifiers, regardless of the strategy employed in the multiclassifier system.
References
1. Asuncion, A., Newman, D.: UCI Machine Learning Repository. University of California, Department of Information and Computer Science, Irvine (2007), http://www.ics.uci.edu/~mlearn/MLRepository.html
2. Duda, R., Hart, P., Stork, D.: Pattern Classification. Wiley-Interscience, Hoboken (2001)
3. Freund, Y., Schapire, R.: Experiments with a new boosting algorithm. In: Proceedings of the Thirteenth International Conference on Machine Learning, pp. 148–156 (1996)
4. Giacinto, G., Roli, F.: Design of effective neural network ensembles for image classification processes. Image Vision and Computing Journal 19, 699–707 (2001)
5. Kuncheva, L.: Combining Pattern Classifiers. Wiley-Interscience, Hoboken (2004)
6. Kuncheva, L.: Collection, http://www.informatics.bangor.ac.uk/~kuncheva/activities/real_data_full_set.htm
7. Rastrigin, L.A., Erenstein, R.H.: Method of Collective Recognition. Energoizdat, Moscow (1981)
8. Woods, K., Kegelmeyer, W.P., Bowyer, K.: Combination of multiple classifiers using local accuracy estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence 19, 405–410 (1997)
Dominance-Based Rough Set Approach Employed in Search of Authorial Invariants Urszula Stanczyk Institute of Informatics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice [email protected]
Summary. Mechanisms for interpretation and manipulation of data required in decision support systems quite often deal with cases when knowledge is uncertain or incomplete. Dominance-based Rough Set Approach is an example of such a methodology, dedicated to the analysis of data with ordinal properties, with the dominance relation substituting for the indiscernibility of objects as defined by the Classical Rough Set Approach. The paper presents an application of DRSA procedures to the problem of stylometric analysis of literary texts, which, through its primary concept of authorial invariants, makes it possible to identify the authors of unattributed or disputed texts.
1 Introduction

Classical Rough Set Approach (CRSA) was first proposed and then developed by Zdzislaw Pawlak [7] in the early 1980s, offering a new approach to the analysis of imprecise knowledge about the Universe, perceived through granules of objects that can be discerned based on the values of their attributes. Thus the indiscernibility relation is one of the fundamental concepts in CRSA. While CRSA allows only for establishing whether objects are discernible or not, Dominance-based Rough Set Approach (DRSA), proposed by S. Greco, B. Matarazzo and R. Slowinski [5, 9, 6], also enables ordinal evaluations by replacing the indiscernibility relation with the dominance relation. Hence DRSA observes not only the presence or absence of a property, but monotonic relationships within data as well, and can be seen as a generalisation of CRSA. Rough Set theory provides tools for the construction of decision algorithms from extracted rules of the "IF . . . THEN . . . " type and can be counted among other rule-based computational intelligence techniques applied in decision support and knowledge mining systems for the execution of classification tasks. Artificial intelligence techniques are often employed in contemporary stylometric analysis, which is a study of texts that yields information on the linguistic styles of their authors. Stylometry can be seen as a successor of historical textual analysis dedicated to proving or disproving the authenticity or authorship of texts by means of comparisons of documents. These comparisons
had in view finding some similarities and shared properties between texts, relying on the human ability to perceive patterns or striking elements. In contrast, modern stylometry employs the computational power of computers and studies even common parts of speech, which is much more reliable as they are used by writers subconsciously and reflect individual writing habits. Such scientific investigation into the numerical measurement of style goes back to the late XIX century and the works of Mendenhall [8], who was the first to propose using quantitative as opposed to qualitative text descriptors. Stylometric analysts agree that there exists a so-called authorial invariant, which is considered the primary stylometric concept. It is a property of a text that is similar for all texts written by the same author and significantly different in texts by different authors. Authorial invariants can be used to discover plagiarism, recognise the real authors of anonymously published texts, resolve disputed authorship of literature in academic and literary applications, and even in criminal investigations in the area of forensic linguistics, for example to verify ransom notes. Unfortunately, the question of what features of a text can constitute authorial invariants has been argued for decades and still stands open. Some researchers propose to use lexical properties, while others prefer syntactic, structural or content-specific ones [1]. This paper presents an application of the Dominance-based Rough Set Approach to the problem of author identification for literary texts, with the search for authorial invariants conducted among lexical and syntactic properties.
2 Stylometry

Stylometric analysis provides descriptors of the linguistic styles of authors, which can be used to study these styles and employed in the tasks of author characterisation, which brings conclusions about social background or education, similarity detection, which allows finding properties shared by authors, and authorship attribution, considered to be of primary importance, which answers the question of author identity. The underlying notion in this stylometric research is that works by different authors can be distinguished by quantifiable features of the text [1]. To be of use, the textual features selected as descriptors for analysis must constitute an authorial invariant, a characteristic that remains unchanged for all documents by one writer and is different in texts by other authors. Since modern stylometry operates on electronic formats of documents rather than on handwritten manuscripts, in this context authorial invariants are also called "cyber fingerprints", "cyberprints" or "writerprints". Linguistic descriptors are usually classified into four categories: lexical, syntactic, structural and content-specific. As lexical attributes there are used such statistics as the total or average number of words, the distribution of word length, or the frequency of usage of individual letters. Syntactic features describe patterns of sentence construction formed by punctuation, structural attributes reflect the general layout of the text and elements like font type or
hyperlink, and as content-specific descriptors there are considered words of higher importance or with specific relevance to some domain [8]. Many of the measures used are strongly dependent on the length of the text and so are difficult to apply reliably. Thus the selection of features is one of the crucial factors in stylometric studies, and it is not only a task-dependent problem, but also to some degree determined by the methodologies employed. These usually are either statistic-oriented or artificial intelligence techniques. Statistical techniques rely on computations of probabilities and distributions of occurrences for single characters, words and word patterns. Representatives of this group include Principal Component Analysis, Linear Discriminant Analysis, and Markovian Models. Artificial intelligence techniques are highly efficient when dealing with large data sets. Examples from this group include Genetic Algorithms, Artificial Neural Networks, and Rough Set Theory, discussed in more detail in the next section.
3 Rough Set Theory

Rough set theory proved to be a useful tool for the analysis of imprecise descriptions of situations where decision support is required. It provides information about the role of particular attributes, preparing the ground for the representation of knowledge by means of decision rules that comprise decision algorithms. In classification tasks the attributes $q \in Q$ describing objects of the Universe $U$ are divided into two sets, called conditional attributes $C$ and decision attributes $D$. Then information about the Universe can be expressed in the form of a Decision Table DT, which is defined as the 5-tuple

$$DT = \langle U, C, D, v, f \rangle \qquad (1)$$

with $U$, $C$, and $D$ being finite sets, $v$ such a mapping that to every $q \in C \cup D$ it assigns its finite value set $V_q$ (the domain of attribute $q$), and $f$ the information function $f : U \times (C \cup D) \rightarrow V$, with $V = \bigcup_{q \in C \cup D} V_q$, and for all $x$ and $q$, $f(x, q) = f_x(q) \in V$. Thus the columns of a Decision Table specify the conditional and decision attributes defined for objects of the Universe, and the rows provide the values of these attributes for all objects. Each row is also a decision rule, as for specified values of the condition attributes the values of the decision attributes are given.
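As a rough illustration of this structure (not part of the original paper), a decision table can be stored as a plain mapping from objects to attribute values; the attribute names and values below are made up.

```python
# A toy decision table DT = <U, C, D, v, f>: rows are objects of the Universe U,
# columns are condition attributes C and decision attributes D.
C = ["a1", "a2"]            # condition attributes (hypothetical names)
D = ["class"]               # decision attribute
DT = {
    "x1": {"a1": 1, "a2": 0, "class": "yes"},
    "x2": {"a1": 1, "a2": 1, "class": "no"},
    "x3": {"a1": 0, "a2": 1, "class": "no"},
}

def f(x, q):
    """Information function f: U x (C u D) -> V."""
    return DT[x][q]

# each row read along C and D is a decision rule, e.g. IF a1=1 AND a2=0 THEN class=yes
print(f("x1", "a1"), f("x1", "class"))
```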
3.1 Classical Rough Set Approach
The fundamental concept of Classical Rough Set Approach is the indiscernibility relation which using available information (values of attributes Q) about objects in the Universe partitions the input space into some number of equivalence classes [x]Q that are such granules of knowledge, within which single objects cannot be discerned [7].
The indiscernibility relation leads to the lower $\underline{Q}X$ and upper $\overline{Q}X$ approximations of sets, the first comprised of objects whose whole equivalence classes are included in the set, the second consisting of objects whose equivalence classes have non-empty intersections with the set. If the set difference between the upper and lower approximation of some set, called the boundary region of this set, is not empty then the set is said to be rough, otherwise it is crisp. CRSA cannot deal with preference order in the value sets of condition and decision attributes, thus it supports classification only when it is nominal.

3.2 Dominance-Based Rough Set Approach
Dominance-based Rough Set Approach has been proposed to deal with cases when the value sets of the attributes describing objects of the Universe are ordered. The ordering of data in decision problems is naturally related to preferences on the considered condition and decision attributes. In classification problems condition attributes are called criteria, and with many of them the problem becomes that of multicriteria classification or decision making [5]. While the indiscernibility principle of CRSA says that if two objects x and y are indiscernible with respect to the considered attributes then they should both be classified in the same way (to the same class), the dominance or Pareto principle of DRSA states that if x is at least as good as y with respect to the considered attributes, then x should be classified at least as good as y. Let $\succeq_q$ be a weak preference relation on U representing a preference on the set of objects with respect to criterion q. $x \succeq_q y$ means that "x is at least as good as y with respect to criterion q". x dominates y with respect to $P \subseteq C$ (x P-dominates y) if for all $q \in P$, $x \succeq_q y$; it is denoted as $xD_P y$. While in CRSA granules of knowledge are objects that cannot be discerned, in DRSA these granules are: a set of objects dominating x, called the P-dominating set $D_P^+(x) = \{y \in U : yD_P x\}$, and a set of objects dominated by x, called the P-dominated set $D_P^-(x) = \{y \in U : xD_P y\}$. It often happens that the set of decision attributes contains just a single attribute $D = \{d\}$. d partitions the Universe into a finite number of classes $Cl = \{Cl_t\}$, with $t = 1, \ldots, n$. The classes are ordered and increasing preference is indicated by increasing indices. Due to this preference order present in the set of classes Cl, the sets to be approximated are upward or downward unions of classes, or dominance cones, respectively defined as

$$Cl_t^{\geq} = \bigcup_{s \geq t} Cl_s, \qquad Cl_t^{\leq} = \bigcup_{s \leq t} Cl_s \qquad (2)$$

With $P \subseteq C$ and $t = 1, \ldots, n$, the P-lower approximation of $Cl_t^{\geq}$, $\underline{P}(Cl_t^{\geq})$, is the set of objects belonging to $Cl_t^{\geq}$ without any ambiguity, while the P-upper approximation of $Cl_t^{\geq}$, $\overline{P}(Cl_t^{\geq})$, is the set of objects that could belong to $Cl_t^{\geq}$:

$$\underline{P}(Cl_t^{\geq}) = \{x \in U : D_P^+(x) \subseteq Cl_t^{\geq}\}, \qquad \overline{P}(Cl_t^{\geq}) = \{x \in U : D_P^-(x) \cap Cl_t^{\geq} \neq \emptyset\} \qquad (3)$$
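The granules and approximations defined above can be computed directly from a small data set; the sketch below only illustrates formula (3) with made-up objects and criteria, and is not the software used in the experiments.

```python
def dominates(x, y, P):
    """True if object x P-dominates y, i.e. x is at least as good as y on every criterion in P."""
    return all(x[q] >= y[q] for q in P)

def lower_upper_approx(U, P, upward_union):
    """P-lower and P-upper approximations of an upward union of classes Cl_t>=.

    U            -- dict: object id -> dict of criterion values
    P            -- list of criteria
    upward_union -- set of object ids belonging to Cl_t>=
    """
    lower, upper = set(), set()
    for x in U:
        D_plus = {y for y in U if dominates(U[y], U[x], P)}    # objects dominating x
        D_minus = {y for y in U if dominates(U[x], U[y], P)}   # objects dominated by x
        if D_plus <= upward_union:
            lower.add(x)
        if D_minus & upward_union:
            upper.add(x)
    return lower, upper

# toy data with two criteria and a two-class ordering (class 2 preferred to class 1)
U = {"a": {"q1": 3, "q2": 2}, "b": {"q1": 1, "q2": 1}, "c": {"q1": 2, "q2": 3}}
print(lower_upper_approx(U, ["q1", "q2"], upward_union={"a", "c"}))
```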
The differences between the upper and lower approximations make it possible to define the boundary regions of $Cl_t^{\geq}$ and $Cl_t^{\leq}$ with respect to P, that is the P-boundaries of $Cl_t^{\geq}$ and $Cl_t^{\leq}$, denoted as $Bn_P(Cl_t^{\geq})$ and $Bn_P(Cl_t^{\leq})$ respectively:

$$Bn_P(Cl_t^{\geq}) = \overline{P}(Cl_t^{\geq}) - \underline{P}(Cl_t^{\geq}), \qquad Bn_P(Cl_t^{\leq}) = \overline{P}(Cl_t^{\leq}) - \underline{P}(Cl_t^{\leq}) \qquad (4)$$

The quality of approximation of Cl by the criteria $P \subseteq C$ can be defined as

$$\gamma_P(Cl) = \frac{\left| U - \bigcup_{t \in \{2,\ldots,n\}} Bn_P(Cl_t^{\geq}) \right|}{|U|} \qquad (5)$$
Each irreducible subset $P \subseteq C$ such that $\gamma_P(Cl) = \gamma_C(Cl)$ is called a reduct and denoted by $RED_{Cl}$. A decision table can have many reducts and their intersection is called a core, $CORE_{Cl}$. The approximation of dominance cones is the starting point for the induction of decision rules, which can be:
• certain D≥-rules, supported by objects belonging to $Cl_t^{\geq}$ without ambiguity,
• possible D≥-rules, supported by objects belonging to $Cl_t^{\geq}$ with or without ambiguity,
• certain D≤-rules, supported by objects belonging to $Cl_t^{\leq}$ without ambiguity,
• possible D≤-rules, supported by objects belonging to $Cl_t^{\leq}$ with or without ambiguity,
• approximate D≥≤-rules, supported by objects belonging to $Cl_s \cup Cl_{s+1} \cup \ldots \cup Cl_t$ without the possibility of discerning the class.
A set of decision rules is complete when every object of the decision table can be classified into one or more groups according to the rules, that is no object remains unclassified. A set of rules is minimal when it is complete and irredundant, that is the exclusion of any rule makes the set incomplete.
4 Experiments

The input data for the experiments were literary works of Polish writers: H. Sienkiewicz ("Rodzina Połanieckich" and "Quo vadis", "Potop" and "Krzyżacy") and B. Prus ("Emancypantki" and "Placówka", "Lalka" and "Faraon"). The training and testing sets were each composed of 10 samples from 2 novels of each writer, giving totals of 40 samples for both. Two sets of textual descriptors were applied as authorial invariants: the usage of function words from the lexical category and of punctuation marks from the syntactic one. The sets were constructed as follows (a sketch of the corresponding frequency extraction is given after the list):
• Set 1 - Lexical: "ale", "i", "nie", "to", "w", "z", "że", "za", "na" (which can roughly be expressed in English as "but", "and", "not/no", "then", "in", "with", "that/which", "too", "on");
• Set 2 - Syntactic: a comma, a semicolon, a fullstop, a bracket, a quotation mark, an exclamation mark, a question mark, a colon.
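The frequency extraction itself reduces to counting markers in each text sample. The sketch below is a minimal illustration and not the dedicated software mentioned further on; in particular, the normalisation (per word for lexical markers, per character for punctuation) is an assumption, as the paper does not specify it.

```python
import re
from collections import Counter

FUNCTION_WORDS = ["ale", "i", "nie", "to", "w", "z", "że", "za", "na"]   # Set 1
PUNCTUATION = [",", ";", ".", "(", '"', "!", "?", ":"]                    # Set 2

def descriptor_frequencies(sample):
    """Return relative frequencies of the lexical and syntactic markers in a text sample."""
    words = re.findall(r"\w+", sample.lower(), flags=re.UNICODE)
    n_words = max(len(words), 1)
    n_chars = max(len(sample), 1)
    word_counts = Counter(words)
    lexical = {w: word_counts[w] / n_words for w in FUNCTION_WORDS}
    syntactic = {c: sample.count(c) / n_chars for c in PUNCTUATION}
    return lexical, syntactic

lex, syn = descriptor_frequencies("Ale to nie jest proste; i na pewno nie za darmo, że tak powiem.")
print(lex["nie"], syn[";"])
```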
Dedicated software applied to the data returned the occurrence frequencies of all markers. As frequencies are continuous values, they are not directly applicable in CRSA, which works on discrete data. In such a case it is possible to use some discretisation strategy, by defining a discretisation factor such as proposed in [2], or a modified indiscernibility relation applicable to continuous attributes [3]. On the other hand, the data is ready for DRSA since there is clearly an ordering in the value sets. However, from the stylometric point of view it cannot be definitely stated whether the attributes are preference ordered, as it would imply that some greater or lower frequency is preferable to others, and so far there is no proof of that in stylometric research. Only one decision attribute is required, with the choice of one out of two authors, thus the two distinguished classes are defined by the different authors. The decision tables corresponding to the two sets of textual descriptors were subjected to rough set analysis by the 4eMka software, starting with the calculation of reducts. For both decision tables the core of the reducts is the empty set. For DT1 (lexical descriptors) there are just three reducts, each consisting of a single attribute: "że", "za" and "na". The DOMLEM rule induction algorithm returned the decision algorithm consisting of the 18 rules listed below; "S:" at the end of each rule indicates its support.
Set 1 Decision Algorithm:
R1. (że≥0.012844) ⇒ (auth≤prus) S:9
R2. (na≥0.028275) ⇒ (auth≤prus) S:1
R3. (nie≥0.022207) AND (za≥0.007705) ⇒ (auth≤prus) S:2
R4. (i≥0.039825) AND (z≥0.022302) ⇒ (auth≤prus) S:1
R5. (to≥0.018739) AND (że≥0.010274) ⇒ (auth≤prus) S:1
R6. (że≥0.011673) AND (w≥0.019455) ⇒ (auth≤prus) S:2
R7. (za≥0.00945) ⇒ (auth≤prus) S:4
R8. (ale≥0.007277) AND (na≥0.018778) AND (z≥0.014983) AND (że≥0.010274) ⇒ (auth≤prus) S:2
R9. (że≤0.006993) ⇒ (auth≥sien) S:5
R10. (na≤0.012108) ⇒ (auth≥sien) S:1
R11. (za≤0.005213) AND (w≤0.015245) ⇒ (auth≥sien) S:3
R12. (że≤0.009128) AND (z≤0.013046) ⇒ (auth≥sien) S:2
R13. (nie≤0.015385) AND (za≤0.006989) ⇒ (auth≥sien) S:2
R14. (ale≤0.005179) AND (za≤0.004108) ⇒ (auth≥sien) S:1
R15. (że≤0.008004) AND (ale≤0.005179) ⇒ (auth≥sien) S:4
R16. (że≤0.011685) AND (na≤0.016228) AND (ale≤0.00779) ⇒ (auth≥sien) S:1
R17. (że≤0.01) AND (i≤0.036474) AND (to≤0.009878) AND (na≤0.0232) ⇒ (auth≥sien) S:1
R18. (że≤0.01) AND (i≤0.0328) AND (za≤0.0076) ⇒ (auth≥sien) S:5
For DT2 (syntactic markers) there are just two reducts, one comprising a single attribute, a semicolon, and another containing two, a quotation mark and a colon. The decision algorithm includes 5 decision rules.
Set 2 Decision Algorithm:
R1. (semico≥0.099631) ⇒ (auth≤prus) S:19
R2. (fullst≥0.006084) ⇒ (auth≤prus) S:1
R3. (semico≤0.075456) ⇒ (auth≥sien) S:17
R4. (comma≤0.09392) AND (bracke≤0.0) AND (colon≤0.013417) ⇒ (auth≥sien) S:1
R5. (colon≤0.005965) AND (exclam≤0.003509) ⇒ (auth≥sien) S:3
Investigation of these decision algorithms reveals that in DA1 all 9 attributes remained, while in DA2 six out of the initial 8 attributes are present.
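Applying such a decision algorithm amounts to checking each rule's conditions against the marker frequencies of a sample and combining the verdicts of the matching rules. The sketch below is an illustration only: it uses the Set 2 rules listed above, and the support-weighted majority with ties reported as undecided follows the verdict scheme described in the next section, not necessarily the exact implementation used by the author.

```python
# Each rule: (list of (attribute, relation, threshold) conditions, verdict, support).
SET2_RULES = [
    ([("semico", ">=", 0.099631)], "prus", 19),
    ([("fullst", ">=", 0.006084)], "prus", 1),
    ([("semico", "<=", 0.075456)], "sien", 17),
    ([("comma", "<=", 0.09392), ("bracke", "<=", 0.0), ("colon", "<=", 0.013417)], "sien", 1),
    ([("colon", "<=", 0.005965), ("exclam", "<=", 0.003509)], "sien", 3),
]

def classify(sample, rules):
    """Return 'prus', 'sien' or 'undecided' for a dict of punctuation frequencies."""
    votes = {}
    for conditions, verdict, support in rules:
        ok = all((sample[a] >= t) if rel == ">=" else (sample[a] <= t)
                 for a, rel, t in conditions)
        if ok:
            # matching rules vote, with their supports used as weights
            votes[verdict] = votes.get(verdict, 0) + support
    if not votes:
        return "undecided"
    best = max(votes.values())
    winners = [v for v, s in votes.items() if s == best]
    return winners[0] if len(winners) == 1 else "undecided"

sample = {"semico": 0.05, "fullst": 0.004, "comma": 0.08, "bracke": 0.0,
          "colon": 0.004, "exclam": 0.002}
print(classify(sample, SET2_RULES))   # matches R3, R4 and R5 -> 'sien'
```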
5 Obtained Results

The efficiency of both rule-based classifiers obtained was verified with the testing samples. The results are given (Table 1) in three categories of the total verdict per sample: correct classification, incorrect classification and undecided, in relation to the total number of testing examples. In cases of several partial classification verdicts from different constituent decision rules, the final verdict for a sample was based on the majority of verdicts and the support of the verdicts, and in the case of a tie the sample is classified as undecided. The percentage classification accuracy for each algorithm is also listed. From the results it is immediately apparent that syntactic markers used as authorial invariants outperform lexical ones to a significant degree. The lexical classifier has much lower classification accuracy and offers no reduction of the original set of attributes, which is one of the great advantages of rough set methodology. The syntactic classifier is satisfactory, but there is still room for improvement. For both decision algorithms all undecided classification verdicts correspond to situations when for the testing samples there were no matching rules in the decision algorithms. Generation of all decision rules (instead of just the minimal cover returned by the DOMLEM algorithm) did not, however, enhance the classification accuracy of either classifier, resulting only in longer and more complex algorithms. This suggests that in order to improve the results the input decision tables should be constructed from more diversified samples than in the research performed, as this could lead to obtaining more flexible algorithms.

Table 1. Classification results

Classification verdict      DA 1    DA 2
correct                     24/40   33/40
incorrect                    9/40    4/40
undecided                    5/40    3/40
Classification accuracy      60%    82.5%
6 Conclusions

The results of the authorship attribution studies obtained with DRSA presented in this paper lead to the conclusion that the applied methodology makes it possible to construct a classifier with satisfactory accuracy when based on syntactic descriptors in the form of punctuation marks, thus allowing them to be used successfully as authorial invariants. On the other hand, lexical descriptors perform much worse and do not fully exploit the data reduction offered by the DRSA methodology. Yet even the syntactic classifier could be further enhanced, possibly by trying other descriptors as writer invariants and by training also with texts from short stories, as they are likely to have less uniform distributions of the selected features, which can help to tune the classifier.

Acknowledgements. The software used in the research to obtain the frequencies of descriptors for texts was implemented by P. Cichon under the supervision of K. Cyran, in fulfilment of the requirements for an MSc thesis. The 4eMka software used in the search for reducts and decision rules is a system for multicriteria decision support integrating the dominance relation with rough approximation [5], [4]. The software is available at the website of the Laboratory of Intelligent Decision Support Systems, Institute of Computing Science, Poznan University of Technology (http://www-idss.cs.put.poznan.pl/).
References
1. Argamon, S., Karlgren, J., Shanahan, J.G.: Stylistic analysis of text for information access. In: Proceedings of the 28th International ACM Conference on Research and Development in Information Retrieval, Brazil (2005)
2. Cyran, K.A., Mrozek, A.: Rough sets in hybrid methods for pattern recognition. International Journal of Intelligent Systems 16, 149–168 (2001)
3. Cyran, K.A., Stanczyk, U.: Indiscernibility relation for continuous attributes: application in image recognition. In: Kryszkiewicz, M., Peters, J.F., Rybinski, H., Skowron, A. (eds.) RSEISP 2007. LNCS (LNAI), vol. 4585, pp. 726–735. Springer, Heidelberg (2007)
4. Greco, S., Matarazzo, B., Slowinski, R.: Handling missing values in rough set analysis of multi-attribute and multi-criteria decision problems. In: Zhong, N., Skowron, A., Ohsuga, S. (eds.) RSFDGrC 1999. LNCS (LNAI), vol. 1711, pp. 146–157. Springer, Heidelberg (1999)
5. Greco, S., Matarazzo, B., Slowinski, R.: The use of rough sets and fuzzy sets in MCDM. In: Gal, T., Hanne, T., Stewart, T. (eds.) Advances in Multiple Criteria Decision Making, ch. 14, pp. 14.1–14.59. Kluwer Academic Publishers, Dordrecht (1999)
6. Greco, S., Matarazzo, B., Slowinski, R.: Dominance-based rough set approach as a proper way of handling graduality in rough set theory. In: Peters, J.F., Skowron, A., Marek, V.W., Orłowska, E., Słowiński, R., Ziarko, W.P. (eds.) Transactions on Rough Sets VII. LNCS, vol. 4400, pp. 36–52. Springer, Heidelberg (2007)
7. Pawlak, Z.: Rough set rudiments. Technical report, Institute of Computer Science Report, Warsaw University of Technology, Warsaw, Poland (1996)
8. Peng, R.D., Hengartner, H.: Quantitative analysis of literary styles. The American Statistician 56(3), 15–38 (2002)
9. Slowinski, R., Greco, S., Matarazzo, B.: Dominance-based rough set approach to reasoning about ordinal data. In: Kryszkiewicz, M., Peters, J.F., Rybinski, H., Skowron, A. (eds.) RSEISP 2007. LNCS (LNAI), vol. 4585, pp. 5–11. Springer, Heidelberg (2007)
Intuitionistic Fuzzy Observations in Local Optimal Hierarchical Classifier Robert Burduk Chair of Systems and Computer Networks, Wroclaw University of Technology, Wybrzeze Wyspianskiego 27, 50-370 Wroclaw, Poland [email protected]
Summary. The paper deals with the multistage recognition task. In this recognition problem Bayesian statistics is applied. The model of classification is based on the notion of intuitionistic fuzzy sets. The probability of misclassification is derived for a classifier under the assumption that the features are class-conditionally statistically independent and that we have intuitionistic fuzzy information on the object features instead of exact information. The decision rules minimize the mean risk, that is the mean value of the zero-one loss function. Additionally, we consider the locally optimal hierarchical classifier.
1 Introduction

The classification error is the ultimate measure of the performance of a classifier. Competing classifiers can also be evaluated based on their error probabilities. Several studies have previously described the Bayes probability of error for a single-stage classifier [1], [4] and for a hierarchical classifier [8], [11]. Since Zadeh introduced fuzzy sets in 1965 [15], many new approaches and theories treating imprecision and uncertainty have been proposed [7], [12]. In 1986, Atanassov [2] introduced the concept of an intuitionistic fuzzy set. This idea, which is a natural generalization of a standard fuzzy set, seems to be useful in modelling many real life situations, like logic programming [3], decision making problems [13], [14], etc. In papers [5], [6] probability measures for intuitionistic fuzzy events have been presented. In this paper, we consider the problem of classification for the case in which the observations of the features are represented by intuitionistic fuzzy sets, the features are class-conditionally statistically independent, and a Bayes rule is used. We consider the locally optimal strategy of the multistage recognition task. The contents of the work are as follows. Section 2 introduces the necessary background and describes the Bayes hierarchical classifier. In section 3 an introduction to intuitionistic fuzzy sets is presented. In section 4 we present the difference between the probability of misclassification for intuitionistic fuzzy and crisp data in the Bayes local hierarchical classifier.
2 Bayes Hierarchical Classifier

In the paper [11] the Bayesian hierarchical classifier is presented. The synthesis of a multistage classifier is a complex problem. It involves specifying the following components:
• the decision logic, i.e. the hierarchical ordering of classes,
• the features used at each stage of decision,
• the decision rules (strategy) for performing the classification.
This paper is devoted only to the last problem. This means that we will only consider the presentation of decision algorithms, assuming that both the tree structure and the features used at each non-terminal node have been specified. The procedure in the Bayesian hierarchical classifier consists of the following sequence of operations, presented in Fig. 1. At the first stage, some specific features x0 are measured. They are chosen from among all accessible features x, which describe the pattern that will be classified. These data constitute a basis for making a decision i1. This decision, being the result of recognition at the first stage, defines a certain subset in the set of all classes and simultaneously indicates the features xi1 (from among x) which should be measured in order to make a decision at the next stage. Now at the second stage, features xi1 are measured, which together with i1 are a basis for making the next decision i2. This decision – like i1 – indicates the features xi2 that are necessary to make the next decision (at the third stage, as in the previous stage), which in turn defines a certain subset of classes, not in the set of all classes, but in the subset indicated by the decision i2, and so on. The whole procedure ends at the N-th stage, where the decision made, iN, indicates a single class, which is the final result of multistage recognition. Thus multistage recognition means a successive narrowing of the set of potential classes from stage to stage, down to a single class, simultaneously indicating at every stage the features that should be measured to make the next decision in a more precise manner.

Fig. 1. Block diagram of the hierarchical classifier (measurements of features x0, xi1, ..., xiN−1 interleaved with decision rules producing the decisions i1, i2, ..., iN)
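A minimal sketch of this sequential procedure is given below; the tree structure, the per-node feature lists and the node decision rules are hypothetical placeholders, since the paper deliberately leaves them unspecified at this point.

```python
def multistage_classify(x, root, children, features_at, decision_rule):
    """Traverse the decision tree of a hierarchical classifier.

    x             -- dict with all accessible features of the object
    root          -- root node label
    children      -- dict: interior node -> list of immediate descendant nodes
    features_at   -- dict: interior node -> list of feature names measured at that node
    decision_rule -- callable (node, measured_features) -> chosen descendant node
    """
    node = root
    while node in children:                       # interior node: keep narrowing the class set
        measured = {f: x[f] for f in features_at[node]}
        node = decision_rule(node, measured)
    return node                                   # terminal node = final class

# toy two-stage example with hypothetical rules
children = {"root": ["A", "B"], "A": ["c1", "c2"], "B": ["c3", "c4"]}
features_at = {"root": ["x0"], "A": ["x1"], "B": ["x2"]}
rule = lambda node, m: children[node][0] if sum(m.values()) < 0.5 else children[node][1]
print(multistage_classify({"x0": 0.2, "x1": 0.9, "x2": 0.1}, "root", children, features_at, rule))
```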
2.1 Decision Problem Statement
Let us consider a pattern recognition problem in which the number of classes equals M. Let us assume that the classes are organised in an (N + 1) horizontal decision tree. Let us number all the nodes of the constructed decision tree with consecutive numbers 0, 1, 2, . . . , reserving 0 for the root node, and let us assign numbers of classes from the set $\mathcal{M} = \{1, 2, \ldots, M\}$ to the terminal nodes so that each one of them can be labelled with the number of the class connected with that node. This allows us to introduce the following notation:
• $\mathcal{M}(n)$ – the set of nodes whose distance from the root is $n$, $n = 0, 1, 2, \ldots, N$. In particular $\mathcal{M}(0) = \{0\}$, $\mathcal{M}(N) = \mathcal{M}$,
• $\overline{\mathcal{M}} = \bigcup_{n=0}^{N-1} \mathcal{M}(n)$ – the set of interior (non-terminal) nodes,
• $\mathcal{M}^i \subseteq \mathcal{M}(N)$ – the set of class labels attainable from the $i$-th node ($i \in \overline{\mathcal{M}}$),
• $\mathcal{M}_i$ – the set of immediate descendant nodes of node $i$ ($i \in \overline{\mathcal{M}}$),
• $m_i$ – the node of the direct predecessor of the $i$-th node ($i \neq 0$),
• $s(i)$ – the set of nodes on the path from the root node to the $i$-th node, $i \neq 0$.
We will continue to adopt the probabilistic model of the recognition problem, i.e. we will assume that the class label of the pattern being recognised, $j_N \in \mathcal{M}(N)$, and its observed features $x$ are realisations of a pair of random variables $J_N$ and $X$. The complete probabilistic information denotes the knowledge of the a priori probabilities of the classes:

$$p(j_N) = P(J_N = j_N), \qquad j_N \in \mathcal{M}(N) \qquad (1)$$

and the class-conditional probability density functions:

$$f_{j_N}(x) = f(x/j_N), \qquad x \in X, \quad j_N \in \mathcal{M}(N). \qquad (2)$$

Let

$$x_i \in X_i \subseteq R^{d_i}, \qquad d_i \leq d, \qquad i \in \overline{\mathcal{M}} \qquad (3)$$

denote the vector of features used at the $i$-th node, which have been selected from the vector $x$. Our aim is now to calculate the so-called multistage recognition strategy $\pi_N = \{\Psi_i\}_{i \in \overline{\mathcal{M}}}$, that is the set of recognition algorithms of the form:

$$\Psi_i : X_i \rightarrow \mathcal{M}_i, \qquad i \in \overline{\mathcal{M}}. \qquad (4)$$
Formula (4) is a decision rule (recognition algorithm) used at the $i$-th node that maps the observation subspace to the set of immediate descendant nodes of the $i$-th node. Analogously, decision rule (4) partitions the observation subspace $X_i$ into disjoint decision regions $D_{x_i}^k$, $k \in \mathcal{M}_i$, such that observation $x_i$ is allocated to the node $k$ if $x_i \in D_{x_i}^k$, namely:

$$D_{x_i}^k = \{x_i \in X_i : \Psi_i(x_i) = k\}, \qquad k \in \mathcal{M}_i, \quad i \in \overline{\mathcal{M}}. \qquad (5)$$
Our aim is to minimise the mean risk function (the probability of misclassification) denoted by:

$$R^*(\pi_N) = \min_{\Psi_{i_n},\ldots,\Psi_{i_{N-1}}} R(\pi_N) = \min_{\Psi_{i_n},\ldots,\Psi_{i_{N-1}}} E[L(I_N, J_N)], \qquad (6)$$

where $\pi_N$ is the strategy of the decision-tree classifier, i.e. the set of classification rules used at the particular nodes.
3 Basic Notions of Intuitionistic Fuzzy Events

As opposed to a fuzzy set $A'$ in $X$, given by:

$$A' = \{\langle x, \mu_{A'}(x)\rangle : x \in X\} \qquad (7)$$

where $\mu_{A'} : X \rightarrow [0, 1]$ is the membership function of the fuzzy set $A'$, an intuitionistic fuzzy set $A$ in $X$ is given by:

$$A = \{\langle x, \mu_A(x), \nu_A(x)\rangle : x \in X\} \qquad (8)$$

where $\mu_A : X \rightarrow [0, 1]$ and $\nu_A : X \rightarrow [0, 1]$ with the condition

$$0 \leq \mu_A(x) + \nu_A(x) \leq 1 \qquad \forall x \in X \qquad (9)$$

and the numbers $\mu_A(x), \nu_A(x) \in [0, 1]$ denote the degree of membership and non-membership of $x$ to $A$, respectively. The difference

$$\pi_A(x) = 1 - \mu_A(x) - \nu_A(x) \qquad (10)$$

is called an intuitionistic index, and the number $\pi_A(x) \in [0, 1]$ is treated as a measure of the hesitancy connected with the appreciation of the degree of membership or non-membership of an element $x$ to the set $A$. In [15] the concept of a fuzzy event was first joined with probability. The probability of a fuzzy event in Zadeh's form is given by:

$$P(A') = \int_{\mathbb{R}^d} \mu_{A'}(x) f(x)\, dx. \qquad (11)$$

The probability $P(A')$ of a fuzzy event $A'$ defined by (11) is a crisp number in the interval [0, 1]. The minimal probability of an intuitionistic fuzzy event $A$ is equal to:

$$P_{\min}(A) = \int_{\mathbb{R}^d} \mu_A(x) f(x)\, dx. \qquad (12)$$
The maximal probability of an intuitionistic fuzzy event $A$ is equal to:

$$P_{\max}(A) = P_{\min}(A) + \int_{\mathbb{R}^d} \pi_A(x) f(x)\, dx \qquad (13)$$

so the probability of an event $A$ is a number from the interval $[P_{\min}(A), P_{\max}(A)]$. In [6] the probability of an intuitionistic fuzzy event $A$ is proposed as a crisp number from the interval [0, 1]:

$$P(A) = \int_{\mathbb{R}^d} \frac{\mu_A(x) + 1 - \nu_A(x)}{2}\, f(x)\, dx. \qquad (14)$$

In the paper [6] it was shown that formula (14) satisfies all the classical properties of probability in the theory of Kolmogorov. In our considerations we will use the simple notation for the probability of an intuitionistic fuzzy event $A$:

$$P(A) = \int_{\mathbb{R}^d} \tau_A(x) f(x)\, dx, \qquad (15)$$

where $\tau_A(x) = \frac{\mu_A(x) + 1 - \nu_A(x)}{2}$.

Let us consider intuitionistic fuzzy information. The intuitionistic fuzzy information $\mathcal{A}_k$ on $x_k$, $k = 1, \ldots, d$, is a set of intuitionistic fuzzy events $\mathcal{A}_k = \{A_k^1, A_k^2, \ldots, A_k^{n_k}\}$ characterised by membership and non-membership functions:

$$\mathcal{A}_k = \{\langle\mu_{A_k^1}(x_k), \nu_{A_k^1}(x_k)\rangle, \ldots, \langle\mu_{A_k^{n_k}}(x_k), \nu_{A_k^{n_k}}(x_k)\rangle\}. \qquad (16)$$

The value of the index $n_k$ defines the possible number of intuitionistic fuzzy events for $x_k$. In addition, assume that for each observation subspace $x_k$ the set of all available intuitionistic fuzzy observations (16) satisfies the orthogonality constraint:

$$\sum_{l=1}^{n_k} \frac{\mu_{A_k^l}(x_k) + 1 - \nu_{A_k^l}(x_k)}{2} = 1 \qquad \forall x \in X. \qquad (17)$$

When we use the probability of an intuitionistic fuzzy event given by (15) and the constraint (17) holds, it is clear that we get $\sum_{l=1}^{n_k} P(A_k^l) = 1$.
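As a numeric illustration of formulas (12) and (15) (not taken from the paper), the sketch below evaluates P_min(A) and P(A) for a single intuitionistic fuzzy event shaped like A^1 of the example in Section 5, shifted to overlap a standard normal density; restricting the integration to the support of the event is an assumption.

```python
import numpy as np

# One intuitionistic fuzzy event with support [-1, 1], overlapping a standard normal density.
a = -1.0
x = np.linspace(a, a + 2.0, 4001)              # integration restricted to the support (assumption)
dx = x[1] - x[0]
mu = np.where(x <= a + 1.0, (x - a) ** 2, (x - a - 2.0) ** 2)
nu = (x - a - 1.0) ** 2
f = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)   # example class-conditional density

tau = 0.5 * (mu + 1.0 - nu)                    # tau_A(x) as in formula (15)
P_min = float(np.sum(mu * f) * dx)             # formula (12)
P = float(np.sum(tau * f) * dx)                # formula (15); P_min <= P <= P_max
print(round(P_min, 4), round(P, 4))
```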
4 Local Optimal Strategy

Applying a procedure similar to [11] and using the zero-one loss function, we obtain the sought locally optimal strategy with decision algorithms of the form:

$$\Psi^*_{i_n}(A_{i_n}) = i_{n+1} \quad \text{if} \quad i_{n+1} = \arg\max_{k \in \mathcal{M}_{i_n}} p(k) \int_{X_{i_n}} \tau_{A_{i_n}}(x_{i_n}) f_k(x_{i_n})\, dx_{i_n}. \qquad (18)$$
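Rule (18) can be evaluated numerically once the priors, the class-conditional densities and the intuitionistic fuzzy observation are given; the one-dimensional example below is entirely hypothetical and only illustrates the arg-max over descendant nodes.

```python
import numpy as np

def local_decision(tau, densities, priors, x):
    """Rule (18): pick the descendant node k maximising p(k) * integral of tau_A(x) f_k(x) dx."""
    dx = x[1] - x[0]
    scores = {k: priors[k] * float(np.sum(tau * densities[k]) * dx) for k in densities}
    return max(scores, key=scores.get)

# hypothetical one-dimensional setting with two descendant nodes k1 and k2
x = np.linspace(-6.0, 10.0, 8001)
norm = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
densities = {"k1": norm(0.0, 2.0), "k2": norm(3.0, 2.0)}
priors = {"k1": 0.5, "k2": 0.5}
tau = np.clip(1.0 - np.abs(x - 2.5), 0.0, 1.0)      # an intuitionistic fuzzy observation centred at 2.5
print(local_decision(tau, densities, priors, x))    # -> 'k2'
```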
The probability of error $Pe(i_n)$ in node $i_n$ for crisp data, under the condition that the recognised objects belong to the classes in the set $\mathcal{M}^{i_n}$, is given by [10]:

$$Pe(i_n) = 1 - \sum_{j_{n+1} \in \mathcal{M}_{i_n}} \frac{p(j_{n+1})}{p(i_n)}\, q(j_{n+1}/i_n, j_{n+1}). \qquad (19)$$

Similarly, if (17) holds, the probability of error $Pe_{IF}(i_n)$ in node $i_n$ for intuitionistic fuzzy data is the following:

$$Pe_{IF}(i_n) = 1 - \sum_{j_{n+1} \in \mathcal{M}_{i_n}} \frac{p(j_{n+1})}{p(i_n)} \sum_{A_{i_n} \in D_{x_{i_n}}^{(j_{n+1})}} \int \tau_{A_{i_n}}(x_{i_n}) f_{j_{n+1}}(x_{i_n})\, dx_{i_n}. \qquad (20)$$

When we use intuitionistic fuzzy information on the object features instead of exact information, the classification accuracy deteriorates. The difference between the probabilities of misclassification for the intuitionistic fuzzy data, $Pe_{IF}(i_n)$, and crisp data, $Pe(i_n)$, in an individual node $i_n$ is the following:

$$Pe_{IF}(i_n) - Pe(i_n) = \sum_{A_{i_n} \in X_{i_n}} \left( \int \tau_{A_{i_n}}(x_{i_n}) \max_{j_{n+1} \in \mathcal{M}_{i_n}} \frac{p(j_{n+1})}{p(i_n)}\, f_{j_{n+1}}(x_{i_n})\, dx_{i_n} - \max_{j_{n+1} \in \mathcal{M}_{i_n}} \left\{ \frac{p(j_{n+1})}{p(i_n)} \int \tau_{A_{i_n}}(x_{i_n}) f_{j_{n+1}}(x_{i_n})\, dx_{i_n} \right\} \right). \qquad (21)$$
5 Illustrative Example

Let us consider a two-stage binary classifier. Four classes have identical a priori probabilities equal to 0.25. We use 3-dimensional data $x = [x^{(1)}, x^{(2)}, x^{(3)}]$, where the class-conditional probability density functions are normally distributed. For performing the classification at the root node 0 the first coordinate was used, and the components $x^{(2)}$ and $x^{(3)}$ were used at the nodes 5 and 6, respectively. The covariance matrices are equal for every class, $\Sigma_{j_2} = 4I$, $j_2 \in \mathcal{M}(2)$, and the expected values are the following: $\mu_1 = [0, 0, 0]$, $\mu_2 = [0, 4, 0]$, $\mu_3 = [3, 0, 1]$, $\mu_4 = [3, 0, 8]$. In the experiments, the following sets of intuitionistic fuzzy numbers were used:

case A: $A = \{A^1, A^2, \ldots, A^{24}\}$, where

$$\mu_{A^1}(x) = \begin{cases} (x+9)^2 & \text{for } x \in [-9, -8], \\ (x+7)^2 & \text{for } x \in [-8, -7], \\ 0 & \text{otherwise,} \end{cases} \qquad \nu_{A^1}(x) = \begin{cases} (x+8)^2 & \text{for } x \in [-9, -7], \\ 0 & \text{otherwise,} \end{cases}$$

$$\mu_{A^{24}}(x) = \begin{cases} (x-14)^2 & \text{for } x \in [14, 15], \\ (x-16)^2 & \text{for } x \in [15, 16], \\ 0 & \text{otherwise,} \end{cases} \qquad \nu_{A^{24}}(x) = \begin{cases} (x-15)^2 & \text{for } x \in [14, 16], \\ 0 & \text{otherwise,} \end{cases}$$

case B: $B = \{B^1, B^2, \ldots, B^{49}\}$, where

$$\mu_{B^1}(x) = \begin{cases} 4(x+9)^2 & \text{for } x \in [-9, -8.5], \\ 4(x+8)^2 & \text{for } x \in [-8.5, -8], \\ 0 & \text{otherwise,} \end{cases} \qquad \nu_{B^1}(x) = \begin{cases} 4(x+8.5)^2 & \text{for } x \in [-9, -8], \\ 0 & \text{otherwise,} \end{cases}$$

$$\mu_{B^{49}}(x) = \begin{cases} 4(x-15)^2 & \text{for } x \in [15, 15.5], \\ 4(x-16)^2 & \text{for } x \in [15.5, 16], \\ 0 & \text{otherwise,} \end{cases} \qquad \nu_{B^{49}}(x) = \begin{cases} 4(x-15.5)^2 & \text{for } x \in [15, 16], \\ 0 & \text{otherwise.} \end{cases}$$

Table 1. Difference of misclassification in the locally optimal strategy

$Pe_{IF}(i_n) - Pe(i_n)$ in node
Case      0       5       6
A         0.005   0.017   0.001
B         0.005   0.004   0.001

Tab. 1 shows the difference between the probability of misclassification for intuitionistic fuzzy and non-fuzzy data in the locally optimal strategy of multistage classification, calculated from (21). These results are calculated for full probabilistic information. The results show a deterioration of the quality of classification when we use intuitionistic fuzzy information on the object features instead of exact information in the Bayes hierarchical classifier. We have to notice that the difference in misclassification for fuzzy and crisp data does not depend only on the intuitionistic fuzzy observations. In every case we obtain a different result for the internal nodes of the decision tree. The position of the class-conditional probability density in relation to the observed intuitionistic fuzzy features has an essential influence.
6 Conclusion

In the present paper we have concentrated on the Bayes optimal classifier. Assuming full probabilistic information, we have presented the difference between the probability of misclassification for intuitionistic fuzzy and crisp data. The illustrative example shows that the position of the class-conditional probability density in relation to the observed intuitionistic fuzzy features has an essential influence on the difference $Pe_{IF}(i_n) - Pe(i_n)$ in an individual node of the locally optimal hierarchical classifier.

Acknowledgements. This work is supported by The Polish State Committee for Scientific Research under a grant for the years 2006–2009.
References
1. Antos, A., Devroye, L., Gyorfi, L.: Lower bounds for Bayes error estimation. IEEE Trans. Pattern Analysis and Machine Intelligence 21, 643–645 (1999)
2. Atanassov, K.: Intuitionistic fuzzy sets. Fuzzy Sets and Systems 20, 87–96 (1986)
3. Atanassov, K., Georgeiv, C.: Intuitionistic fuzzy prolog. Fuzzy Sets and Systems 53, 121–128 (1993)
4. Avi-Itzhak, H., Diep, T.: Arbitrarily tight upper and lower bounds on the Bayesian probability of error. IEEE Trans. Pattern Analysis and Machine Intelligence 18, 89–91 (1996)
5. Gerstenkorn, T., Mańko, J.: Bifuzzy probability of intuitionistic sets. Notes on Intuitionistic Fuzzy Sets 4, 8–14 (1988)
6. Gerstenkorn, T., Mańko, J.: Probability of fuzzy intuitionistic sets. Busefal 45, 128–136 (1990)
7. Goguen, J.: L-fuzzy sets. Journal of Mathematical Analysis and Applications 18(1), 145–174 (1967)
8. Kulkarni, A.: On the mean accuracy of hierarchical classifiers. IEEE Transactions on Computers 27, 771–776 (1978)
9. Kuncheva, L.I.: Combining Pattern Classifiers: Methods and Algorithms. John Wiley, New York (2004)
10. Kurzyński, M.: Decision Rules for a Hierarchical Classifier. Pattern Recognition Letters 1, 305–310 (1983)
11. Kurzyński, M.: On the multistage Bayes classifier. Pattern Recognition 21, 355–365 (1988)
12. Pawlak, Z.: Rough sets and fuzzy sets. Fuzzy Sets and Systems 17, 99–102 (1985)
13. Szmidt, E., Kacprzyk, J.: Using intuitionistic fuzzy sets in group decision making. Control and Cybernetics 31(4), 1037–1053 (2002)
14. Szmidt, E., Kacprzyk, J.: A consensus-reaching process under intuitionistic fuzzy preference relations. International Journal of Intelligent Systems 18(7), 837–852 (2003)
15. Zadeh, L.A.: Probability measures of fuzzy events. Journal of Mathematical Analysis and Applications 23, 421–427 (1968)
Electrostatic Field Classifier for Deficient Data
Marcin Budka and Bogdan Gabrys
Computational Intelligence Research Group, Bournemouth University, School of Design, Engineering & Computing, Poole House, Talbot Campus, Fern Barrow, Poole BH12 5BB, United Kingdom
[email protected], [email protected]
Summary. This paper investigates the suitability of recently developed models based on the physical field phenomena for classification of incomplete datasets. An original approach to exploiting incomplete training data with missing features and labels, involving extensive use of electrostatic charge analogy has been proposed. Classification of incomplete patterns has been investigated using a local dimensionality reduction technique, which aims at exploiting all available information rather than trying to estimate the missing values. The performance of all proposed methods has been tested on a number of benchmark datasets for a wide range of missing data scenarios and compared to the performance of some standard techniques.
1 Introduction

Physics of information has recently emerged as a popular theme. The research on quantum computing and the concept of 'it from bit' [13] have motivated researchers to exploit physical models for the design of learning machines. One example of such a learning model is the Information Theoretic Learning framework derived in [6], which enables online manipulation of entropy and mutual information by employing the concepts of Information Potential and Forces. Using higher order statistics of the probability density functions, the common 'Gaussianity' assumption has been lifted, resulting in efficient methods for problems like Blind Source Separation, nonlinear dimensionality reduction [11] or even training of MLPs without error backpropagation [6]. Another example are the Coulomb classifiers, which form a family of models based on an analogy to a system of charged conductors, trained by minimizing the Coulomb energy. The classifiers are in fact novel types of Support Vector Machines, comparable or even superior to standard SVMs [4]. The dynamic physical field analogy forms the basis of a universal machine learning framework derived in [9]. The Electrostatic Field Classifier is a model of particular interest. By exploiting a direct analogy with the electrostatic field, the approach treats all data samples as particles able to interact with each other. EFC has proven to be a robust solution [8] featuring a relatively high level of diversity, which makes it suitable for classifier fusion.
The missing data problem is typical for many research areas and there are many ways of dealing with it. For probabilistic models the Expectation Maximization algorithm [2] is the commonly used approach, which makes it possible to obtain maximum likelihood estimates of model parameters (e.g. parameters of mixture models). Maximum likelihood training of neural networks has been described in [12], where Radial Basis Networks are used to obtain a closed-form solution instead of expensive integration over the unknown data. There are also some missing data ideas presented in the statistics literature almost 30 years ago which are still in use today [7], and some methods like multiple imputation or the EM algorithm are still considered the state of the art [10]. A different approach, not requiring imputation of missing values and based on hyperbox fuzzy sets, has been presented in [3]. The architecture of the General Fuzzy Min-Max Neural Networks naturally supports incomplete datasets, exploiting all available information in order to reduce the number of viable alternatives before making the classification decision. The GFMM networks are also able to quantify the uncertainty caused by missing data. The above physically inspired models have been designed to handle complete data patterns only. This paper describes an extension of EFC to support incomplete data and is structured as follows. Section 2 contains a description of the data field models which are the subject of this paper. Section 3 has been devoted to the missing data problem, giving a brief overview of the issue, describing some traditional approaches to missingness and the data field model approach used in the experiments. The experimental results are given in section 4, while the conclusions can be found in section 5.
2 Data Field Model Classifiers

2.1 Gravity Field Classifier (GFC)
The simplest, attractive field model has been based on the gravity field [9]. The model treats all data patterns as charged particles and assumes a static field with all sources (training patterns) fixed to their initial positions. Denoting by $X$ a set of $n$ training samples and by $Y$ a set of $N$ test samples, the potential generated by the field source $x_i$ in the point $y_j$ is given by:

$$V_{ij} = -c\, s_i \frac{1}{r_{ij}} \qquad (1)$$

where $c$ is the field constant, $s_i$ is the source charge and $r_{ij}$ is some distance measure between $x_i$ and $y_j$. For simplicity and for the sake of conformity with the physical model, the Euclidean distance has been chosen. The superposition of the individual contributions of all training samples defines the field in any particular point of the input space as:

$$V_j = -c \sum_{i=1}^{n} \frac{s_i}{r_{ij}} \qquad (2)$$

and the overall potential energy in the point $y_j$ is then given by:

$$U_j = s_j V_j = -c\, s_j \sum_{i=1}^{n} \frac{s_i}{r_{ij}} \qquad (3)$$

Assuming that all samples have the same, unit charge, $s_i$ and $s_j$ can be dropped from the equations and the force exerted on $y_j$ by the field (the negative gradient of $U_j$) can be calculated as:

$$F_j = -c \sum_{i=1}^{n} \frac{y_j - x_i}{r_{ij}^3} \qquad (4)$$
The training dataset uniquely identifies the field, thus no training is required and all calculations are performed during classification. The potential results in a force able to move an unlabelled test sample until it finally meets one of the fixed field sources and shares its label. Only the force directions are followed, taking a small, fixed step d at a time (which makes the field constant c irrelevant). This allows problems in the vicinity of field sources to be avoided, as rij → 0 implies Fj → ∞. The field is then recalculated and the procedure repeats until all testing samples approach one of the sources at a distance equal to or lower than d and are labelled accordingly. Due to the fact that d is fixed in all dimensions, the data should be rescaled to fit within the 0–1 range. The lower bound of the distance is also set to d, to avoid division by zero and overshooting the sources. Note that all possible trajectories end up in one of the sources, which divides the space into distinct regions representing classes.
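A minimal sketch of this procedure, following formulas (1)–(4) with unit charges and c = 1, is given below; it is an illustration rather than the authors' implementation (e.g. the stopping rule and the handling of the lower distance bound are simplified).

```python
import numpy as np

def gfc_classify(X, labels, y, d=0.01, max_iter=10000):
    """Move the test point y along the field force (formula (4), c = s_i = 1) until it meets a source."""
    y = y.astype(float).copy()
    for _ in range(max_iter):
        diff = y - X                                    # (n, dim)
        r = np.maximum(np.linalg.norm(diff, axis=1), d) # lower bound on distances is d
        if r.min() <= d:                                # reached a source: share its label
            return labels[int(np.argmin(r))]
        F = -(diff / r[:, None] ** 3).sum(axis=0)       # attractive force of formula (4)
        y += d * F / np.linalg.norm(F)                  # follow only the direction, fixed step d
    return labels[int(np.argmin(np.linalg.norm(y - X, axis=1)))]

# toy usage on data rescaled to the 0-1 range
X = np.array([[0.1, 0.1], [0.2, 0.15], [0.8, 0.9], [0.9, 0.85]])
labels = np.array([0, 0, 1, 1])
print(gfc_classify(X, labels, np.array([0.3, 0.25])))   # -> 0
```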
2.2 Electrostatic Field Classifier (EFC)
During classification the GFC does not take advantage of the training data labels until the very end of the procedure. This information is thus wasted. The way to exploit it is to use the electrostatic field analogy, by introducing a repelling force into the model, so that samples from the same class attract each other, while samples from different classes repel. To facilitate interaction between the field and the unlabelled test samples, each of them must first be decomposed into a number of subsamples belonging to one of the target classes. This might be achieved by using the Parzen window density estimator, but in order not to introduce additional parameters, these partial memberships can also be assigned in proportion to the GFC potential of all classes in the test point. Denoting by $L$ the vector of field source labels and by $V_j^k$ the potential generated by the $k$-th class in the point $y_j$, the partial membership $p_{jk}$ is given by:

$$p_{jk} = \frac{\left|V_j^k\right|}{\sum_{i=1}^{C} \left|V_j^i\right|} \qquad (5)$$

while the overall potential in the point $y_j$ can be calculated as:
$$V_j = \sum_{i=1}^{n} \frac{\left(\sum_{k \neq L_i} p_{jk}\right) - p_{jL_i}}{r_{ij}} = \sum_{i=1}^{n} \frac{1 - 2 p_{jL_i}}{r_{ij}} \qquad (6)$$

The resultant force calculation formula then becomes:

$$F_j = \sum_{i=1}^{n} \left(1 - 2 p_{jL_i}\right) \frac{y_j - x_i}{r_{ij}^3} \qquad (7)$$
Note, however, that if there are more than two classes, the repelling force may dominate the field, as it would come from multiple classes, while the attracting force would come from only one. According to [9], to restore the balance between repelling and attracting forces it is sufficient to satisfy the condition:

$$\sum_{j=1}^{N} V_j = \sum_{j=1}^{N} \sum_{i=1}^{n} \frac{1 - q\, p_{jL_i}}{r_{ij}} = 0 \qquad (8)$$
by estimating the value of regularization coefficient q, which controls the balance between the total amount of attracting and repelling force in the field. The classification process follows the same rules as in the case of gravity field model. As expected, classification performance and generalisation properties are better than in the case of the simpler model due to improved class separation and smoother decision boundaries [9]. In some applications the EFC tends to suffer from a number of issues – the excess of repelling force resulting in divergence of the algorithm, poor classification performance in higher dimensional spaces or needlessly slow convergence. Some improvements have been introduced to counteract those problems, but due to lack of presentation space they are not discussed here.
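The following sketch puts formulas (5)–(8) together: class-wise gravity potentials give the partial memberships, the balance coefficient q is obtained from condition (8), and the signed force of type (7) is computed with q in place of the factor 2. It is only an approximation of the described model; the improvements mentioned above are not reproduced.

```python
import numpy as np

def efc_forces(X, labels, Y, d=0.01):
    """Signed forces acting on test points Y, combining formulas (5)-(8)."""
    classes = np.unique(labels)
    diff = Y[:, None, :] - X[None, :, :]                        # (N, n, dim)
    r = np.maximum(np.linalg.norm(diff, axis=2), d)             # (N, n)

    # partial memberships p_jk from class-wise gravity potentials, formula (5)
    V_class = np.stack([(1.0 / r[:, labels == c]).sum(axis=1) for c in classes], axis=1)
    p = V_class / V_class.sum(axis=1, keepdims=True)            # (N, n_classes)
    p_own = p[:, np.searchsorted(classes, labels)]              # p_{jL_i}, shape (N, n)

    # balance coefficient q estimated from condition (8)
    q = (1.0 / r).sum() / (p_own / r).sum()

    # signed force of type (7), with q replacing the factor 2
    weight = 1.0 - q * p_own
    F = (weight[:, :, None] * diff / r[:, :, None] ** 3).sum(axis=1)
    return F, q

X = np.array([[0.1, 0.1], [0.2, 0.15], [0.8, 0.9], [0.9, 0.85]])
labels = np.array([0, 0, 1, 1])
Y = np.array([[0.3, 0.25], [0.6, 0.7]])
F, q = efc_forces(X, labels, Y)
print(q, F.shape)
```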
3 Handling Incomplete Data

3.1 The Missing Data Problem

The missing data problem is typical for many research areas. The reason for missingness has important implications, thus data can be divided into [7]:
• Missing Completely At Random (MCAR), if the probability that the particular feature is missing is not related to the missing value or to the values of any other features. The best kind of missingness one can hope for [5].
• Missing At Random (MAR), if the probability that the particular feature is missing is not related to its own value but is related to the values of other features. There is no way to test if MAR holds, but usually a false assumption of MAR may have only a minor influence on the result [10].
• Missing Not At Random (MNAR), if the probability that some feature is missing is a function of this feature's value. The missingness should be somehow modelled in this case, but the model for missingness is rarely known, which makes the whole procedure a difficult and application specific task [5].
EFC is a purely data-driven approach, thus the type of missingness does not directly influence its operation, although it can influence the results. This dependency is however not investigated here and MCAR data was assumed.
3.2 Basic Approaches to Missingness
There exist some basic approaches to the missing data problem, based on editing. In statistical inference, the usefulness of most of them is limited to a number of specific cases [5, 10], but their performance within the data field framework is in some cases quite reasonable. The techniques are:
• Casewise deletion, which simply excludes incomplete data samples from further analysis. If their number is relatively small, this technique performs surprisingly well. This method has been extensively used in the experiments in section 4 as a base for performance comparison with other methods.
• Pairwise deletion, which also ignores missing data but, instead of dropping incomplete samples, uses only the features which are present. A similar method forms the basis of the data field specific approach to classification of incomplete patterns, as described in the following subsection.
• Mean substitution, which replaces all missing features with appropriate mean values. Although commonly criticized in the literature [5, 10], as shown in section 4, in many cases class conditional mean imputation turns out to perform very well in conjunction with the data field model specific approach.
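For reference, class conditional mean substitution can be sketched in a few lines; this is a generic illustration, not the exact preprocessing used in the experiments.

```python
import numpy as np

def class_conditional_mean_impute(X, y):
    """Replace NaNs in training data X with the mean of the corresponding feature within the class."""
    X = X.copy()
    for c in np.unique(y):
        rows = (y == c)
        means = np.nanmean(X[rows], axis=0)          # per-feature class means, ignoring NaNs
        idx = np.where(np.isnan(X) & rows[:, None])
        X[idx] = means[idx[1]]
    return X

X = np.array([[1.0, np.nan], [2.0, 4.0], [np.nan, 10.0], [5.0, 12.0]])
y = np.array([0, 0, 1, 1])
print(class_conditional_mean_impute(X, y))
```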
3.3 EFC Approach to Missingness
Various modifications were required to facilitate the handling of deficient data within the EFC framework. The modifications include the distance calculation, force calculation and label assignment routines and are discussed below.
• Classification of incomplete data. To exploit all available information, EFC acts on the incomplete test sample working only in the available dimensions – the feature space dimensionality is locally reduced. As a result, distance and force calculations take place in the reduced feature space and the test sample is only able to move within it. The pattern can also no longer simply share the class of the nearest source, as it might lead to ambiguity. Instead, a soft output is produced, proportional to the class conditional density for the current position of the test sample in the reduced feature space.
• Learning from incomplete data. The missing feature scenario can be addressed by reintroducing the charge concept – if the charge is allowed to vary between field sources, the incomplete training patterns can be exploited by an intelligent charge redistribution mechanism. The algorithm starts by assigning a unit charge to all training samples. It then examines each incomplete sample in turn and redistributes its charge among all complete patterns from the same class in proportion to their distance. Thus the closer the complete sample is, the more charge it receives. After all incomplete patterns are processed they are dropped and the remaining samples become the field sources. The missing labels scenario has been addressed in 2 different ways – by treating unlabelled samples as gravity field sources (GFC fallback) or by redistributing the charge among all complete field sources regardless of their class.
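A sketch of the charge redistribution step is given below; since the text only says that the charge is redistributed "in proportion to their distance", the inverse-distance weighting used here is an assumption, as is the restriction of the distance computation to the available features.

```python
import numpy as np

def redistribute_charge(X, y, complete_mask, eps=1e-9):
    """Drop incomplete training samples and move their unit charge onto complete ones.

    X             -- array (n, dim) with NaNs marking missing features
    y             -- class labels
    complete_mask -- boolean array, True for samples with no missing features
    Returns (X_sources, y_sources, charges) for the resulting field sources.
    """
    charges = np.ones(complete_mask.sum())
    Xc, yc = X[complete_mask], y[complete_mask]
    for i in np.where(~complete_mask)[0]:
        same = (yc == y[i])                                   # complete samples of the same class
        feats = ~np.isnan(X[i])                               # compare only on available features
        dist = np.linalg.norm(Xc[same][:, feats] - X[i, feats], axis=1) + eps
        w = (1.0 / dist) / (1.0 / dist).sum()                 # closer samples receive more charge
        charges[same] += w                                    # redistribute this sample's unit charge
    return Xc, yc, charges

X = np.array([[0.1, 0.1], [0.2, 0.2], [0.15, np.nan], [0.8, 0.9]])
y = np.array([0, 0, 0, 1])
print(redistribute_charge(X, y, ~np.isnan(X).any(axis=1))[2])
```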
4 Experimental Results

The experiments included evaluation of the classification error for various levels of missing data on a number of benchmark datasets from the UCI Machine Learning Repository [1]. All recognition rates given have been averaged over 10 runs with randomly removed features and labels (MCAR). The results for the following scenarios are provided: (1) deficient test data, (2) deficient training data, (3) both types of data deficient with missing labels and (4) both types of data deficient with labels given. The results have been given in Tables 1 – 3. For all datasets, local dimensionality reduction significantly outperforms mean imputation in the incomplete test data scenario (1). The advantage margin steadily grows up to more than 27 percentage points (Table 1) as the deficiency level increases. The situation changes in favor of mean imputation when the training dataset has missing features (2). The performance of the method is always the best and in some cases (Iris and Wine) does not drop almost at all even at the highest deficiency level. This phenomenon must however be credited to the good class separation of the datasets, rather than to the EFC model. In the most difficult scenario (3) the proposed model always outperforms simple casewise deletion, although at a high deficiency level the performance margin is rather modest. Note, however, that the performance drop of the former is much smoother (for casewise deletion the lowest recognition rate can be reached at 0.2 deficiency already) and even at the maximum deficiency level allowed by the model, it is still better than random guessing.

Table 1. Iris dataset results for scenarios (1), (2) and (3)

deficiency type/level¹          0.0  0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1.0
(1) local dim reduction        95.5 94.7 94.4 93.8 91.6 89.4 88.4 84.0 83.8 82.4 77.5
(1) mean imputation            95.5 92.8 90.2 85.2 80.5 76.9 71.3 67.5 61.1 55.8 50.4
(2) charge redistribution      95.4 95.7 94.8 94.7 95.0 94.6 92.6 89.1 87.0 85.0 85.2
(2) mean imputation            95.4 95.3 95.1 94.7 94.5 95.4 94.0 93.5 93.0 92.0 92.3
(2) casewise deletion          95.4 95.3 93.9 91.8 90.7 90.6 86.2 78.1 74.8 72.5 75.6
(3) best methods²              95.1 94.2 93.7 92.3 90.9 86.9 85.9 82.4 80.9 71.1 73.5
(3) casewise deletion          95.1 93.5 93.0 89.7 85.8 80.8 72.8 69.5 73.2 69.3 71.2

¹ 0 for complete and 1 for maximally incomplete data (one feature left for each object).
² Combination of the best performing methods from previous experiments.
Table 2. Wine dataset results for scenarios (1), (2) and (3)

deficiency type/level           0.0  0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1.0
(1) local dim reduction        96.9 95.2 94.8 93.8 91.9 90.3 88.1 84.4 79.2 72.9 71.4
(1) mean imputation            96.9 93.1 89.6 84.5 78.9 75.9 70.5 66.0 57.5 53.6 51.8
(2) charge redistribution      96.4 94.8 92.7 86.1 83.3 82.1 82.8 80.6 82.1 83.2 81.5
(2) mean imputation            96.4 96.3 96.7 96.9 96.1 96.1 96.1 96.1 96.4 94.4 92.7
(2) casewise deletion          96.4 87.9 72.3 62.9 72.0 72.9 73.3 70.8 74.9 73.6 76.4
(3) best methods               96.5 94.7 94.1 91.8 91.5 87.8 83.6 78.6 67.9 60.8 58.0
(3) casewise deletion          96.5 86.5 69.0 66.6 70.1 68.8 68.1 65.7 59.9 57.7 56.8
Table 3. Segment dataset results for scenarios (1), (2) and (3)

deficiency type/level           0.0  0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1.0
(1) local dim reduction        92.0 91.5 90.9 90.3 88.4 87.8 84.8 79.9 72.0 59.6 43.4
(1) mean imputation            92.0 86.4 80.2 72.5 64.2 56.3 48.5 40.8 33.4 25.3 18.5
(2) charge redistribution      92.3 87.5 70.6 62.7 65.7 64.2 63.5 62.4 63.3 63.0 59.6
(2) mean imputation            92.3 91.8 90.9 89.9 88.8 88.3 87.5 87.0 85.3 83.6 75.4
(2) casewise deletion          92.3 81.5 52.3 52.4 57.0 55.5 58.3 54.5 56.3 57.6 55.6
(3) best methods               91.9 90.5 87.8 85.4 82.2 78.2 72.8 65.8 56.9 44.5 34.8
(3) casewise deletion          91.9 78.4 51.3 54.2 53.1 55.2 51.4 47.3 44.7 39.6 31.5
1
1
0.9
0.9
0.8
0.8 classification performance
classification performance
The results for scenario (4) for the Iris and Segment datasets have been depicted in Fig. 1. Notice the superiority of class conditional mean imputation and local dimensionality reduction combination and its smooth performance decay. For the Iris (and Wine – not shown here) dataset, the latter method is almost entirely responsible for the performance drop as discussed above.
Fig. 1. Classification performance for deficient test and training data: (a) Iris dataset, (b) Segment dataset. Classification performance is plotted against deficiency level for charge redistribution, mean imputation (both), mean imputation (train) and casewise deletion, with +/− std confidence intervals.
5 Conclusions

The underlying physical model of EFC appears well suited for the incorporation of various missing data handling routines. The approaches investigated in this paper, although criticized in the statistics literature, perform quite reasonably in conjunction with a non-parametric physical field model. The performance of those methods appears not as problem dependent as one would expect – mean imputation, which intuitively should perform well only for certain types of datasets with well separated classes, is the best method for dealing with deficient training data in most experiments. There also emerges a pattern of which approach is most likely to produce the best results with a particular type of data missing: (1) local dimensionality reduction for incomplete test data, (2) class conditional mean imputation for training data with missing features and (3) charge redistribution for missing labels. A combination of the above methods provides good recognition rates even for the most difficult scenarios, for both low and moderate deficiency levels. A peculiarity of the model is its limited sensitivity to removal of unlabelled training data – casewise deletion often performs only slightly worse than charge redistribution in this scenario.
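For readers who want to reproduce the simple missing-data baselines compared above, the following is a minimal sketch (not the authors' original code) of MCAR feature removal at a given deficiency level, class-conditional mean imputation and casewise deletion; it assumes a NumPy feature matrix X with missing entries marked as NaN, and all function names are illustrative.

```python
import numpy as np

def remove_features_mcar(X, level, seed=0):
    """Randomly delete features (MCAR); at level 1.0 one feature is left per object."""
    rng = np.random.default_rng(seed)
    X = X.astype(float).copy()
    n, d = X.shape
    n_missing = int(round(level * (d - 1)))   # never remove the last remaining feature
    for i in range(n):
        X[i, rng.choice(d, size=n_missing, replace=False)] = np.nan
    return X

def class_conditional_mean_imputation(X, y):
    """Replace each NaN with the mean of that feature within the object's class."""
    X = X.copy()
    for c in np.unique(y):
        rows = (y == c)
        means = np.nanmean(X[rows], axis=0)
        miss = np.where(np.isnan(X) & rows[:, None])
        X[miss] = means[miss[1]]
    return X

def casewise_deletion(X, y):
    """Keep only the objects that still have a complete feature vector."""
    complete = ~np.isnan(X).any(axis=1)
    return X[complete], y[complete]
```

In the experiments above such removal is repeated over 10 random runs per deficiency level and the recognition rates are averaged.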
References

1. Asuncion, A., Newman, D.J.: UCI Machine Learning Repository (2007)
2. Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society 39(1), 1–38 (1977)
3. Gabrys, B.: Neuro-fuzzy approach to processing inputs with missing values in pattern recognition problems. International Journal of Approximate Reasoning 30(3), 149–179 (2002)
4. Hochreiter, S., Mozer, M.C., Obermayer, K.: Coulomb classifiers: Generalizing support vector machines via an analogy to electrostatic systems. In: Advances in Neural Information Processing Systems, vol. 15, pp. 545–552 (2003)
5. Outhwaite, W., Turner, S.P.: Handbook of Social Science Methodology. SAGE Publications Ltd., Thousand Oaks (2007)
6. Principe, J.C., Xu, D., Fisher, J.: Information theoretic learning. In: Unsupervised Adaptive Filtering, pp. 265–319 (2000)
7. Rubin, D.B.: Inference and missing data. Biometrika 63(3), 581–592 (1976)
8. Ruta, D., Gabrys, B.: Physical field models for pattern classification. Soft Computing 8(2), 126–141 (2003)
9. Ruta, D., Gabrys, B.: A Framework for Machine Learning based on Dynamic Physical Fields. Natural Computing Journal (2007)
10. Schafer, J.L., Graham, J.W.: Missing data: Our view of the state of the art. Psychological Methods 7(2), 147–177 (2002)
11. Torkkola, K.: Feature extraction by non-parametric mutual information maximization. The Journal of Machine Learning Research 3, 1415–1438 (2003)
12. Tresp, V., Ahmad, S., Neuneier, R.: Training neural networks with deficient data. In: Advances in Neural Information Processing Systems, vol. 6, pp. 128–135 (1994)
13. Zurek, W.H.: Complexity, Entropy and the Physics of Information. Westview (1989)
Pattern Recognition Driven by Domain Ontologies

Juliusz L. Kulikowski

Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences, 4 Ks. Trojdena Str., 02-109 Warsaw, Poland
[email protected]
Summary. The idea of using ontological models as a source of knowledge necessary to construct composite, multi-step pattern recognition procedures is presented in the paper. Special attention is paid to the structure of pattern recognition processes as sequences of decisions forming paths in bi-partite graphs describing pattern recognition networks. It is shown that such processes correspond to multi-step pattern recognition procedures described in the literature, and that they may also describe more general classes of pattern recognition procedures. The construction of a multi-step pattern recognition procedure aimed at extracting information about moving objects is illustrated by an example.
1 Introduction

General pattern recognition (PR) concepts can be found in [1, 2, 3]. In the simplest case a PR problem can be formulated as follows: a set Ω of real items (objects, persons, states of a process, events, phenomena, etc.) is given, which has been divided into a finite family of disjoint subsets Ω1, Ω2, ..., ΩK, called patterns, K > 1 being a fixed natural number. It is assumed that each item is represented by a fixed number N of measurable parameters given in the form of a real N-component vector. The corresponding linear vector space R^N is called an observation space; the vectors y = [y1, y2, ..., yN], y ∈ R^N, are called observations, and their components object features. An observation y being given, the PR task consists in indicating, with a minimum error risk, a pattern Ωk the corresponding item belongs to. From a formal point of view a classifier is an algorithm assigning pattern indices k, k ∈ {1, 2, ..., K}, to the observations y entered at the input. Any type of classifier needs a concept of a pattern as a similarity class of items to be defined. Moreover, excepting the simplest cases, in advanced PR the relationships among patterns should be taken into consideration. Both the concepts of patterns and the assumed relationships among them constitute a resource of primary knowledge modeling the external world the PR problem is addressed to. For many years the models were based on the assumption:
a/ of a finite set of patterns to be recognized, as well as on the following additional default assumptions: b/ of no relationships existing between the patterns; c/ of statistical independence between the observations acquired in a current PR act and the decisions made in the preceding recognition acts. The aim of this paper is to show that: 1/ the above-mentioned assumptions should be extended in order to make them adequate to more general situations arising in PR problems; 2/ in such situations recognized patterns form more sophisticated structures, called pattern networks, reflecting, in fact, our assumptions and knowledge about the environment; 3/ ontological models representing the knowledge about the real world can be used as a basis for pattern network construction and for managing PR decision processes. The organization of the paper is as follows. In Sec. 2 several examples of PR tasks using non-typical pattern networks are given. A more general approach to the construction of pattern networks by using ontological models is presented in Sec. 3. Ontological models supporting the PR processes in dynamic systems are described in Sec. 4. Short conclusions are given in Sec. 5.
2 Pattern Networks in Selected PR Tasks

Considering the problems of statistical detection of signals in noise, J. Seidler [3] introduced the concept of passive parameters as parameters which are not primarily known and are not of direct interest for the user, but whose knowledge improves the effectiveness of detection. The concept of passive parameters can be illustrated by the following example. Example 1. Let us assume that a series of screening medical examinations aimed at early detection of toxic jaundice is to be performed. Let it be known that the given population consists of sub-populations of a high (H) and low (L) infection risk and that in the given sub-populations the a priori probabilities of toxic jaundice occurrence are given. We denote by X, X ∈ {0, 1}, a random event corresponding to non-occurrence (occurrence) of infection in an examined patient. Consequently, it is assumed that the following a priori probabilities are given: PH,0, PH,1, PL,0 and PL,1. Let Y be a random vector of parameters used for the diagnosis. It is assumed that the conditional probability distributions u(y|0) and u(y|1) are given and, thus, a statistical method of decision making can be applied. Then the following situations will be considered: 1/ The patients are chosen randomly from the total population; the corresponding vectors V1, V2, etc. form a statistically independent sequence of observation data; the doctor does not know which risk group a given patient belongs to. In this case the decision is made according to the Bayesian rule:
X = 1 if u(y|1) · P1 > u(y|0) · P0, X = 0 if u(y|1) · P1 ≤ u(y|0) · P0, where P1 = PH,1 + PL,1 and P0 = PH,0 + PL,0 are the a priori probabilities of the recognized patterns. 2/ The patients are chosen randomly from fixed sub-populations; the vectors V1, V2, etc. form a statistically independent sequence of observations subjected to a distribution adequate to the given sub-population; the doctor knows which risk group a given sequence of patients belongs to. The decision is then made according to the Bayesian rule, which for the sub-population H takes the form: X = 1 if u(y|1,H) · PH,1 > u(y|0,H) · PH,0, X = 0 if u(y|1,H) · PH,1 ≤ u(y|0,H) · PH,0, and a similar form in the case of the L sub-population. 3/ A situation like in case 2/; however, the doctor knows that patients are chosen from a fixed sub-population for which the a priori probabilities are also known, but he does not know which sub-population has been chosen. In this case the first decision will be made as in case 1/. However, each next decision should be based on verified probabilities of patterns. In the last two cases patterns recognized in consecutive examination acts are conditionally dependent because of random choosing and fixing of a passive parameter: the infection risk level. Extended pattern networks should also be used in multi-step PR. Example 2. Biological tissues can be roughly recognized according to the visual properties of their texture. One possible approach to this can be based on examination of morphological spectra of textures. Let us assume that for this purpose 1st-level morphological spectra, consisting of the components S, V, H and X, are used. Here, S denotes a total image intensity level in a selected basic window of the image under consideration; V, H and X denote, respectively, the levels of intensity of vertical, horizontal and skew morphological structures. The model of patterns consists then of the elements: TV – "strongly dominating V-component", TH – "strongly dominating H-component", TX – "strongly dominating X-component", T0 – "no strongly dominating component". The recognition criteria then take the form: TV if V/S >> H/S and V/S >> X/S, TH if H/S >> V/S and H/S >> X/S, TX if X/S >> V/S and X/S >> H/S, T0 otherwise. The PR process can be organized in several steps leading to the decisions forming a tree shown in Fig. 1.
Fig. 1. Decision tree of a multi-level PR process (nodes labelled T0, TV, TH and TX)
It is remarkable that the organization of the process is here directly related to a taxonomic scheme of textures.
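As a concrete illustration of the decision rules of Example 1 (in particular case 3/, where the unknown sub-population plays the role of a passive parameter), a minimal sketch is given below; the Gaussian class-conditional densities and all numerical values are assumptions introduced only for the example and are not taken from the paper.

```python
from scipy.stats import norm

# Illustrative numbers only: joint a priori probabilities P[(sub-population, X)]
P = {("H", 0): 0.35, ("H", 1): 0.15, ("L", 0): 0.45, ("L", 1): 0.05}
u = {0: norm(0.0, 1.0), 1: norm(2.0, 1.0)}   # assumed densities u(y|X), 1-D for simplicity

def cond(s, x):
    """P(X = x | sub-population s)."""
    return P[(s, x)] / (P[(s, 0)] + P[(s, 1)])

def decide_case1(y):
    """Case 1/: sub-population unknown, marginal priors P1 and P0 are used."""
    p1, p0 = P[("H", 1)] + P[("L", 1)], P[("H", 0)] + P[("L", 0)]
    return int(u[1].pdf(y) * p1 > u[0].pdf(y) * p0)

def decide_case3(ys):
    """Case 3/: a fixed but unknown sub-population; its probability (the passive
    parameter) is re-estimated after each observation and used to weight the priors."""
    q = {"H": 0.5, "L": 0.5}
    decisions = []
    for y in ys:
        p1 = sum(q[s] * cond(s, 1) for s in q)
        p0 = sum(q[s] * cond(s, 0) for s in q)
        decisions.append(int(u[1].pdf(y) * p1 > u[0].pdf(y) * p0))
        # Bayesian update of the sub-population belief given the new observation
        like = {s: u[0].pdf(y) * cond(s, 0) + u[1].pdf(y) * cond(s, 1) for s in q}
        z = sum(q[s] * like[s] for s in q)
        q = {s: q[s] * like[s] / z for s in q}
    return decisions
```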
3 Pattern Networks and PR Processes
Formulation of a single PR task can be based on a pattern model (PM) being, in fact, a formal system:

M = [U, σ, ϕ]     (1)

where U is a universe, i.e. a space of multi-aspect, mutually separable objects available for observation, σ is a relation of similarity of the elements of U, and ϕ is a function assigning weights to the similarity classes of objects induced by σ. However, the notion of similarity has been used here in a very wide sense. It may denote a strong-sense (reciprocal, symmetrical and transitive) similarity relation, a system of fuzzy subsets of U described by their membership functions, a system of conditional probability distribution functions defined on U (the condition being interpreted as the name of a similarity class), a similarity measure defined on the Cartesian product U × U, etc. The weight function ϕ indicates a relative a priori expectation level of the members of a given similarity class being observed. Each use of a given PR model for decision making will be called a PR act. A sequence of logically connected PR acts forms a PR process. Some formal properties of PR processes will be considered below. In the simplest case a PR process may consist of a sequence of repeated, identical PR acts based on the same PM; this corresponds to the assumptions a/, b/, c/ formulated in Sec. 1. In a more general situation a PR process can be described by a bi-partite graph G consisting of two types of nodes [8]: a/ nodes representing PR acts based on a fixed PM and b/ nodes representing PM transformations. We denote the first-type nodes by circles and the second-type nodes by rectangles. The arcs of the graph represent logically admissible transitions between the nodes.
Fig. 2. A graph of a simple decision process (a PM node M and a transformation node SId)
In this scheme only transitions between pairs of nodes of different types are admissible. The graph shown in Fig. 2 thus corresponds to the simplest PR process. PR acts may be repeated many times under the same primary assumptions. A different PR process is represented in Fig. 3.
Fig. 3. PR network consisting of switched sub-sequences of PR acts (PM nodes M1 and M2, transformation nodes S12, S21 and SId)
Two alternative pattern models, M1 and M2, are shown there; each PM can be used several times without change before the process is switched to the alternative PM. This corresponds to the situation described by Example 1. We say that a PR process is based on a given PR network if it can be embedded into the network as a path linking a pair of PM nodes, passing alternately through PM and model transformation nodes and admitting passing several times through the same nodes. We call a segment of a PR network consisting of two PM nodes linked by a single PM transformation node a switching operation. The switching operations a PR network consists of can be divided into the groups of a priori determined or undetermined operations. A PR network will be called operationally determined if it consists of determined operations only; otherwise it will be called operationally undetermined. A PR process will be called operationally determined if it is based on a determined PR network; otherwise it will be called operationally undetermined. First concepts of operationally determined PR networks can be found in [4, 5]; an extension of those concepts to operationally undetermined PR networks was described in [6, 7]. Similarly, the occurrence of a switching operation can also be a priori determined or undetermined. A PR process will be called structurally determined if it is based on a PR network in which the occurrence of all switching operations is determined; otherwise it will be called structurally undetermined. A PR process will be called determined if it is
operationally and structurally determined; otherwise it will be generally called undetermined. Undetermined PR processes are of particular interest; they arise as a result of insufficiency of information concerning the environment and the external conditions under which PR is performed. The necessary additional information can be acquired from three sources: a/ from the experts' (users') indications, b/ by analysis of decisions made in preceding PR acts, and c/ by analysis of adequately chosen ontological models providing knowledge about the environment. Transformation of PR models, in particular, means: 1) choosing the next PR model from a given finite subset of models; 2) modification of the preceding PR model by changing its components U, σ and/or ϕ; 3) stopping the process (which formally can be interpreted as choosing a "STOP" PR model as the next one). Undetermined PR processes whose switching operations are based only on the decisions made in preceding PR acts are, in fact, well known as those realized by adaptive classifiers. Our attention will be focused on the case of using ontological models for the realization of switching operations.
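The bi-partite PR network described above can be represented, for instance, by the following minimal data structure (an illustrative sketch, not taken from the paper), in which a PR process is a walk alternating PR acts and switching operations:

```python
transforms = {"S12": ("M1", "M2"), "S21": ("M2", "M1")}   # PM-switching operations
# The identity operation "SId" keeps the current PM (repeated PR acts on the same model).

def run_pr_process(start_pm, recognize, switch_rule, observations):
    """A PR process: PR acts on the current PM alternate with switching operations.
    `switch_rule` may use previous decisions, expert input or an ontological model;
    it returns "SId", "STOP" or the name of a PM-switching transformation node."""
    pm, decisions = start_pm, []
    for y in observations:
        decisions.append(recognize(pm, y))      # PR act with the current pattern model
        op = switch_rule(pm, decisions)         # switching operation
        if op == "STOP":
            break
        if op != "SId":
            source, target = transforms[op]
            assert source == pm, "transition not admissible in this PR network"
            pm = target
    return decisions
```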
4 PR Models Transformation Based on Ontological Models

The concept of ontology used as a form of reality description in decision making systems has been proposed in numerous works (see the list of references in [9]). The concept of ontological models proposed in [10] tries to specify the form of ontology presentation as a set of logically and/or semantically related formal relations among the concepts concerning a domain (a selected part of an abstract or real world). A mandatory component of an ontology is a taxonomy of concepts related to the domain under investigation. In the context of PR tasks some of these concepts can be interpreted as patterns to be recognized. The idea of using ontological models in PR tasks was presented in [10, 11]. Below, this idea is extended to a situation in which other ontological models, describing the relations among the concepts, play a substantial role in PR tasks. Example 4. Let us focus our attention on the following PR task. There is given a time-ordered set of images representing a class of similar objects moving in various directions and changing their form in time. We are interested in observing the behavior of selected objects and assessing their trajectories and form-changing parameters. For this purpose the following ontological models will be used: a) the taxonomy of concepts presented in Fig. 4;

Fig. 4. Taxonomic tree of changing scene concepts (changing scene: objects – objects of interest, background, side objects, with attributes size and details; object's movement – object's displacement (direction, value) and object's form changing (type, value, size, details))

b) objects' identity relation: for any objects ωi, ωj belonging to two different images
Ident(ωi, ωj) = True, if Sim(ωi, ωj) ≥ ε; False, otherwise,     (2)

where ε, 0 < ε ≤ 1, is a similarity threshold, and

Sim(ωi, ωj) = Simd(ωi, ωj) · Simf(ωi, ωj),     (3)

Simd(ωi, ωj) and Simf(ωi, ωj) being objects' similarity measures defined, respectively, on the basis of displacement and form-changing values.

c) objects' trajectory hyperrelation:

H(ω1, ω2, ..., ωi, ..., ωk) = True, if Ident(ωi, ωi+1) = True for all i = 1, 2, ..., k − 1; False, otherwise.     (4)
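A minimal sketch of relations (2)-(4) follows; the displacement and form-change similarity measures Simd and Simf are illustrative Gaussian-kernel choices, since the paper leaves their exact definition open.

```python
import math

def sim_d(a, b, sigma=10.0):
    """Illustrative displacement similarity: Gaussian kernel on centroid distance."""
    dx, dy = a["x"] - b["x"], a["y"] - b["y"]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))

def sim_f(a, b, sigma=0.2):
    """Illustrative form-change similarity: Gaussian kernel on relative area change."""
    rel = (a["area"] - b["area"]) / max(a["area"], 1e-9)
    return math.exp(-rel ** 2 / (2 * sigma ** 2))

def ident(a, b, eps=0.5):
    """Relation (2): objects on two images are identical if Sim (3) reaches eps."""
    return sim_d(a, b) * sim_f(a, b) >= eps

def trajectory(objects, eps=0.5):
    """Hyperrelation (4): True iff every consecutive pair satisfies Ident."""
    return all(ident(objects[i], objects[i + 1], eps) for i in range(len(objects) - 1))
```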
The PR process can then be realized according to the PR network shown in Fig. 5.

Fig. 5. PR network of moving objects' trajectories analysis (nodes Mod, Sodc, Moid, Soidc, Mtid, Stidc, Moma)

The network consists of the nodes: Mod – model of objects detection, Sodc – objects detection control, Moid – model of objects identification, Soidc – objects identification control, Mtid – model of trajectories identification, Stidc – trajectories identification control, Moma – model of trajectories analysis. The operations based on the models Mod, Moid, Mtid and Moma are
based on algorithms in which elements of the taxonomy of objects are used. However, the algorithms are not predetermined, because it may happen that some decisions concerning the identity of objects or of their trajectories are logically inconsistent. In such a case the inconsistency should be removed by coming back to the operational module, and the corresponding decisions should be repeated using stronger decision criteria. On the other hand, one of the roles of the modules Sodc, Soidc and Stidc consists in deciding, on the basis of the preceding decisions, whether the former PR act should be repeated or a transition to the next operational module is necessary. These decisions are based on the objects' identity relation and on the hyperrelation of belonging to the same trajectory. This means that the path consisting of PR acts is not predetermined and the PR process also belongs to the class of structurally undetermined processes.
5 Conclusions
In this paper the idea of using ontological models as a source of knowledge necessary to construct composite PR procedures has been presented. It has been shown that the structure of such procedures can be chosen on the basis of information acquired from the taxonomy of objects to be recognized, and that the structure of the PR process is strongly connected with the relations among the investigated objects. Further development of the proposed approach to the problems of data mining from composite images seems desirable.
References

1. Duda, R., Hart, P., Stork, D.: Pattern Classification. J. Wiley, New York (2000)
2. Fu, K.S.: Recent Developments in Pattern Recognition. IEEE Trans. Computers 29, 845–854 (1980)
3. Seidler, J.: Statistical Theory of Signal Detection. PWN, Warsaw (1963) (in Polish)
4. Bubnicki, Z.: Two-Level Pattern Recognition via Identification. In: Proc. 7th Int. Congress of Cyb. & Syst., pp. 779–786 (1987)
5. Kulikowski, J.L.: About Statistical Decision Making in Hierarchical Systems. Arch. Elektr. 14(1), 97–112 (1966) (in Polish)
6. Kurzynski, M.: On the multistage Bayes classifier. Pattern Recognition 21, 355–365 (1998)
7. Kurzynski, M.: Multistage Diagnosis of Myocardial Infarction Using a Fuzzy Relation. In: Rutkowski, L., Siekmann, J.H., Tadeusiewicz, R., Zadeh, L.A. (eds.) ICAISC 2004. LNCS (LNAI), vol. 3070, pp. 1014–1019. Springer, Heidelberg (2004)
8. Deo, N.: Graph Theory with Applications to Engineering and Computer Science. Prentice-Hall, Englewood Cliffs (1974)
9. Fernandez-Lopez, M., Gomez-Perez, A.: Overview and Analysis of Methodologies for Building Ontologies. The Knowledge Eng. Rev. 17(2), 129–156 (2002)
10. Kulikowski, J.L.: The Role of Ontological Models in Pattern Recognition. In: Kurzynski, M., et al. (eds.) Computer Recognition Systems. Advances in Soft Computing, pp. 43–52. Springer, Berlin (2005)
11. Kulikowski, J.L.: Interpretation of Medical Images Based on Ontological Models. In: Rutkowski, L., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2006. LNCS (LNAI), vol. 4029, pp. 919–924. Springer, Heidelberg (2006)
Part III
Speech and Word Recognition
Cost-Efficient Cross-Lingual Adaptation of a Speech Recognition System

Zoraida Callejas¹, Jan Nouza², Petr Cerva², and Ramón López-Cózar¹

¹ Dept. of Computer Languages and Systems, University of Granada, 18071 Granada, Spain {zoraida, rlopezc}@ugr.es
² Inst. of Information Technology and Electronics, Technical University of Liberec, Liberec, Czech Republic {jan.nouza, petr.cerva}@tul.cz
Summary. In this paper, we describe a methodology that proved to be successful and cost-efficient for porting an existing speech recognition system to other languages. Our initial aim was to make a system called MyVoice developed for handicapped persons in Czechia, available also to users that speak other languages. The experimental results show that the proposed method can not only be used with languages with similar origins (in this case Czech and Slovak), but also with languages that belong to very different branches of the Indo-European language family (such as Czech and Spanish), obtaining in both cases accuracy rates above 70% with a 149k words lexicon.
1 Introduction

Cross-lingual adaptation makes it possible to employ corpora and resources already available in one language for the recognition of a different one. This allows fast and low-cost implementation of speech recognizers, which is especially useful for minority languages or dialects for which the number of shared resources available is very limited or even non-existent. The hypothesis that we wanted to demonstrate was that a fully functional system based on the Czech language can be easily and rapidly adapted for interaction in another language without the need of building a new speech recognizer or getting involved in an arduous linguistic study, such as, for example, morphological diacritization in [1]. Thus, we propose an approach to reach this objective and present experimental results that measure its appropriateness with both a language that is similar to Czech (Slovak) and a language from a very different origin (Spanish). The rest of the paper is structured as follows. Section 2 describes our proposal and compares it with the state-of-the-art methods. Section 3 describes the previously created Czech speech recognizer and the MyVoice system. In Section 4, the cross-lingual adaptation is explained. Section 5 describes the experiments carried out to test the performance of the proposed technique,
whereas Section 6 discusses the results obtained. Finally, in Section 7 we present the conclusions and possibilities for future work.
2 Related Work

The development of a speech recognizer is a very arduous and time-demanding task. A large amount of data spoken by hundreds of subjects must be recorded and carefully annotated to get a representative set suitable for training an acoustic model. Thus, collecting and annotating the necessary data generally requires many years of human effort. This is the reason why any possibility to share acoustic data between languages is very welcome in the scientific community. Frequently, these databases are gathered ad hoc by each developing team and are not freely available to the scientific community. This implies that in most cases, building a new speech recognizer requires obtaining all the necessary acoustic and linguistic resources right from the start. Thus, cross-lingual methods are an alternative to the complete creation of a new recognizer. One possible method to do this is to create a mapping between phonemes. The basic idea of phoneme mapping is to establish a correspondence between phonetic units in the origin and target languages. Thus, the result depends on the phonetic similarity between both languages. This mapping can be done either automatically or by experts. The automatic procedure employs data-driven measures, which are frequently extracted from phoneme confusion matrices. The expert-driven approach is based on human knowledge about the languages being processed. To carry out the mapping, we chose the expert-driven approach for two main reasons. Firstly, although automatic mapping has the advantage of not needing human intervention and thus obtains more objective results, it requires considerable speech material for computing the similarities between languages. Even though this is not as much material as needed for building a full new recognizer for the target language, it makes the adaptation process more costly. Secondly, results depend on a close match between the acoustics of both languages, on the distance measure employed, and on the recording conditions.
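As a brief illustration of the data-driven alternative mentioned above (not the expert-driven approach chosen in this paper), a phoneme mapping can be read off a confusion matrix by assigning each target-language phoneme to the source-language phoneme it is most often confused with; the phoneme labels and counts below are invented for the example.

```python
import numpy as np

# Hypothetical confusion counts: rows = target-language phonemes (e.g. Spanish),
# columns = source-language phonemes (e.g. Czech) output by the source recognizer.
target_phonemes = ["a", "e", "rr"]
source_phonemes = ["a", "e", "r", "rz"]
confusion = np.array([[90,  5,  3,  2],
                      [ 4, 88,  5,  3],
                      [ 1,  2, 60, 37]])

def data_driven_mapping(confusion, targets, sources):
    """Map each target phoneme to the most frequently confused source phoneme."""
    return {t: sources[int(np.argmax(row))] for t, row in zip(targets, confusion)}

print(data_driven_mapping(confusion, target_phonemes, source_phonemes))
# -> {'a': 'a', 'e': 'e', 'rr': 'r'}
```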
3 The Czech Speech Recognizer and the MyVoice System The Czech speech recognizer used in the experiments has been developed during more than a decade in Technical University of Liberec [2]. Its acoustic models are based on three-state left-to-right HMMs of context-independent speech units which are supplemented by several models of noise, with output distributions using at least 64 Gaussians. According to the application conditions, these models can be either speaker-independent (SI), gender-dependent (GD), or speaker-adapted (SA).
The recognizer’s decoding module uses a lexicon of alphabetically ordered words, each represented by its text and phonetic form. This recognizer has been successfully employed for the development of the MyVoice system [3]. MyVoice allows persons with non-functional hands to work with a PC in a hands-free manner by using several hundreds of voice commands. For the experimentation presented in the paper, we have used the MyVoice system. MyVoice is structured in several command groups, each dealing with a specific task. For example, the group that controls the mouse is different from the one that deals with the keyboard, but they can be accessed easily from each other by voice commands. The size of these groups varies between 5 and 137 commands. The MyVoice software is currently employed by 60 handicapped users in the Czech Republic, whose reports show that the word error rate (WER) typically lies between 1% and 2%, if the user does not have any speech disorder.
4 Cross-Lingual Adaptation

For the cross-language adaptation we used Slovak and Spanish texts along with an automatically generated Czech phonetic representation. The models built for the Czech recognizer can be applied to recognize words in another language. To do this we use the Czech phonetic form and construct the acoustic models of the words by concatenating the corresponding phoneme models. The translation from Slovak or Spanish text to the Czech phonetic representation was done automatically, employing different mapping policies. Slovak and Czech belong to the same branch of Slavic languages and share a large portion (about 40%) of their lexical inventory, and many of the remaining words differ only slightly, either in spelling or pronunciation. In general, the Slovak language sounds softer than Czech, which is caused by different phonotactics and a slightly different set of phonemes, which were mapped to the closest Czech phonemes. The correspondence between Spanish and Czech phonemes was carried out by one Spanish native speaker and supervised by several Czech native speakers. A complete mapping was carried out to cover all Spanish phonemes. For the phonemes that did not have any suitable correspondence with the Czech ones we studied two solutions. The first one was to use the nearest Czech phonemes. The second option was to adapt these previously existing Czech phonemes to the Spanish pronunciation, by creating two new symbols. Experimental results were very similar for both approaches, with a difference in accuracy of only 0.5%. The results reported in Section 6 were attained employing the second approach. The result of this process was a vocabulary with all the words that the speech recognizer accepted. For each word it contained a Slovak or Spanish text form and its Czech phonetic representation. This vocabulary was used by the Czech speech recognizer described in Section 3 as if it were a Czech vocabulary, thus not even a single line of code had to be changed in it. As a
result, a user can utter a word in Spanish or Slovak, and the speech recognizer uses the Czech models to obtain the best Spanish or Slovak form candidate. To optimize the performance of the recognizer, we propose to carry out speaker adaptation as a final step of the cross-language adaptation procedure. This way, the Czech models were tuned to better adapt to the pronunciation that each speaker had of every phoneme in the target languages. This step is not against our initial aim of cost-effective implementation, as it can be performed in a fast and straightforward manner by making the user read a short text when he first uses the recognizer (e.g. the first time he runs MyVoice). The approach proposed for speaker adaptation is a combination of the Maximum A Posteriori (MAP) [4] and the Maximum Likelihood Linear Regression (MLLR) [5] methods for speaker adaptation, and is performed in two steps. To carry out the adaptation we used a 614 words vocabulary comprised of a list of the most frequent words in each language (covering all phonemes), along with MyVoice commands.
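The expert-driven mapping can be sketched as a simple table lookup that turns a Spanish word into a Czech-phoneme string for the recognizer's lexicon; the mapping entries below are only illustrative examples and do not reproduce the actual table designed by the native speakers.

```python
# Illustrative (incomplete) Spanish-grapheme -> Czech-phoneme table; the real table
# was designed by a Spanish native speaker and supervised by Czech native speakers.
ES_TO_CZ = {
    "a": "a", "e": "e", "i": "i", "o": "o", "u": "u",
    "ñ": "ň", "ch": "č", "ll": "j", "rr": "r", "j": "ch",
    "b": "b", "c": "k", "d": "d", "l": "l", "m": "m", "n": "n",
    "p": "p", "r": "r", "s": "s", "t": "t", "v": "b", "z": "s",
}

def to_czech_phonetic(word):
    """Greedy longest-match transcription of a Spanish word into Czech phonemes."""
    word, out, i = word.lower(), [], 0
    while i < len(word):
        for length in (2, 1):                      # try digraphs first (ch, ll, rr)
            chunk = word[i:i + length]
            if chunk in ES_TO_CZ:
                out.append(ES_TO_CZ[chunk])
                i += length
                break
        else:
            i += 1                                 # skip symbols with no mapping
    return " ".join(out)

# Each lexicon entry keeps the original text form and its Czech phonetic form.
lexicon = {w: to_czech_phonetic(w) for w in ["calle", "perro", "mucho"]}
```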
5 Experimental Set-Up

We carried out several experiments with the main objective of testing the viability and performance of the proposed cross-lingual adaptation approach. Additionally, we measured the impact of several factors on the performance of the employed method, such as the usage of different user adaptation strategies, the size of the recognition dictionary, and the number of words considered for testing. Firstly, we translated the MyVoice commands into Slovak and Spanish, to measure the performance of the cross-lingual adaptation for a command-and-control application. The speakers used MyVoice to control a PC by spoken commands while they carried out their daily activities. Thus, they were not provided with a specific list of commands to utter. This way, we attained results from flexible and natural interaction with the system. As described in Section 3, the valid vocabulary of MyVoice is restricted at each step to the list of commands in the current group (vocabulary ranging between 5 and 137 commands). To obtain meaningful results from the different speaker models regardless of the groups visited during the interaction, we carried out additional experiments employing the whole MyVoice vocabulary (432 commands). In these experiments the task perplexity was always 432, given that after each command, any word could be uttered. We also wanted to corroborate that the results obtained from the interaction with MyVoice could be attainable in situations where the accepted vocabulary was larger. Thus, we extended the MyVoice commands with a list of the 149k most frequent Spanish and Slovak words, respectively. The two dictionaries were collected from Spanish and Slovak newspapers and contained all the word forms, not only the lexemes.
At the same time, we also augmented the vocabulary employed to test the system. To do this, we randomly selected news from Spanish and Slovak newspapers different from the ones used to collect the recognition dictionaries. Eight Spanish native speakers (four male and four female), and two Slovak native speakers (one male and one female), recorded the isolated words. Concretely, 1,582 words were recorded by each Spanish speaker, and 989 by each Slovak speaker. For discussing the experimental results we used the average performance values over the number of words for each language, and for all speakers. Furthermore, in order to study to what extent speaker adaptation allowed us to attain better recognition results, we carried out experiments with speaker-independent, gender-dependent, and speaker-adapted models.
6 Experimental Results

6.1 Interaction with MyVoice
In the first experiments the users employed MyVoice to control their PCs in order to carry out their daily activities. The experiments were performed both online (with the command grouping) and offline (with all the MyVoice commands). The experimental results are shown in Table 1. It can be observed that WER was lower for the online experiments because the vocabulary size was smaller. When using speaker-adapted models, the relative improvements achieved were 24.1% for Slovak and 28.3% for Spanish for the online experiments, whereas they were 46.65% and 56% respectively for the offline experiments. This shows that speaker adaptation caused a remarkable improvement in the offline recognition results, which were comparable to the ones obtained in the online experiments after speaker adaptation (around 2% WER for Slovak and 4% for Spanish). The experiments with Slovak showed a WER almost as small as for native Czech speakers (only 2.5%). The results with Spanish were much better than we initially expected. In fact, they differed by less than 2% from the ones that could be achieved recognizing the Czech language.

Table 1. WER [in %] for the command-and-control task

Language  Experiment  Gender-dependent  Speaker-adapted
Slovak    Online      2.9               2.2
Slovak    Offline     4.6               2.5
Spanish   Online      6.0               4.3
Spanish   Offline     10.0              4.4

6.2 Impact of Speaker Adaptation

To test the performance of the adapted recognizers we used the Spanish corpus (12,686 words extracted from newspapers) and the Slovak corpus (1,978 words), which were described in Section 5. Additionally, we used a 10k words vocabulary for recognition instead of the small vocabulary comprised of 432 commands.

Table 2. WER with different adaptation techniques

Language  OOV  Speaker-independent  Gender-dependent  Speaker-adapted
Slovak    Yes  47                   43.6              38.8
Slovak    No   29                   24.4              17.9
Spanish   Yes  55.8                 54.6              33.0
Spanish   No   48.9                 47.4              22.5

As can be observed in Table 2, speaker-independent models yielded a WER of 47% for Slovak and 55.8% for Spanish. These results were improved by using gender-adapted models by only 7.23% relative for Slovak and 2.15% for Spanish. However, speaker adaptation yielded a 17.6% relative improvement with respect to the speaker-independent models for Slovak, and a remarkable 40.8% relative improvement for Spanish. Most of the recognition errors for Slovak were due to OOV words. Thus, as the objective of this experiment was to measure the impact of speaker adaptation in the proposed cross-language approach, regardless of the dictionary and utterances used for recognition, we computed the recognition results without considering the OOV words. A 54% relative improvement with respect to speaker-independent models was achieved for Spanish, and 38.2% in the case of Slovak, obtaining for both languages WERs around 20%. As Slovak is a language very similar to Czech, the proposed method initially attains accuracies around 70% (29% WER) for this language. Hence, speaker adaptation for Slovak only improves accuracy by an absolute 11.1% (38.2% relative) with respect to using speaker-independent models. However, for a language with a very different origin such as Spanish, speaker adaptation enhances the adapted recognizer substantially. The experiments showed that a 26.4% absolute improvement (54% relative) can be achieved, with accuracy rates that are only 4.6% worse than the ones obtained for the Slovak language. The proposed cross-lingual approach in combination with speaker adaptation yielded accuracy rates around 80% (17.9% and 22.5% WER) for Slovak and Spanish.
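The relative improvements quoted above are the reduction of WER divided by the baseline WER; a small helper (illustrative, not part of the original evaluation scripts) makes the figures in Table 2 easy to verify:

```python
def relative_improvement(baseline_wer, adapted_wer):
    """Relative WER reduction in percent with respect to the baseline."""
    return 100.0 * (baseline_wer - adapted_wer) / baseline_wer

# Spanish, OOV excluded (Table 2): speaker-independent 48.9% -> speaker-adapted 22.5%
print(round(relative_improvement(48.9, 22.5), 1))   # ~54.0
# Slovak, OOV excluded: 29.0% -> 17.9%
print(round(relative_improvement(29.0, 17.9), 1))   # ~38.3 (reported as 38.2)
```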
6.3 Effect of the Size of the Recognition Dictionary
Finally, we were interested in studying to what extent the experimental results could be affected by increasing the size of the recognition vocabulary up to 149k words. The number of OOV words decreases drastically when the recognition vocabulary is very large. Hence, WER tends to decrease when such a dictionary is employed. As can be observed in Table 3, WER decreases by 14.7% relative for Slovak when we employ a vocabulary comprised of 149k words, compared to 10k, and by 15.4% for Spanish.

Table 3. Effect of dictionary size on WER [in %] taking into account OOV words and speaker adaptation

          10k   46k   85k   149k
Slovak    38.8  27.3  26.0  24.1
Spanish   33.0  28.4  28.0  27.9
7 Conclusions and Future Work

As shown in Table 4, if OOV words are not taken into account, the relative reduction of WER from speaker-independent to speaker-adapted models decreases when the accepted vocabulary is larger. This is due to an increase in the probability of finding acoustically similar words in the dictionary. However, for increasingly larger recognition dictionaries we found that the WER tends to settle at around 23% for Slovak and 27% for Spanish. This shows that the proposed cross-lingual approach, employing the proposed speaker adaptation procedure, yields accuracy rates around 70% when we adapt Czech to Spanish, i.e. two languages with very different origins, whereas the accuracy rates are around 80% when we adapt Czech to Slovak, i.e. two very phonetically similar languages.

Table 4. Relative WER reduction [in %] yielded by speaker-adapted recognition in comparison with speaker-independent models

          10k   46k   85k   149k
Slovak    38.2  35    34.2  33.4
Spanish   54    50.4  49.6  49.4

We have presented in this paper a cross-lingual adaptation of a previously created Czech speech recognizer to Spanish and Slovak. Phonetic cross-linguality is a research area that is gaining increasing interest, especially because it enables resource sharing between languages and thus represents a feasible way of developing systems for minority languages or dialects. However, the state-of-the-art systems are based on complicated and very effort- and time-demanding linguistic and phonetic studies. We have demonstrated that the adaptation of a speech recognizer to another language can be carried out in a straightforward way, employing a mapping between phonemes, and enhancing it with language and speaker adaptation procedures. Moreover, we have shown that the proposed adaptation method can be used not only with phonetically similar languages, such as Czech and Slovak, but also with languages from very different families, like Czech and Spanish.
We have carried out experiments with MyVoice, a speech-based application designed for Czech handicapped people. Cross-lingual porting of voice-operated systems for such a small group of target users requires an investment that can hardly be paid back. However, our experimental results show that for a task involving a vocabulary of 432 commands, a 95.6% performance (4.4% WER) can be attained for Spanish and 97.5% (2.5% WER) for Slovak. Besides, for vocabularies of up to 149k words, the proposed scheme yields around 72.9% accuracy (27.1% WER) for Spanish and 77.4% (22.6% WER) for Slovak. These promising findings encourage us to consider the future application of the proposed cross-language phonetic adaptation to minority languages with very small speech and linguistic resources.
References

1. Kirchhoff, K., Vergyri, D.: Cross-dialectal data sharing for acoustic modeling in Arabic speech recognition. Speech Communication 46, 37–51 (2005)
2. Nouza, J., Nouza, T., Cerva, P.: A Multi-Functional Voice-Control Aid for Disabled Persons. In: Proceedings of 10th International Conference on Speech and Computer (SPECOM 2005), Patras, Greece, pp. 715–718 (2005)
3. Cerva, P., Nouza, J.: Design and Development of Voice Controlled Aids for Motor-Handicapped Persons. In: Proceedings of 11th International Conference on Spoken Language Processing (Interspeech 2007 - Eurospeech), Antwerp, Belgium, pp. 2521–2524 (2007)
4. Gauvain, J.L., Lee, C.H.: Maximum A Posteriori Estimation for Multivariate Gaussian Mixture Observations of Markov Chains. IEEE Transactions on Speech and Audio Processing 2, 291–298 (1994)
5. Gales, M.J.F., Woodland, P.C.: Mean and Variance Adaptation Within the MLLR Framework. Computer Speech and Language 10, 249–264 (1996)
Pseudo Multi Parallel Branch HMM for Speaker Verification

Donato Impedovo and Mario Refice

Politecnico di Bari, DEE Dipartimento di Elettrotecnica ed Elettronica, via Orabona 4, 70125 Bari, Italy
[email protected]
Summary. This paper describes an approach, called the Pseudo Multi Parallel Branch (P-MPB) Model, for coping with the performance degradation typically observed over (short and medium) time and trials in the Text Dependent Speaker Verification task. It is based on the use of a fused HMM model having a multi parallel branch topology, where each branch consists of an HMM referring to a specific frame-length representation of the speaker. The approach shows reasonable ER reductions with respect to the baseline system.
1 Introduction

In today's society biometrics is at the centre of a wide debate, as it is becoming a key aspect in a large number of applications [1, 2, 3]. Physiological or behavioural traits can be considered, depending on the specific requirements of the applications [1]; currently the most reliable systems are based on the combination of both aspects [4] or, in general, on multi-biometric systems [5]; moreover, even standard approaches (smart card, etc.) can be combined with biometric measurements in order to increase security [6]. In this work a biometric system based on speech is considered to perform the verification of a person [6, 7]. The state of the art in this field is based on statistical classifiers such as the Hidden Markov Model (HMM) [8] or the Gaussian Mixture Model (GMM) [7]. These systems work in two phases: enrolment and recognition. During the training (enrolment) phase the system learns the models that statistically characterize the user, while during the verification phase the input is accepted if it positively matches the expected claimant's model. A big issue with models is their capability of generalization to unseen data [9]. From a Pattern Recognition perspective, Speaker Recognition is a difficult task since speech varies along time. In fact, in a real speaker verification application, many human factors can contribute to increase classification errors: since trials occur over time, emotional state, sickness and aging are the most intuitive affecting factors. Moreover, it has to be considered that the vocal tract characteristics tend to change naturally from one trial to the next one.
In this work the approach called Pseudo Multi Parallel Branch (P-MPB) is proposed; it is based on the exploitation of different speech representations. The P-MPB model is obtained by fusing different HMMs into a new one having a multi-branch parallel topology: each branch is a particular representation of the speaker adopting a specific frame length in the feature extraction phase. The approach is based on the observation that a single model may not be sufficient to fully represent a speaker [10] and that a specific frame length, and a specific combination (training/verification) of frame lengths, could be better than others to resolve the speaker's characteristics and to match them over time in a Text Dependent task [11, 12]. The proposed approach is sub-optimal [10] since the different characteristics of the speaker reflected in the different representations are considered in the modeling phase; they are complementary and do not cooperate: the most probable one is evaluated, at decision time, given the input to be verified.
2 System Description

In the Speaker Verification (SV) task [6], the user claims an identity, and the system then verifies whether the person is who he/she claims to be. In this application the verification of the identity is performed by pronouncing a password (e.g. account number, user ID, "name surname", etc.); this means at least a double security level: the secrecy of the chosen password and the vocal characteristics of the speaker related to the specific utterance. This kind of approach is called Text Dependent. On the other hand, in a Text Independent approach the system can process any kind of speech emission: this matches better with the Speaker Identification (SI) task, but it can also be useful in SV when forensic applications are considered and in order to reveal tape (replay) attacks.

2.1 Features Extraction
The speech signal is generally framed into constant frame sizes: in the current state of the art, a length between 20 and 30 ms and a shift, in the framing process, of about 10 ms are adopted [6, 7, 8, 10]. This allows the signal to be considered stationary inside the short observation window (frame) in order to apply the Discrete Fourier Transform (DFT). In this work features obtained from the power spectrum of the signal are considered, namely Mel Frequency Cepstral Coefficients (MFCCs) [6], which are very common in speech and speaker recognition tasks when clean data are considered. Many factors can be studied in the comprehension of the mismatch between features extracted in the training phase and those observed during verification. In this work natural variations of the vocal tract from one trial to the following one, over the same utterance (text dependent approach), are considered. Indeed, there is evidence that even when the same speaker repeats the same utterance within a couple of seconds, variations are observed in many parameters. One of these parameters is the pitch (the fundamental frequency for a voiced speech emission); in fact a typical phenomenon
is the so-called "pitch mismatch" [12, 13]. Considerable variations are also observed in the first two formants, which are related to the tongue-mouth geometry of the speaker when pronouncing a specific vowel. Variations of these parameters are inevitably conveyed into the features, thus generating a sensible degradation in performance. A first reasonable motivation supporting the approach proposed in this paper, and more deeply described in the following, is that on one hand various phonemes have different optimal frame sizes, and on the other hand phonemes vary in duration across different realizations [14]. A specific frame length could be better than another, in a particular trial, to better resolve the speaker's characteristics [11, 12]. Now let us consider the same portion of a voiced speech emission, and the "spectrogram" resulting from the difference of two (different) frame-length representations of it (figure 1). As can be observed, the largest (darkest in the figure) and densest differences are located in the bandwidths where the pitch (broken line) and the first two formants (dotted lines) F1 and F2 are placed. From this perspective, the natural variations over time of the parameters already discussed could be statistically covered by the difference in representation due to the comparison of different frame-length representations of speech between the training and the verification phase [11, 12].
Fig. 1. The “difference spectrogram” related to a female speech voiced emission. The broken line marked with ”F0” is the pitch, the dotted lines marked by “F1” and “F2” are respectively the first and second formant.
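The multi-resolution framing underlying the approach can be sketched as follows; this is a simplified illustration in plain NumPy (the windowing, FFT and mel filtering needed for the actual MFCCs are omitted), using as an example the frame lengths later adopted in the experiments.

```python
import numpy as np

def frame_signal(signal, fs, frame_ms, shift_ms=10.0):
    """Split a 1-D signal into overlapping frames of frame_ms with a 10 ms shift."""
    frame_len = int(round(fs * frame_ms / 1000.0))
    shift = int(round(fs * shift_ms / 1000.0))
    n_frames = 1 + max(0, (len(signal) - frame_len) // shift)
    return np.stack([signal[i * shift: i * shift + frame_len] for i in range(n_frames)])

fs = 22050                                  # sampling frequency in Hz
signal = np.random.randn(fs)                # one second of dummy audio
# One representation per frame length; each one feeds a separate branch HMM.
frames = {ms: frame_signal(signal, fs, ms) for ms in (22, 25, 28, 31)}
```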
Since a single HMM based on a single observation window resolution (frame length) is not sufficient to fully represent the variability in the speech production of the utterance, different speech representations are used in speaker modelling [10]. The P-MPB approach is based on the use of multiple frame speaker generative models fused into a new HMM. Each of the former models is a branch in the topology of the P-MPB, so there is a competition among the most probable branches in the verification phase performed by the Viterbi algorithm.

2.2 Pseudo Parallel Branch HMM
The speaker modelling process is performed in two sub-phases: 1. creation of the different frame-length speaker models; 2. fusion of the speaker models into a new model.

First Sub-Phase

Continuous Density HMMs (CD-HMMs) are considered here due to their capability to keep information and to model the sound, the articulation and the temporal sequencing of the speech. These last aspects are reflected in the state transition probabilities and play an important role in the case of Text Dependent data. The continuous observation probability density is characterized by a mixture of Gaussian probabilities. The model parameters are estimated by the Expectation-Maximization algorithm [8, 15]. During this first sub-phase, n HMM Frame Models per speaker are trained [11], n being the number of frame lengths to be considered.

Second Sub-Phase

In the second sub-phase, the n Speaker's Models are fused into a new pseudo HMM with n parallel branches [4, 16, 17]. The final model is briefly sketched in figure 2. In the logic model two non-emitting states are introduced: ST and FN. FN is the final state of the new model; when the system is in it, the evaluation of the most probable branch is already completed: all the ahb,FN probabilities are set equal to 1. ST is the first (starting) state of the new model and aST,hb is the transition probability from it to the sub-model in the h-th branch. The following constraint must be satisfied:

Σ_{h=1..n} aST,hb = 1     (1)
In order to evaluate the unknown probabilities, the (re-)training of the P-MPB could be performed on a specific data set. In this case the estimated probabilities would reflect the distributions related to the known data and the system would lose the fundamental statistical capability to represent variations. Consequently, since each branch is here considered as a "perturbed" model, and since the most probable representation is unknown, an equal probability is assigned to each of them:

aST,hb = 1/n,     (2)

with h = 1, ..., n.

Fig. 2. The Pseudo Multi Parallel Branch HMM: each branch includes a specific frame speaker HMM

The building process of the P-MPB is simplified since all branches adopt a left-to-right no-skip topology: the final model results in a number of emitting states equal to the sum of the emitting states of the native models plus the two non-emitting states ST and FN. This implementation solution preserves the sparseness of the transition matrix. Each speaker is now represented by a λk Multi Parallel Branch Model. In a true HMM the probabilities aST,hb would be evaluated jointly with the other parameters in the matrices A, B and π [8]; consequently the obtained HMM is a pseudo one [4, 16, 17]. Compared to approaches based on lumping different representations into a single vector, P-MPB has the advantage of not increasing the dimension of the feature space.
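A minimal sketch of the fusion step is given below: the transition matrices of the branch HMMs are placed on the diagonal of a larger matrix, the non-emitting start state ST distributes probability 1/n to the first state of every branch as in eq. (2), and the last state of each branch exits into the non-emitting final state FN. This is a simplified illustration only; the handling of the output densities and of the exit probabilities may differ from the authors' implementation.

```python
import numpy as np

def fuse_branches(branch_transitions):
    """Build the P-MPB transition structure from n branch transition matrices.
    State 0 is the non-emitting start state ST; the last state is FN."""
    n = len(branch_transitions)
    total = 2 + sum(A.shape[0] for A in branch_transitions)
    T = np.zeros((total, total))
    offset = 1
    for A in branch_transitions:
        s = A.shape[0]
        T[0, offset] = 1.0 / n                                 # aST,hb: equal branch priors
        T[offset:offset + s, offset:offset + s] = A            # branch keeps its own topology
        T[offset + s - 1, total - 1] = 1.0 - A[s - 1, s - 1]   # exit into the final state FN
        offset += s
    T[total - 1, total - 1] = 1.0                              # FN is absorbing
    return T

# Two toy 3-state left-to-right no-skip branches (self-loop 0.8 in the last state)
A = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 0.8]])
T = fuse_branches([A, A.copy()])   # 2 + 3 + 3 = 8 states in the fused structure
```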
2.3 Verification Process
Given O, the speech signal related to the k-th claimed identity to be verified, and flh, the frame length to be adopted in the feature extraction phase of the verification process, the quantity Sk is computed as follows:

Sk = log[Pr(Oflh | λk)] − log[Pr(Oflh | λkI)],     (3)

where
• λkI is the model representing impostors for the k-th genuine speaker;
• Pr(Oflh | λk) and Pr(Oflh | λkI) are evaluated by the Viterbi algorithm [8].
In the evaluation of Pr(Oflh | λk), the Viterbi algorithm explores and exploits the most probable branch of the λk P-MPB generating the given observation. In order to effectively compare likelihoods from different frame-length feature sets, the normalization of probabilities is performed in a unified space. Equation (3) represents a normalization technique: it compares the probability of the claimed speech input being generated by the genuine claimant model versus the claimant impostor model. The impostor model is built by concatenating speaker-independent phonemes in order to obtain the claimant password; moreover, a cohort model can also be trained on specific impostor recordings [18]. The λkI model improves the separation between the claimant and impostors and allows an easy setting of the decision (acceptance/rejection) threshold.
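The decision stage based on (3) reduces to a log-likelihood ratio test against a threshold; a minimal sketch (with a generic scoring function standing in for the Viterbi evaluation, since the recognizer internals are not reproduced here) is:

```python
def verify(observation, claimant_model, impostor_model, viterbi_loglik, threshold=0.0):
    """Accept the claimed identity if the normalized score S_k of eq. (3) exceeds the
    decision threshold; viterbi_loglik(O, model) stands for log Pr(O | model)."""
    s_k = viterbi_loglik(observation, claimant_model) - viterbi_loglik(observation, impostor_model)
    return s_k > threshold, s_k
```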
3 Simulations

The approach was tested using a database of 16 female and 18 male speakers aged between 20 and 50 years. Each subject was requested to utter her/his "name surname" in multiple recording sessions covering a time span of about 3 months, for a total amount of 3860 trials. The number of recording sessions (and consequently the total number of trials) is not uniform among speakers; this could be considered an inconvenience, but it introduces realism into the database. However, there is a minimum number of 100 trials per speaker. Utterances were recorded by a typical PC sound card in a home-office environment with a 22 kHz sampling rate, 16-bit quantization and a single channel. The frame lengths considered in this work are 22, 25, 28 and 31 ms: they equally divide the range of the most used values in the literature (20–30 ms); the shift adopted in the framing process is 10 ms. 19 MFCCs, their time derivatives and the corresponding energy parameters were considered as features, thus resulting in a vector of 40 elements per frame. Each model was trained on the first 10 trials (belonging to the first session). The small amount of training data simulates real situations where an exhaustive training cannot be carried out. Each model, before the P-MPB fusion, has 8 emitting states and 3 Gaussian components in the mixture per state. The final P-MPB model results in 34 total states. The percentage performance of the system is reported in Table 1. The Total Error Rate (ER) is computed as the sum of False Rejection (the rejection of a genuine speaker – FR) and False Acceptance (the acceptance of an impostor – FA).

Table 1. Verification Performance (ER)

          22    25    28    31
classic   2.92  3.19  3.60  4.01
P-MPB     2.79  2.93  3.08  3.54
In the first column, "classic" refers to a baseline system adopting the XX ms value (reported in the first row) in the framing process both for training and verification (the way such systems usually work), while P-MPB indicates the use of the proposed approach with the XX ms frame length in the verification phase. The P-MPB approach always outperforms the corresponding baseline system: improvements are between 5% and 14%. Improvements were observed for most of the speakers, while for the others no degradation was observed. The strength of the P-MPB approach lies in its capability to be applied to every speaker without causing degradation for any of them, and in the fact that it avoids additional steps in the real-time verification process.
4 Conclusion
This paper describes a novel approach for Text-Dependent Speaker Verification systems which is able to enhance performance. The approach is based on the observation that different frame lengths for feature extraction in the two phases of training and testing can offer a better match of patterns, thus increasing performance, and that a single model may not be sufficient to fully represent the user. For each speaker, different frame-length models, separately trained, are fused into a new pseudo HMM having a multi parallel branch topology. The new model is able to better represent the speaker and to cope with short- and medium-term variations; ER reductions between 5% and 14% have been observed.
References 1. Boyer, K.W., Govindaraju, V., Ratha, N.K.: Special Issue on Recent Advances in Biometric Systems. IEEE Trans. on System, Man and Cybernetics - Part B 37(5) (2007) 2. Prabhakar, S., Kittler, J., Maltoni, D., O’Gorman, L., Tan, T. (eds.): Special Issue on Biometrics: Progress and Directions. IEEE Trans. on PAMI 29(4) (2007) 3. Jain, A.K., Flynn, P., Ross, A.: Handbook of Biometrics. Springer, Heidelberg (2007) 4. Bigeco, M., Grosso, E., Tistarelli, M.: Person authentication from video of faces: a behavioural and physiological approach using Pseudo Hierarchical Hidden Markov Models. In: Zhang, D., Jain, A.K. (eds.) ICB 2005. LNCS, vol. 3832, pp. 113–120. Springer, Heidelberg (2005) 5. Impedovo, D., Pirlo, G., Refice, M.: Handwritten Signature and Speech: Preliminary Experiments on Multiple Source and Classifiers for Personal Identity Verification. In: Srihari, S.N., Franke, K. (eds.) IWCF 2008. LNCS, vol. 5158, pp. 181–191. Springer, Heidelberg (2008) 6. Campbell, J.P.: Speaker Recognition: A tutorial. Proceedings of IEEE, 1437– 1462 (1997) 7. Reynolds, D.A.: Speaker Identification and Verification using Gaussian Mixture Speaker Models. Speech Communication, 91–108 (1995)
8. Rabiner, L.R.: A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 77(2), 257–286 (1989) 9. Reynolds, D.A.: An overview of Automatic Speaker Recognition Technology. In: IEEE Proc. of International Conference of Speech, Acoustic and Signal Processing, vol. 4, pp. 4072–4075 (2002) 10. Chen, K.: On the Use of Different Speech Representations for Speaker Modeling. IEEE Trans. on System, Man and Cybernetics, Part C: Applications and Reviews 35(3), 301–314 (2005) 11. Impedovo, D., Refice, M.: Speaker Identification by Multi-Frame Generative Models. In: IEEE Proc. of the 4th International Conference on Information Assurance and Security (IAS 2008), September 8-10, pp. 27–32 (2008) 12. Impedovo, D., Refice, M.: Frame Length Selection in Speaker Verification Task. Transaction on Systems 7(10), 1028–1037 (2008) 13. Impedovo, D., Refice, M.: The Influence of Frame Length on Speaker Identification Performance. In: IEEE Proc. of the Fourth International Symposium on Information Assurance and Security, pp. 435–438 (2007) 14. Pelecanos, J., Slomka, S., Sridharan, S.: Enhancing Automatic Speaker Identification using Phoneme Clustering and Frame Based Parameter and Frame Size Selection. In: IEEE Proc. of the Fifth International Symposium on Signal Processing and its Applications, pp. 633–636 (1999) 15. Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B 39(1), 1–38 (1977) 16. Fine, S., Singer, Y., Tishby, N.: The Hierarchical Hidden Markov Model: Analysis and Applications. Machine Learning 32, 41–62 (1998) 17. Wang, W., Brakensiek, A., Kosmala, A., Rigoll, G.: Multi-Branch and Two-Pass HMM Modeling Approaches for Off-Line Cursive Handwriting Recognition. In: IEEE Proc. of the 6th International Conference on Document Analysis and Recognition (ICDAR), pp. 231–235 (2001) 18. Kinnunen, T., Karpov, E., Franti, P.: Real-Time Speaker Identification and Verification. IEEE Trans. on Audio, Speech and Language Processing 14(1), 277–288 (2006)
Artificial Neural Networks in the Disabled Speech Analysis

Izabela Świetlicka1, Wiesława Kuniszyk-Jóźkowiak2, and Elżbieta Smolka2

1 Department of Physics, University of Life Sciences, Akademicka 13, 20-950 Lublin, Poland [email protected]
2 Laboratory of Biocybernetics, Institute of Computer Science, Maria Curie-Sklodowska University, Maria Curie-Sklodowska 1 sqr, 20-031 Lublin, Poland [email protected], [email protected]
Summary. The presented work is a continuation of research concerning automatic detection of disfluency in stuttered speech. So far, the experiments have covered the analysis of disorders consisting of syllable repetitions and blockades before words starting with stop consonants. The present work describes an application of artificial neural networks to the recognition and clustering of prolongations, which are one of the most common disfluencies appearing in the speech of stuttering people. The main aim of the research was to answer the question whether it is possible to create a model built with artificial neural networks that is able to recognize and classify disabled speech. The experiment proceeded in two phases. In the first stage, a Kohonen network was applied. During the second phase, two different networks were used and then evaluated with respect to their ability to classify utterances correctly into two groups, non-fluent and fluent.
1 Introduction
Speech is one of the most effective and developed means of communication. It transfers a vast amount of information, concerning not only the content of the statement but also the emotional state of the speaker, his or her age, intentions and many other factors which seemingly have nothing in common with the statement itself. However, these non-verbal factors strongly influence the opinion about the speaker as well as the evaluation of the speaker's competencies and feelings. The main aim of the speech process is to send and receive messages in the form of language messages. Many a time such a message is interrupted by emotional, physiological or physical factors, becoming partly or completely incomprehensible to the listener. One of the distractions which greatly disrupt the process of conveying information is stuttering. Non-fluent speech is one of the areas where much research is currently being conducted. The knowledge of all of the principles and features connected
with non-fluent signals could be useful when creating artificial speech recognition systems and computer applications for automatic diagnosis of speech disfluency types [1]. It could also make it possible to identify the most suitable therapy and to follow its progress [2]. Artificial recognition and classification of disfluency are considered complicated and complex tasks, but the benefits of creating such a system are obvious; therefore some research in the domain of speech classification and processing has already been undertaken [3, 4, 5]. As previous results indicate [6, 7, 8], methods of recognition and classification based on neural networks could also be applied. Nowadays, neural networks play an important role both in speech [9] and in speaker recognition [10] and are an irreplaceable tool when it comes to distinguishing between very similar signals [2, 6, 11]. Among all the networks, the SOM has become a useful and valuable means in speech recognition [11, 12] because of its ability to represent a multidimensional input vector as a one- or two-dimensional output describing the investigated issues, as well as to detect clusters [13]. Multilayer Perceptron and Radial Basis Function networks are mostly used in different types of classification, including speech [6, 7, 10, 14, 15, 16]. The authors of the following article applied Kohonen (SOM), Multilayer Perceptron (MLP) and Radial Basis Function (RBF) networks to recognize and classify fluent and non-fluent speech utterances.
2 Research Concept
The aim of the presented research was to verify whether and to what extent artificial neural networks can describe, detect and classify fluent and non-fluent speech samples. The constructed model is based on the human sound perception system. Oversimplifying, people receiving sound signals transform them into electric impulses (in the inner ear) that are next transferred to the brain, where signal analysis, leading to feature isolation, occurs. Then, on the basis of the obtained features, the processes of recognition and classification are carried out. Due to the fact that sound perception runs in two phases, two stages of ANN were used. The first of them, the Kohonen network, was expected to model that part of the speech perception process which takes place in the inner ear and in the cochlear nerve. The aim of that network was to reduce the dimensions describing the input signals and to bring out the main signal features. The second network layer (MLP and RBF) was applied to group and classify the previously processed fluent and non-fluent utterances into two clusters. This phase was expected to be an equivalent of the cerebral processes consisting of speech classification and recognition.
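The two-stage idea can be summarized in a hedged data-flow sketch; the `som` and `classifier` objects are placeholders standing for the networks described in the next section (their methods `winner` and `predict` are assumed interfaces, not part of the authors' software):

```python
def two_stage_classification(spectral_frames, som, classifier):
    """Stage 1 ('inner ear'): the Kohonen network maps every 21-element
    spectral frame to the index of its winning neuron, reducing dimensionality.
    Stage 2 ('brain'): the sequence of winner indices over time is passed to an
    MLP or RBF network that returns fluent (0) or non-fluent (1)."""
    winner_sequence = [som.winner(frame) for frame in spectral_frames]
    return classifier.predict(winner_sequence)
```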
3 Experiment
The research material consisted of recordings taken from stuttering people. The disfluency considered here consisted of prolongations.
Fig. 1. Non-fluent utterance oscillogram containing the prolongation
This disfluency type, like repetitions, occurs more often in the speech of a person who stutters than in the general population, and was chosen because of the difficulty of eliminating it during therapy. Fifty-nine 800 ms utterances containing disfluency were selected from the statements of 8 stuttering people. The recordings had been made before therapy as well as during its various stages and included two situations: reading story fragments and describing illustrations (Fig. 2). The patients' age ranged between 10 and 23. The speech of fluent speakers containing the same fragments (Fig. 2) was recorded while they read the same stories and gave descriptions of the same illustrations. Four fluent speakers took part in that experiment, two of whom were female and two male. Their age ranged between 24 and 50 years. The research material was recorded in a sound booth with the use of a SoundBlaster card controlled by the Creative Wave Studio application. The signal was transformed from analogue into digital form with a sampling frequency of 22 050 Hz and a sampling precision of 16 bits. The voice signal could be interpreted correctly with the sampling frequency of 22 050 Hz due to the fact that the upper range of frequencies which can be generated by the articulation system is lower than 10 000 Hz and, according to the Nyquist theorem, the sampling frequency should be at least two times higher than the maximum frequency occurring in the analysed signal. 800 ms fragments were selected from the recorded fluent and non-fluent utterances. Each non-fluent utterance contained a fragment with a disfluency. All samples were analysed with the use of a 512-point FFT with the following frequency and time resolution:

Δf = 22050 Hz / 512 = 43.066 Hz,    (1)

Δt = 1 / 43.066 Hz = 23.22 ms.    (2)
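The resolutions in (1) and (2) follow directly from the sampling rate and the FFT length; a one-line numerical check (illustrative only):

```python
fs, nfft = 22050.0, 512            # sampling frequency [Hz], FFT length
delta_f = fs / nfft                # frequency resolution: ~43.066 Hz
delta_t = 1.0 / delta_f            # time resolution: ~0.02322 s = 23.22 ms
print(delta_f, delta_t * 1e3)      # prints 43.066..., 23.219... (ms)
```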
Fig. 2. Fluent utterance oscillogram - an equivalent of the non-fluent fragment

An A-weighting filter and digital 1/3-octave filters were applied, in accordance with the fact that the human hearing organ works as a set of connected filters. The A-weighting filter makes it possible to transform the speech signal taking into account the specific features of human hearing,
while the examination with digital 1/3-octave filters reflects the physical sense of the frequency analysis, which is carried out on the basilar membrane in the inner ear. In the presented experiment, 21 digital 1/3-octave filters with centre frequencies between 100 and 10,000 Hz were used. As a result of the analysis, 118 samples (59 fluent and 59 non-fluent) were obtained. The parameters of the utterances were similar to the signal which travels from the inner ear to the brain. These samples were used as input for the first artificial neural network. As mentioned above, the first applied network (SOM), working on a self-organizing basis, makes it possible to model the processes which take place in the human brain during speech perception. The Kohonen network was applied in this field due to its ability to create patterns using only input values and to determine the main features of the examined issue on their basis alone. The SOM is also an effective instrument for visualization of multidimensional data and has the capability to convert non-linear relations into simple geometrical connections in a low-dimensional space. It was verified in previous analyses [17, 18, 19, 20] that a network with 25 or more neurons in the output layer can correctly model the syllabic structure of the fluent parts of the samples and expose fragments containing disfluency [17, 18]. Therefore, the first network was built with 25 neurons in the output layer and 21 inputs, corresponding to the 21-element vectors obtained from the spectral analysis. The SOM network was trained for 100 epochs, with a constant learning rate of 0.1 and a neighbourhood radius decreasing from 3 to 0. Application of the SOM network caused a notable reduction of the dimensions describing the examined signals and made it possible to model the syllabic structure of fluent parts of utterances and to identify the fragments containing disfluency with prolongations. All of the utterances, both fluent and non-fluent, were represented by numbers showing the dependence of the winning neurons on time (Fig. 3). The chart obtained as a result of this analysis was used during the second network's training process. The table describing the 118 utterances consisted of the numbers corresponding to the neurons winning at particular time points. The MLP and RBF networks worked as an equivalent of the human nervous system. This part is responsible for identifying a sound and allocating it correctly or, in the case of the lack of an appropriate class, creating a new one. The networks were built with 35 neurons in the input layer, due to receiving 35 time points for each sample, and 1 output neuron.
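The Kohonen stage described above could be sketched from scratch as follows; this is an illustrative toy (numpy only), not the software actually used by the authors, and the 5x5 arrangement of the 25 output neurons is an assumption:

```python
import numpy as np

def train_som(data, grid=(5, 5), n_inputs=21, epochs=100, lr=0.1, radius0=3.0):
    """Toy SOM: 'data' is an array of shape (n_frames, n_inputs)."""
    rng = np.random.default_rng(0)
    weights = rng.random((grid[0], grid[1], n_inputs))
    coords = np.indices(grid).transpose(1, 2, 0).astype(float)   # grid positions
    for epoch in range(epochs):
        radius = radius0 * (1.0 - epoch / float(epochs))          # shrinks 3 -> 0
        for x in data:
            dist = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dist), grid)         # best matching unit
            grid_dist = np.linalg.norm(coords - np.array(bmu, float), axis=2)
            influence = np.exp(-(grid_dist ** 2) / (2.0 * max(radius, 1e-6) ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights

def winner_index(weights, x):
    """Index (0..24) of the winning neuron for one 21-element frame."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=2)))
```

Feeding an 800 ms utterance (about 35 frames) through winner_index then yields the 35-element sequence of winning neurons used as input to the second stage.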
Fig. 3. Non-fluent (a) and fluent (b) utterance representation with corresponding oscillogram
The architecture and learning parameters for each network were determined experimentally, and the best networks among all those tested were chosen. The best MLP had one hidden layer with 19 neurons. The network was taught with the Back Propagation algorithm (BP) for 100 epochs and with the Conjugate Gradient algorithm (CG) for 500 epochs. The learning rate decreased from 0.6 to 0.1 and a momentum of 0.3 was applied. The logistic function was used as the activation function. The error was calculated with a cross-entropy error function, which is particularly suited to classification networks. The RBF network was built with 30 neurons in the hidden layer and was trained using sampling (drawing of random samples) and K-nearest neighbour algorithms. All networks were assessed using the following test precision quantities based on the error matrix: accuracy, sensitivity and specificity [21, 22]. The variation of sensitivity and specificity was represented by ROC curves (Receiver Operating Characteristic), a graphical plot of sensitivity vs. (1 - specificity) for a binary classifier. ROC analysis is related in a direct and natural way to cost analysis of diagnostic decision-making. Sensitivity and specificity are statistical measures of the performance of a binary classification test, while accuracy is the degree of closeness of a measured or calculated quantity to its true value.
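The three evaluation measures can be computed from the binary confusion matrix in a few lines; the sketch below assumes the usual convention that "non-fluent" is the positive class, and the variable names are ours:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity and specificity for binary labels (1 = non-fluent)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / float(tp + tn + fp + fn)
    sensitivity = tp / float(tp + fn)   # true positive rate
    specificity = tn / float(tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity
```

The ROC curve itself is obtained by sweeping the decision threshold on the network's continuous output and plotting sensitivity against (1 - specificity) for each threshold.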
4 Results
The classification correctness for all networks ranged between 88.1% and 94.9% (Tab. 1). The errors made by the MLP during classification in the teaching, testing and verifying groups were close (0.33, 0.41 and 0.35, respectively), which indicates that the network was able to generalize the input data correctly. In the case of the RBF, the errors were also stable, although slightly lower in the teaching and testing groups.
Fig. 4. ROC curve for MLP (continuous line) and RBF (broken line)
The correctness of the networks can also be evaluated by paying special attention to the number of mistakes made by each of them. The MLP incorrectly classified 9 utterances, 3 of which were fluent and 6 of which contained disfluency, which gives 92% categorizing correctness, while the RBF gave 11 erroneous answers (4 fluent and 7 non-fluent). In this case the classification correctness was about 91%. The area below the ROC curve (Fig. 4) amounts to 0.98 and 0.95 for the MLP and RBF respectively. This area is usually used as a measure for comparing classification models. It can be interpreted as the probability that, when we randomly pick one positive and one negative example, the classifier will assign a higher score to the positive example than to the negative one. Additionally, as can be observed in Tab. 2, the values of accuracy, sensitivity and specificity are comparatively high. When sensitivity or specificity values are close to one or reach one, it means that the test recognizes almost all or all positive cases as positive and negative cases as negative. High values of accuracy show that the test identifies all positive and negative cases in the correct way.

Table 1. Quality and teaching error statement for teaching, verifying and testing data

  Network   Teaching   Validation   Testing   Teaching   Validation   Testing
            quality    quality      quality   error      error        error
  MLP         1.00       0.83         0.86      0.66       0.35         0.41
  RBF         1.00       0.80         0.83      0.21       0.39         0.38

Table 2. Evaluation coefficients values for MLP and RBF

  Network   Accuracy   Sensitivity   Specificity
  MLP         0.92        0.90          0.95
  RBF         0.91        0.88          0.93
5 Conclusion
The conducted research has shown that artificial neural networks can be a useful tool in speech analysis, especially of non-fluent speech. The application of the first neural network allowed the dimensions describing the input signals to be reduced and made it possible to represent non-fluent speech by reflecting the syllabic structure of utterances and exposing fragments containing disfluency. The MLP and RBF networks were used to classify utterances into two groups, non-fluent and fluent. With respect to the considered criteria, the networks achieved small errors in all data groups and high accuracy, sensitivity and specificity values. Neural networks are a tool which could support research in the domain of intelligent speech recognition systems. Due to their generalizing, modeling and complex-structure projecting abilities, networks could also help to discover the principles accompanying non-fluent signals.
References 1. Guntupalli, Z., Kalinowski, V.J., Saltuklaroglu, T.: The Need for Self-Report Data in the Assessment of Stuttering Therapy Efficacy: Repetitions and Prolongations of Speech. The Stuttering Syndrome. International Journal of Language and Communication Disorders 41(1), 1–18 (2000) 2. Czy˙zewski, A., Kaczmarek, A., Kostek, B.: Intelligent processing of stuttered speech. Journal of Intelligent Information Systems 21(2), 143–171 (2003) 3. Garfield, S., Elshaw, M., Wermter, S.: Self-organizing networks for classification learning from normal and aphasic speech. In: The 23rd Conference of the Cogntive Science Society, Edinburgh, Scotland (2001) 4. Kuniszyk-J´ o´zkowiak, W.: A comparison of speech envelopes of stutterers and nonstutterers. Journal of Acoustical Society of America 100(2), 1105–1110 (1996) 5. Robb, M., Blomgren, M.: Analysis of F2 transitions in the speech of stutterers and non-stutterers. Journal of Fluency Disorders 22(1), 1–16 (1997) 6. Geetha, Y.V., Pratibha, K., Ashok, P., Ravindra, S.K.: Classification of childhood disfluencies using neural networks. Journal of Fluency Disorders 25, 99– 117 (2000) 7. Nayak, J., Bhat, P.S., Acharya, R., Aithal, U.V.: Classification and analysis of speech abnormalities. ITBM-RBM 26, 319–327 (2005) 8. Ritchings, R.T., McGillion, M., Moore, C.J.: Pathological voice quality assessment using artificial neural networks. Medical Engineering and Physics 24, 561–564 (2002)
9. Chen, W.Y., Chen, S.H., Lin, C.H.J.: A speech recognition method based on the sequential Multi-layer Perceptrons. Neural Networks 9(4), 655–669 (1996) 10. Farrell, K., Mamione, R., Assaleh, K.: Speaker recognition using neural networks and conventional classifiers. IEEE Transaction on Speech and Audio Processing, part 2, 2(1), 194–205 (1994) 11. Leinonen, L., Kangas, J., Torkkola, K., Juvas, A.: Dysphonia detected by pattern recognition of spectral composition. Journal of Speech and Hearing Research 35, 287–295 (1992) 12. Suganthan, P.N.: Pattern classification using multiple hierarchical overlapped self-organizing maps. Pattern Recognition 34, 2173–2179 (2001) 13. Kohonen, T.: Self-Organizing Maps. Springer, Heidelberg (2001) 14. Cosi, P., Frasconi, P., Gori, M., Lastrucci, L., Soda, G.: Competitive radial basis functions training for phone classification. Neurocomputing 34, 117–129 (2000) 15. Hadjitodorov, S., Boyanov, B., Dalakchieva, N.: A two-level classifier for textindependent speaker identification. Speech Communication 21, 209–217 (1997) 16. Sarimveis, H., Doganis, P., Alexandridis, A.: A Classification Technique Based on Radial Basis Function Neural Networks. Advances in Engineering Software 37, 218–221 (2006) 17. Szczurowska, I., Kuniszyk-J´ o´zkowiak, W., Smolka, E.: The application of Kohonen and Multilayer Perceptron networks in the speech nonfluency analysis. Archives of Acoustics 31(4), 205–210 (2006) 18. Szczurowska, I., Kuniszyk-J´ o´zkowiak, W., Smolka, E.: Application of artificial neural networks in speech nonfluency recognition. Polish Journal of Environmental Studies 16(4A), 335–338 (2007) 19. Szczurowska, I., Kuniszyk-J´ o´zkowiak, W., Smolka, E.: Articulation Rate Recognition by Using Artificial Neural Networks. In: Kurzy˜ nski, M., et al. (eds.) Advances in Soft Computing, vol. 45, pp. 771–777. Springer, Heidelberg (2007) ´ 20. Swietlicka, I., Kuniszyk-J´ o´zkowiak, W., Smolka, E.: Detection of Syllable Repetition Using Two-Stage Artificial Neural Networks. Polish Journal of Environmental Studies 17(3B), 462–466 (2008) 21. Kestler, H.A., Schwenker, F.: Classification of high-resolution ECG signals. In: Howlett, R., Jain, L. (eds.) Radial basis function neural networks: theory and applications. Physica-Verlag, Heidelberg (2000) 22. Schwenker, F., Kestler, H.A., Palm, G.: Three learning phases for radial-basisfunction networks. Neural Networks 14, 439–458 (2001)
Using Hierarchical Temporal Memory for Recognition of Signed Polish Words

Tomasz Kapuscinski1 and Marian Wysocki2

1 Rzeszow University of Technology, Department of Computer and Control Engineering [email protected]
2 Rzeszow University of Technology, Department of Computer and Control Engineering [email protected]
Summary. The paper is concerned with automatic vision-based recognition of hand gestures expressing isolated words of Polish Sign Language (PSL). The Hierarchical Temporal Memory (HTM) [3] is applied. This tool replicates the structural and algorithmic properties of the human neocortex. Gestures are spatio-temporal entities; therefore we believe the HTM is able to identify and use the gesture's subunits (counterparts of phonemes) organized in a spatio-temporal hierarchy. The paper discusses the preparation issues of the HTM and presents results of the recognition of 101 words used in everyday life at the doctor's and in the post office.
1 Introduction
Automatic sign language recognition has attracted a lot of interest from researchers in recent years. These works are strongly justified socially because they aim at removing communication barriers between deaf and hearing people and at facilitating the integration of the hearing impaired into modern society. Most works are based on whole-word modeling. The most often used tools are hidden Markov models and artificial neural networks [8]. A real-time, vision-based recognizer of Polish signed expressions has been developed by the authors [4]. This tool exploits the hidden Markov model technique. The recognition accuracy achieved in the system for the selected 101 words and 35 expressions ranged from 90% to 98% depending on the classifier, lighting conditions, signing person etc. The approach based on whole-word modeling becomes problematic when recognition of extended dictionaries of gestures is planned. In such a case adding a new word requires adding and learning a new model. Sign languages have thousands of words. This means that the system would have to consist of thousands of models. Learning such an amount of models would be inconvenient. Moreover, the response time of the system would be very long.
Therefore an alternative solution has been proposed, based on modeling with subunits, which are similar to phonemes in spoken languages. In the subunit-based approach the main effort is related to extracting and modeling the subunits. An enlargement of the vocabulary can be achieved by composing new signs through concatenation of suitable subunit models and by tuning the composite model with only a handful of examples. However, additional knowledge of how to break down signs into subunits is needed. Two approaches are possible: (i) linguistics-oriented, based on research in the domain of sign language linguistics (e.g. the notation system by Stokoe, and the notation system by Liddell and Johnson for ASL) [10]; (ii) visually-oriented, based on a data-driven process in which signs are divided into segments that have no semantic meaning - then similar segments are grouped and labeled as subunits. In this vision-based approach the subunits are called visemes [2]. The first steps toward subunit-based recognition have already been undertaken [1, 2, 10]. A method of visually-oriented subunit (viseme) construction and of modeling them with parallel HMMs for the recognition of signed Polish expressions has been developed by the authors [5, 6]. The results obtained for 101 words and 35 sentences performed by two signers with different proficiencies in PSL were quite promising (recognition rates of about 96% for words and 91% for sentences). However, the data-driven methods of viseme construction have some drawbacks. We are never sure that the obtained word representation is unique over a large dictionary. Moreover, a given word can have several representations, because its performance may vary depending on the signer's emotional state, the word's position in a sequence, etc. With this in mind, a new approach based on the usage of the Hierarchical Temporal Memory (HTM) [3] is proposed in this paper. HTM is a technology that replicates the structural and algorithmic properties of the neocortex. All objects in the world can be built from smaller parts organized in a spatio-temporal hierarchy. HTM learns the part of the world captured by its sensors, exploring its hierarchical structure. Within the hierarchy, representations are shared among different objects. Therefore, the system can easily recognize new objects that are made up of previously learned subunits. We believe that this applies to hand gestures as well. During the learning stage HTM discovers the causes that are behind the sensor readings, so it should be able to identify the visemes automatically. The paper is organized as follows. Section 2 contains a brief overview of HTM. Section 3 gives details of the proposed approach. Results of word recognition are given in Section 4. Section 5 concludes the paper.
2 Hierarchical Temporal Memory Concept
Hierarchical Temporal Memory is a technology that replicates the structural and algorithmic properties of the neocortex. HTM is organized as a tree-shaped hierarchy of nodes. All objects in the world have a structure.
Fig. 1. HTM concept
This structure is hierarchical in both space and time. HTM is also hierarchical in both space and time, and therefore it can efficiently represent the structure of the world. HTM receives the spatio-temporal pattern coming from the senses. Through a learning process it discovers what the causes are and develops internal representations of the causes in the world (Fig. 1). After an HTM has learned what the causes in its world are and how to represent them, it can perform inference. Inference is similar to pattern recognition. Given a novel sensory input stream, the HTM will infer what known causes are likely to be present in the world at that moment. Each node in an HTM implements a common learning and memory function. The basic operation of each node is divided into two steps. The first step is to assign the node's input pattern to one of a set of quantization points (spatial grouping). The node decides how close (spatially) the current input is to each of its quantization points. In the second step, the node looks for common sequences of these quantization points (temporal grouping).
3 Developing HTM Application

3.1 Defining the Problem and Representing Data
The task consisted in recognizing 101 words of Polish Sign Language used in typical situations: at the doctor and in the post office. Gestures have been recorded using the vision system. For detection of the signer’s hands and face we used a method based on a chrominance model of human skin. In our solution the signer introduces himself by presenting his/her open hand to the camera at the beginning of the session. A rectangular hand segment is used to build a skin-color model in the form of a 2D Gaussian distribution in the normalized RGB space. To detect skin-toned regions in a color image, the image is transformed into a gray-tone form using the skin color model (the individual pixel intensity in a new image represents a probability that the pixel belongs to a skin-toned region) and thresholded. The areas of the objects toned in skin color, their centers of gravity and ranges of motion are analyzed to recognize the right hand, the left hand and the face. Comparison of neighboring frames helps to notice whether the hands (the hand and the
face) touch or partially cover each other. In order to ensure correct segmentation, there were some restrictions on the background and the clothing of the signer. The following features are used in this paper: xr, yr - the coordinates of the gravity center of the right hand with respect to the origin in the gravity center of the face; Sr - the area of the right hand; ψr - the orientation of the right hand (i.e. the angle between the maximum axis and the x coordinate axis); α - the direction of movement of the right hand (calculated from the hand's gravity center position in the current and previous frame); and the corresponding parameters for the left hand. These features have been divided into groups (channels) in the following manner: position of the right hand (xr, yr), shape of the right hand (Sr, ψr), movement of the right hand (α), and three similar channels for the left hand. The selection of the features and channels was inspired by the work done by the authors previously [4, 5, 6] and by the results of linguistic research on PSL [9]. The actual values in the individual channels constitute the sensor inputs for the HTM at a given time.
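A rough sketch of the kind of processing described above (a 2D Gaussian skin model in normalized RGB and moment-based blob features computed from a thresholded mask); it illustrates the general idea only, not the authors' implementation, and all names are ours:

```python
import numpy as np

def skin_probability(image_rgb, mean_rg, cov_rg):
    """Per-pixel skin likelihood from a 2D Gaussian over normalized (r, g)."""
    rgb = image_rgb.astype(float) + 1e-6
    rg = (rgb / rgb.sum(axis=2, keepdims=True))[..., :2]      # normalized r, g
    diff = rg - mean_rg
    inv_cov = np.linalg.inv(cov_rg)
    d2 = np.einsum('...i,ij,...j->...', diff, inv_cov, diff)  # squared Mahalanobis distance
    return np.exp(-0.5 * d2)                                  # unnormalized likelihood


def blob_features(mask):
    """Centre of gravity, area and orientation of a binary skin-blob mask."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    orientation = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)   # maximum-axis angle [rad]
    return cx, cy, xs.size, orientation
```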
3.2 Designing and Creating HTM Structure
An HTM gains its power from the fact that its hierarchical structure mirrors the nested hierarchical structure of the world. Therefore, in order to design an HTM suitable for recognition of PSL words, the gestures (in the sequel "word" and "gesture" are equivalent) have been treated as objects having a hierarchical structure, both in space and time (Fig. 2). In the spatial hierarchy, at the bottom we have the channels; then, at the higher level, the channels are grouped into the right and left hand; and finally, at the top level, the two hands constitute the gesture. In the temporal hierarchy two levels have been distinguished: visemes and words. The spatio-temporal hierarchy of the gesture has been mapped onto the HTM topology (Fig. 3). In the sensor layer we have 6 sensors corresponding to the channels for both hands, and the category sensor. The category sensor is connected only during the learning stage and it supplies the information about the word to which the current sensor reading belongs. The temporal hierarchy is mapped by putting in two layers corresponding to visemes and to whole words. Each of these layers consists of two sub-layers performing spatial and temporal grouping, respectively [3, 7].
Fig. 2. The hierarchical structure of the gesture (a) spatial, (b) temporal
Fig. 3. The HTM topology
Above the layer of words, classifier and effector nodes are placed. The effector node writes the results to a file. The spatial hierarchy manifests itself in linking the nodes between the viseme and word layers and between the word layer and the classifier.

3.3 Running HTM to Perform Learning
Learning is done one level at a time. First, the bottom level is trained. The user selects the parameters of the nodes and initiates the learning. When the learning is finished the user judges the level’s output. The obtained groups are checked and if they are not distinguishable the nodes’ parameters are refined and the learning process is repeated. After learning of the first level, this level is switched to the inference mode, the parameters for the second level nodes are set and the learning of the second level is initialized. This procedure is repeated for all levels until the classifier node is reached. During the training the classifier node uses its input from the category sensor to assign the category labels to the results obtained from the previous layer. In our approach, for the spatial nodes the parameter maxDistance has been used. It sets the maximum Euclidean distance at which two input vectors are considered the same during learning. For the temporal nodes the requestedGroupCount parameter has been used. This parameter denotes the number of requested groups. For the temporal nodes the Time-Based Inference algorithm has been used. This algorithm uses the current as well as past inputs.
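The two node-level operations, spatial quantization controlled by maxDistance and temporal grouping into requestedGroupCount groups, can be illustrated with a toy sketch. This is not the NuPIC API, only a schematic re-implementation of the idea in Python; real HTM nodes use considerably more elaborate algorithms:

```python
import numpy as np

class ToySpatialPooler:
    """Assigns each input to the nearest stored quantization point, creating a
    new point when none lies within max_distance (cf. maxDistance)."""
    def __init__(self, max_distance):
        self.max_distance = max_distance
        self.points = []

    def learn(self, x):
        x = np.asarray(x, dtype=float)
        if self.points:
            d = [np.linalg.norm(x - p) for p in self.points]
            best = int(np.argmin(d))
            if d[best] <= self.max_distance:
                return best
        self.points.append(x)
        return len(self.points) - 1


def temporal_groups(index_sequence, requested_group_count):
    """Very rough temporal grouping: quantization points that frequently follow
    each other are merged until requested_group_count groups remain
    (cf. requestedGroupCount)."""
    groups = [{i} for i in sorted(set(index_sequence))]
    while len(groups) > requested_group_count:
        counts = {}
        for a, b in zip(index_sequence, index_sequence[1:]):
            ga = next(k for k, g in enumerate(groups) if a in g)
            gb = next(k for k, g in enumerate(groups) if b in g)
            if ga != gb:
                key = tuple(sorted((ga, gb)))
                counts[key] = counts.get(key, 0) + 1
        if not counts:
            break
        (i, j), _ = max(counts.items(), key=lambda kv: kv[1])
        groups[i] |= groups[j]   # merge the most frequently co-occurring pair
        del groups[j]
    return groups
```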
3.4 Running HTM to Perform Inference
After an HTM has learned what the causes in its world are and how to represent them, it can perform inference. Inference is similar to pattern recognition. Given a novel sensory input stream, the HTM infers what known causes are likely to be present in the world at that moment. The result is a distribution of beliefs across all the learned causes.
4 Experiments
The Numenta Platform for Intelligent Computing (NuPIC) has been used [7]. NuPIC implements a hierarchical temporal memory system. To make the experiments easier, auxiliary scripts dedicated to some specific tasks, such as data organization and network structure creation, have been developed in Matlab. We used a vocabulary of 101 words and a data set W consisting of 40 realizations of each word performed by two signers (20 realizations each). One person was a PSL teacher; the other had learned PSL for the purposes of this research.

4.1 Cross Validation
Four mutually separated subsets Z1, Z2, Z3, Z4 of the set W have been randomly chosen, such that: (1) Z1 + Z2 + Z3 + Z4 = W, and (2) each subset consists of 10 realizations of each word performed by two signers (5 realizations each). Sample results of recognition are given in Table 1. The results are comparable with the results presented in our previous papers [5, 6]. There, however, heuristic definitions of the visually-oriented subunits (visemes) and related parallel hidden Markov models were used. In contrast to that approach, here the subunits are automatically extracted during the learning process. In each of the six channels defined in Section 3.1 the HTM identified three subunits which turned out to appear most frequently, and some which appeared only in a small number of words in the used dataset. An interpretation of the visemes identified by the HTM is presented in Fig. 4. The plots characterize the position of the right hand (xr, yr) during sample realizations of the signed word W1 (send - solid line) and W2 (post office - dashed line).

Table 1. Subunit-based model recognition rates [%]: variant a - training on W - Zi, testing on Zi; variant b - training on Zi, testing on W - Zi

           variant a   variant b
  Z1         94.46       92.90
  Z2         93.76       93.00
  Z3         94.06       93.20
  Z4         93.96       92.81
  mean       94.06       92.98
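A sketch of how such a four-fold split and the two training variants can be organized, assuming the realizations are stored per word and per signer; the helper names are ours and train_and_test is a placeholder for the HTM training/recognition routine:

```python
import random

def make_folds(realizations, n_folds=4, per_signer_per_fold=5, seed=0):
    """realizations[word][signer] is a list of 20 recorded samples. Returns
    n_folds disjoint subsets Z1..Z4, each holding 5 realizations of every word
    from every signer (10 per word in total)."""
    rng = random.Random(seed)
    folds = [[] for _ in range(n_folds)]
    for word, by_signer in realizations.items():
        for signer, samples in by_signer.items():
            shuffled = samples[:]
            rng.shuffle(shuffled)
            for k in range(n_folds):
                chunk = shuffled[k * per_signer_per_fold:(k + 1) * per_signer_per_fold]
                folds[k].extend((word, s) for s in chunk)
    return folds


def evaluate(folds, train_and_test):
    """Variant a: train on W - Zi, test on Zi; variant b: train on Zi, test on
    W - Zi. 'train_and_test(train_set, test_set)' returns a recognition rate."""
    results = []
    for i, zi in enumerate(folds):
        rest = [x for j, f in enumerate(folds) if j != i for x in f]
        results.append((train_and_test(rest, zi), train_and_test(zi, rest)))
    return results
```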
Fig. 4. Interpretation of visemes identified by HTM
Visemes v1, v2, v3 have been detected by the HTM in W1, and visemes v4, v2, v3 have been detected in W2. The boundaries of the line fragments representing the visemes can be identified by changes of brightness. The fragments of the two plots corresponding to v2 (v3) are quite similar, while the fragments for v1 and v4 differ considerably. The HTM explores the hierarchical structure of gestures and tries to build models having some parts in common. Therefore, a word made up from known subunits, learned while training the HTM for other words, can be easily recognized.
5 Conclusions and Future Work
Hierarchical Temporal Memory has been used for recognition of signed Polish words. We have chosen this tool because it replicates the structural and algorithmic properties of the human neocortex; therefore it should be able to identify the gesture's subunits in a way similar to humans. Moreover, the hierarchical structure of the HTM allows representations to be shared. Different objects at the higher level can be composed from the same lower-level parts. This should lead to better generalization properties and storage efficiency. It is possible that the presented HTM application, applied to a large vocabulary, will not suffer from problems of scale. The storage efficiency manifests itself in the proposed memory topology, which is much simpler than the complicated nets of connected models used in our previous solutions. It is possible that the same or a slightly modified memory will be able to recognize expressions as well. To our knowledge this is the first application of HTM to sign language recognition. The obtained results are quite promising. However, further refining of the memory structure and parameters is needed. Future work may also include testing the recognition of words for which only a small number of examples have been shown during learning (scalability and generalization properties) and trying to recognize expressions composed from sequences of signed words.
References 1. Bauer, B., Kraiss, K.F.: Video-Based Sign Recognition Using Self-Organizing Subunits. In: Proc. Int. Conf. Pattern Recognition, vol. 2, pp. 434–437 (2002) 2. Bowden, D., Windridge, D., Kadir, T., Zisserman, A., Brady, M.: A Linguistic Feature Vector for the Visual Interpretation of Sign Language. In: Proc. 8th Eur. Conf. Comput. Vis., pp. 391–401. Springer, New York (2004) 3. Hawkins, J., Blakeslee, S.: On Intelligence. Times Books, New York (2004) 4. Kapuscinski, T., Wysocki, M.: Automatic Recognition of Signed Polish Expressions. Archives of Control Sciences 15(3) (LI), 251–259 (2006) 5. Kapuscinski, T., Wysocki, M.: Recognition of signed Polish words using visually-oriented subunits. In: Proc. of the 3rd Language and Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, Poznan, pp. 202–206 (2007) 6. Kapuscinski, T., Wysocki, M.: Automatic Recognition of Signed Polish Expressions Using Visually Oriented Subunits. In: Rutkowski, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J. (eds.) Computational Intelligence: Methods and Applications, pp. 267–278. AOW EXIT, Warszawa (2008) 7. Numenta Platform for Intelligent Computing (NuPIC), Numenta Inc., http://www.numenta.com 8. Ong, S.C.W., Ranganath, S.: Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning. IEEE Trans. PAMI 27, 873–891 (2005) 9. Szczepankowski, B.: Sign language in school. WSiP, Warszawa (1988) (in Polish) 10. Vogler, C., Metaxas, D.: A Framework for Recognizing the Simultaneous Aspects of American Sign Language. Computer Vision and Image Understanding, 358–384 (2001)
Part IV
Medical Applications
Strategies of Software Adaptation in Home Care Systems

Piotr Augustyniak

AGH-University of Science and Technology, 30 Mickiewicza Ave., 30-059 Krakow, Poland [email protected]
Summary. In this paper a control theory approach to an auto-adaptive telemedical ECG-based surveillance system is presented. The control is implemented as dynamic linking of alternative procedures and aims at optimization of diagnostic result quality. It faces several limitations in a discrete, nonlinear environment, where results cannot be reliably predicted. The paper also presents tests of the prototype implementation showing the feasibility of the idea as well as reasonable data convergence (80.7%) and response time (17.1 s).
1 Introduction
Two trends are typical of the current approach to medical software. The first, standardization, aims at using general-purpose tools and formats, which facilitates data exchange and interoperability but limits the adequacy of the tools in particular cases [4], [5]. The second, customization, moves towards software-based and, less frequently, hardware-based adaptation of the equipment functionality and of the algorithms in use to the recognized medical disease. Following the latter paradigm, we recently proposed a remotely reconfigurable system for seamless cardiac surveillance [3]. This system combines the advantages and eliminates the drawbacks of two classic approaches: the data reliability is almost as high as in a central interpretation-based system, while the load of the transmission channel, and the resulting operation cost, is almost as low as in a remote interpretation-based system. Modification of the interpretive software is performed in an unlimited-distance digital wireless feedback loop, and consequently all rules of control theory are applicable here. Such an approach is presented in this paper with particular consideration of the non-linearity and discrete nature of agile software composed automatically on demand.
2 Components of the Optimization Loopback

2.1 General Overview
Following a very typical scenario, the physician, with his experience and knowledge about the particular patient, decides what kind of data is necessary for
making the diagnosis precise and accurate, and selects the diagnostic equipment according to necessity and availability. Certain initial patient data (e.g. sex, age or medication) determine the interpretive algorithms or set the normal-abnormal borderline values appropriately for each examined individual. The auto-adaptive diagnostic process was designed and prototyped to perform by analogy to the scenario above. The remote ECG interpretation is optimal in the sense of the best available reliability of the diagnostic outcome. The optimization loopback (fig. 1) includes: remote interpretation procedures, a quality estimation routine and an adaptation manager. The error function represents the differences between remotely calculated diagnostic parameters and their reference values occasionally computed by the central server from the same strip of raw signal. The adaptation manager sends a multidimensional modification vector which is applied by the remote operating system to improve the ECG interpretation process. As a result of a few successive iterations, the adapted remote process issues diagnostic parameters converging to their references, and consequently the error value achieves its temporal minimum. The optimized signal interpretation process may suddenly become suboptimal as a result of two factors external to the loopback:

• the patient status may change, resulting in modification of the relevance ranking of diagnostic parameter errors;
• the availability of resources (battery, memory, connection etc.) may change, resulting in worse quality of the issued diagnostic parameters.

If the value of the error function returned by the quality estimation routine exceeds a specified threshold, information about the non-compliant data (the error vector) is sent to the adaptation manager (fig. 1).
occasional
adaptation manager
interpretation library
adaptation raw ECG
remote software
acceptance permanent
occasional
reference software
patient health record
Fig. 1. General scheme of the auto-adaptive system for ubiquitous cardiology
2.2 Executable Code as Control Argument
The adaptation manager includes the modification vector generator and is an expert system for the management of the interpretation subroutines. It works with a knowledge base consisting of task-oriented libraries corresponding to all replaceable blocks of the interpretive software (fig. 2a). Within the libraries, the subroutines are designed for the same computational purpose, but with different prerequisites regarding resource requirements and result quality. They also have standardized gateways (fig. 2b) enabling replacement between functionally identical subroutines belonging to the same library in the course of the interpretive software optimization process. In the generator's expert system each subroutine is described by attributes of quality, resource requirements and external dependency attributes specifying relations with other elements in the signal interpretation tree. Additionally, the interpretation subroutines have unified modification interfaces providing external access to a few parameters used in calculations as factors or thresholds and considered as constants (read only) within the subroutine. The diagnostic data error and the resource availability are the principal arguments for the modification vector generator to propose variants of a possible solution for the optimization of the remote interpretive software. At the beginning, an interpretive procedure is identified in the inter-procedure dependency tree as the most probable common origin of all erroneous outcomes. The procedure influencing the maximum number of non-compliant diagnostic data is the first target of software modification. In the case when only slight changes are required, the modification interface (fig. 2b) is used to adjust calculation parameters in the working subroutine (software update). Otherwise an alternative subroutine is selected in the knowledge base and, thanks to identical gateways (fig. 2b), replaces the previous one in the interpretive software. In the case when the new subroutine requires unavailable resources, the diagnostic parameter relevance ranking is scanned in order to find a procedure yielding a medically irrelevant result. Such a procedure, disregarding the possibly high accuracy of its result, is then removed or replaced by its simplest version, making more resources available for the new subroutine. In the case when no irrelevant result can be found, the generator modifies calculation parameters, since this can be done without extra resource requirements. The generator's output is restricted to a set of calculation parameters with a limited range of variability or to a subroutine selected as more appropriate from the knowledge space. In any of these cases, the control over the interpreting software is discrete and nonlinear. The effect of a software modification can be only roughly predicted.
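The decision logic described above can be summarized in a hedged sketch; the names and data structures are ours, and the real expert system additionally exploits the inter-procedure dependency tree and the relevance ranking in more detail than shown here:

```python
def plan_modification(error_report, libraries, free_cpu, free_mem, slight_error=0.1):
    """error_report: {procedure_name: error_severity};
    libraries: {procedure_name: list of candidate versions, each a dict with
    'cpu', 'mem' and 'quality' fields}. Returns a rough modification proposal."""
    # 1. Pick the procedure responsible for most of the non-compliant outcomes.
    target = max(error_report, key=error_report.get)
    severity = error_report[target]
    # 2. Slight deviation: adjust calculation parameters through the
    #    modification interface instead of swapping code (software update).
    if severity < slight_error:
        return {'action': 'update_parameters', 'procedure': target}
    # 3. Otherwise choose the best-quality alternative version that still fits
    #    into the currently available resources (software replacement).
    candidates = [v for v in libraries[target]
                  if v['cpu'] <= free_cpu and v['mem'] <= free_mem]
    if candidates:
        best = max(candidates, key=lambda v: v['quality'])
        return {'action': 'replace_subroutine', 'procedure': target, 'version': best}
    # 4. No version fits: fall back to parameter adjustment, which requires no
    #    extra resources (freeing resources by removing medically irrelevant
    #    procedures is omitted here for brevity).
    return {'action': 'update_parameters', 'procedure': target}
```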
2.3 Interpreting Software as Control Object
The interpreting software is designed to run on the wearable patient-side recorder, thus strict rules apply to computational complexity and the management of resources. This software has a basic knowledge layer (fig. 3) and elective interpretive procedures linked dynamically to their communication gateways.
Fig. 2. a) Task-oriented libraries of interpretation subroutines; b) internal structure of replaceable interpretive subroutine
If a particular subroutine is linked, the procedure is enabled and returns results to the basic layer. Otherwise, the procedure is passed over and the result is absent from the report. In the majority of cases, several procedures are mutually dependent and, with the use of the external dependency attributes (fig. 2b), the modification vector generator loads and activates all such components simultaneously. The unlinking or replacement of a subroutine is restricted as long as the processing control (e.g. stack pointers etc.) remains within its executable code. In such cases the software management is performed immediately after the control returns to the calling subroutine. Measuring the quality of the automatic interpretation through diagnostic parameters requires the processing of the same section of the record to be performed by subsequent versions of the agile software. Therefore the ECG has to be buffered in the remote recorder for a time interval sufficient for completion of the assumed number of iterations of interpretive software modification. The raw signal buffer is thus an important element of the recorder, and allows for seamless acquisition of the new ECG while the software adaptation is performed. Fortunately, the automatic interpretation is sufficiently faster than the data acquisition (up to 5 times real time), so the interpreter can securely keep up with the incoming signal, even if it spends a considerable amount of time on adaptation for a relatively short segment of the record. Nevertheless, buffering the incoming signal for delayed processing affects the recorder response time. Its value may significantly exceed the typical value (2 seconds) [6], which may expose the patient to danger when the adaptation occurs in the presence of life-threatening cardiac events.
2.4 Diagnostic Parameters Quality as Control Goal
The purpose of the auto-adaptive interpretation is to provide the most adequate diagnostic parameters of the best possible quality in the given patient status. A first-hand approach involves human assistance for the assessment of data quality and for the specification of requests for software adaptation.
Fig. 3. Methods of control of remote software performance
Although computerized interpretation of the ECG is not able to resolve all medical cases as reliably as the expert, aiming for a practical implementation we had to restrain the necessity of human intervention. Since resource availability is the principal limiting factor for the performance of home-care wearable recorders, we assume that the complementary interpretation performed occasionally by the server in an unconstrained environment yields diagnostic parameters which, although not absolutely true, are accurate enough to play the role of a reference. For the purpose of validation, a strip of the raw ECG record accompanies every 20th or 30th diagnostic report. Comparing the results of calculations in the remote restricted environment with their unlimited counterparts yielded by the server is the first approach to fully automated adaptivity of the system. This validation method, however, is expected to provide two results:

• quantitative assessment of the severity of the remote diagnosis error,
• qualitative description specifying the area of possible improvement.
Although particular diagnostic parameters are not independent in the statistical sense, we defined the global estimate of the diagnostic error as the absolute value of the errors in the multidimensional space of parameters. To reliably simulate the doctor's assessment, the relevance ranking of diagnostic parameter errors in the context of the patient status was applied to modulate the contribution of particular error values. This reflects the various expectations the expert has of further diagnostics depending on what he or she already knows about the patient. The hierarchy of diagnostic parameters in the context of the most frequent diseases was revealed experimentally and described in [1]. This modulation is the crucial procedure of the auto-adaptive system, since it causes relocation of remote resources in order to calculate the most relevant diagnostic parameters with the best accuracy; accordingly to the resource availability, the marginal parameters may show lower quality.

2.5 Convergence of the Loopback Control
As a general estimate of the convergence quality, we proposed the value C, being a weighted sum of the relative errors of the 12 most frequently used diagnostic
parameters. Weighting coefficients are calculated as normalized relevance ranking results, calculated for diagnostic parameters in 17 most common patient diagnoses [3].

C = Σ_{i=1}^{12} Δp_i · w_i , where Σ_{i=1}^{12} w_i = 1    (1)
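Computed directly from its definition, C is a weighted sum of relative parameter errors with the weights normalized to sum to one; a sketch (variable names are ours, and Δp_i is assumed to denote the relative error of the i-th parameter):

```python
def convergence_estimate(remote, reference, ranking):
    """C = sum_i w_i * |p_i_remote - p_i_ref| / |p_i_ref|, where the weights
    w_i are the relevance-ranking scores normalized so that they sum to 1."""
    total = float(sum(ranking))
    weights = [r / total for r in ranking]
    rel_errors = [abs(p - q) / abs(q) for p, q in zip(remote, reference)]
    return sum(w * e for w, e in zip(weights, rel_errors))
```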
The convergence represents the correctness of the management procedure's decisions about the software components of the interpretation processing chain. Taking an analogy from control theory, the software adaptation plays the role of feedback correcting the diagnoses made automatically. If the software modification decisions are correct, the outcome altered by the interpreting software modification approaches the reference value, the modification request signal dies out as a consequence of the decreasing error, and the system is stable. Incorrect decisions lead to growth of the diagnostic outcome error and imply an even stronger request for modification. The outcome value may then stabilize on an incorrect value or swing across the control range in response to subsequent trials. In such a case the system is unstable and the only practical solution is to rely on server-side calculations as long as the interpretation task is too difficult to be performed by the patient-side recorder.
3 Results
For testing purposes, the remote recorder has been implemented on a development kit of the PXA270 microprocessor with a widely updated XScale core, which is Marvell's implementation of the fifth generation of the ARM architecture [7]. It provides an excellent MIPS/mW ratio of 4.625 at 150 MHz and is currently used in a series of handheld computers. The agile software of the remote recorder uses 9 adaptive interpreting procedures, for which the modification vector generator selects one of 2 to 5 specialized subroutines from the knowledge base. Table 1 presents these subroutines with the memory and calculation requirements of their versions. The relevance ranking of diagnostic parameter errors was studied and presented in detail in [1]. That survey included 1730 ECG analysis cases and made it possible to pursue the cardiologists' preferences in the 12 most frequently observed diseases (normal sinus rhythm, sinus tachycardia, sinus bradycardia, probable AV block, ventricular escape beats, atrial fibrillation, AV conduction defect, myocardial infarction, atrial enlargements, ventricular hypertrophies, left and right bundle branch block). The observation count for these pathologies ranged from 16 to 323 cases. The rules of software adaptation for the telemedical system were first modeled in order to study the influence of medical and technical dependencies on the diagnostic result quality [2]. For the simulation of rapid patient status changes, ECG test signals representing various pathologies and transients were artificially combined from custom-recorded signals.
Table 1. Inventory of interpreting procedures with respective CPU/memory requirements [%]/[kB]

  procedure name                      version number
                                      1        2        3        4        5
  heartbeat detector                 7/25     8/32    10/38    15/42    16/48
  heartbeat classifier              15/18    17/26    20/35
  wave delimitation procedure       35/22    65/26
  ST-segment assessment             10/10    12/17    13/25
  arrhythmia detector                5/7      8/10    13/12
  heart rate variability analyzer   30/25    51/38    68/44
  electrical axes calculator         7/8     27/21    19/15
  rhythm identification procedure    5/12     6/18     8/26
  QT-segment assessment             13/20    17/31    33/35
A total of 2751 one-hour 12-lead ECG records were processed off-line in the prototype system. For 857 records (31.2%) software adaptation was required; a further 86 records (3.1%) were found too complicated and were interpreted by the server software. Among the software adaptation attempts, 768 (89.6%) were correct, while the remaining 10.4% failed due to incorrect estimation of available resources. Overestimation of resources, resulting in an operating system crash and thus monitoring discontinuity, occurred in 27 (1%) cases. A single iteration was sufficient to modify the remote interpretation enough to satisfy the data consistency requirement in 63.1% of cases. In 19.3% of cases, the results were still not converging to the reference values after four iterations. The adaptation delay was measured using a real wireless GPRS connection. The longest value was 6.0 s in the case of a single iteration and 17.1 s for four iterations.
4 Discussion

Strategies of automatic adaptation of ECG interpretive software in a distributed surveillance system were presented from the perspective of control theory. The patient device was built in a test environment designed for prototyping mobile applications. The values of the basic parameters, data convergence (80.7%) and response time (17.1 s), justify the hope for future clinical application of the system. The main limitations of the auto-adaptive approach are the discrete control domain (some procedures existed in only two versions) and the lack of result predictability. This could be solved by the application of an artificial intelligence-based system in the central server. The analysis of modification results (e.g. data convergence) yields indications for optimal system behavior in the presence of similar cases. The auto-adaptive ECG-based surveillance system is a promising tool for simulating the continuous presence of a medical expert with a patient in motion.
Acknowledgment

Scientific work supported by the AGH University of Science and Technology grant No. 10.10.120.783.
References

1. Augustyniak, P.: How a Human Ranks the ECG Diagnostic Parameters: The Pursuit of Experts Preferences Based on a Hidden Poll. Proc. Computers in Cardiology 35, 449–452 (2008)
2. Augustyniak, P.: Modeling the adaptive telemedical system with continuous data-dependent quality control. Polish Journal of Environmental Studies 17(2A), 29–32 (2008)
3. Augustyniak, P.: Diagnostic quality-derived patient-oriented optimization of ECG interpretation. In: Pietka, E., Kawa, J. (eds.) Information technologies in biomedicine, pp. 243–250. Springer, Heidelberg (2008)
4. Chiarugi, F., et al.: Real-time Cardiac Monitoring over a Regional Health Network: Preliminary Results from Initial Field Testing. Computers in Cardiology 29, 347–350 (2002)
5. Gouaux, F., et al.: Ambient Intelligence and Pervasive Systems for the Monitoring of Citizens at Cardiac Risk: New Solutions from the EPI-MEDICS Project. Computers in Cardiology 29, 289–292 (2002)
6. IEC 60601-2-51. Medical electrical equipment: Particular requirements for the safety, including essential performance, of ambulatory electrocardiographic systems. First edition 2003-02, International Electrotechnical Commission, Geneva (2003)
7. (2007), http://www.toradex.com/e/Factsheet Colibri Intel Marvell XScale PXA Computer Modules.php (visited on March 31, 2008)
Database Supported Fine Needle Biopsy Material Diagnosis Routine

Maciej Hrebień and Józef Korbicz

Institute of Control and Computation Engineering, University of Zielona Góra, ul. Licealna 9, 65-417 Zielona Góra, Poland
{m.hrebien, j.korbicz}@issi.uz.zgora.pl
Summary. This paper describes a cytological image segmentation and diagnosis method. The analysis includes an expert-database-supported Hough transform for irregular structures, image pre-processing and pre-segmentation, nuclei feature extraction and a final diagnosis stage. One can also find here experimental results collected on a hand-prepared benchmark database that show the quality of the proposed method for typical and non-typical cases.
1 Introduction

What can be easily observed, mainly throughout the last decade, is a dynamic growth in the number of research works conducted in the area of breast cancer diagnosis. Many university centers and commercial institutions [7] are focused on this issue because breast cancer is becoming the most common form of cancer among today's female population. The attention covers not only curing the external effects of the disease [2, 15] but also its fast detection at an early stage. Thus, the construction of a cancer diagnosis computer system supporting a human expert has become a challenging task. Many present-day camera-based automatic breast cancer diagnosis systems have to face the problem of separating cells and their nuclei from the rest of the image content [6, 8, 11, 14, 18], because the nucleus of the cell is the place where breast cancer malignancy can be observed. The main difficulty of this process is due to the incompleteness and uncertainty of the information contained in the image, which is caused by: imperfections of the data acquisition process in the form of noise and chromatic distortion, deformity of the cytological material caused by its preparation, the nature of the image acquisition (3D to 2D transformation), the method of scene illumination, which affects the image's luminance and sharpness, and a low-cost CCD sensor whose quality and resolution capabilities are in many cases rather low. Since many present-day cytological projects assume full (or close to full) automation and real-time operation with a high degree of efficacy, a method which meets the above-mentioned requirements and restrictions has to be constructed. Thus, a system supporting a human expert in cancer diagnosis
which is additionally based on the expert knowledge collected in a prepared database would be a very useful tool. Such a database supported system could not only speed up but also unburden a human expert when a typical case is analyzed. In this paper a diagnosis supporting algorithm which takes into consideration the knowledge collected in a hand-prepared expert database is presented. The description includes the operations performed on a Fine Needle Biopsy cytological image which are needed to filter, localize, match and diagnose nuclei in the image. Experimental results collected on a hand-prepared benchmark database present the quality and efficacy of the proposed method.
2 Problem Formulation

Mathematical formulation of the segmentation and diagnosis process is very difficult because it is a poorly conditioned problem and in many situations the segmentation process is domain specific. Thus, we give here only an informal definition of the problem we have to face. What we have on input is cytological material obtained using the Fine Needle Biopsy (FNB) technique and imaged with a Sony CCD Iris camera mounted atop an Axiophot microscope. The material comes from female patients of Zielona Góra's Onkomed medical center [9]. What is expected on output is an expert-database-supported segmentation mask and a preliminary diagnosis of the analyzed case. Additionally, the method should be insensitive to the colors of the contrasting pigments used for preparation of the cytological material, because they may change in the future. The texture of the analyzed objects should also be considered, because it can carry information about case malignancy. Segmented nuclei should be separated according to a one-pixel separation rule or labeled according to their type, based on the type of the best database match.
3 Image Pre-processing and Pre-segmentation

An image being the subject of the diagnosis is at this stage passed through a group of well-known pre-processing algorithms that prepare it for the final segmentation stage. These include: an image enhancement technique addressing low contrast problems, namely the cumulated sum approach [13]; deinterlacing by a group of low-pass filters [12], since a CCD camera is used; and some gradient estimations. The pre-segmentation part searches for circles in a given feature space (the estimated gradient) using the classic Hough transform [1] to create a fast nuclei pre-segmentation mask which approximately defines where the objects to be segmented are and where the background is (Fig. 1a–c). These temporary results are then processed by a histogram thresholding algorithm [16] moved to the local level due to bad illumination problems during data acquisition.
Fig. 1. Exemplary results of the pre-segmentation stage: (a-c) pre-segmentation masks, (d-f) detailed pre-segmentation results
The histogram thresholding stage separates the nuclei from the rest of the image content, that is, the plasm (if present) and the background (see an example in Fig. 1d–f). Before the final segmentation stage, the result obtained in the more detailed pre-segmentation stage is processed by a classic watershed algorithm [17, 5]. This stage separates the nuclei using the one-pixel separation rule and also reduces the amount of memory needed by the Hough transform used in the final segmentation stage, namely its accumulator, which can be split into local regions and assigned to each pre-segmented nucleus. At this stage we also model the cytological image as a 2.5D convex terrain where hills correspond to the nuclei and valleys to the background. The terrain is modeled as the color distance from the currently analyzed pixel to the mean background color [10]. An exemplary result of the 2.5D nucleus mesh is given in Fig. 2b.
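A possible reading of the 2.5D terrain construction, assuming a plain Euclidean colour distance to the mean background colour and a linear rescaling to [0, 255]; the original may use a different colour metric, and the background mask is assumed to come from the pre-segmentation step:

import numpy as np

def terrain_from_color_distance(rgb_image, background_mask):
    # Height of each pixel = colour distance to the mean background colour,
    # so nuclei form hills and the background forms valleys.
    img = rgb_image.astype(float)                      # H x W x 3
    mean_bg = img[background_mask].mean(axis=0)        # mean background colour
    terrain = np.linalg.norm(img - mean_bg, axis=2)    # per-pixel colour distance
    # scale to [0, 255], as required by the fitness error computation later on
    rng = terrain.max() - terrain.min()
    return 255.0 * (terrain - terrain.min()) / (rng + 1e-9)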
4 Final Segmentation

The stage of final template matching and 2.5D segmentation of the regions determined in the previous steps is based, in the proposed approach, on an adapted version of the Hough transform. The irregular structures described by a mesh of 3D points have to be detected with the support of hand-prepared templates stored beforehand in the expert database. Also, the geometric transform parameters which maximize the level of template fitness to the region being currently analyzed have to be found (see the algorithm in Fig. 3). Since the mesh describing a nucleus is three-dimensional (x, y, z), where z corresponds to the object's texture, and additionally the analyzed object can be rotated, the Hough transform has to work with a four-dimensional accumulator (Δx, Δy, Δz, Δα).
Fig. 2. Illustration of the longest base diagonal and its rotation angle (a) and modeled 3D nucleus mesh (b)
The last dimension, the rotation, can be eliminated if we consider the slope of the longest base diagonal of the currently analyzed region of segmentation and of a database template (Fig. 2a). Since a complete search of the template database for each region of segmentation is practically impossible due to time costs, a group of features is calculated for each database template and each region of segmentation which allows quick discrimination and grouping of nuclei with similar shapes. In the proposed approach, Fourier descriptors are used as the features [3] because they approximate the shape of an object quite well, they include the scale factor, and the phase shift can easily be eliminated. They also make it possible to speed up the database search process, because the templates can be sorted depending on their level of similarity to the analyzed region of segmentation. A list of match candidates is thus assigned to each region, from the most to the least similar in the sense of the calculated features. The error of a template's fitness to a given region depends on its base shape and texture. In the proposed approach the fitness error is calculated for all Tij template points (including those which do not have the (x, y) coordinates in common with the Ri region):

ERT = Σ |Dmn − T+mn|        (1)

m  = xT + Δx+               (2)
n  = yT + Δy+               (3)
T+ = Tij + Δz+              (4)

for all Tij's points, where D is the terrain modeled in the pre-segmentation stage, scaled to [0 . . . 255]; the (m, n) coordinates are equal to Tij's (x, y) coordinates shifted by the (Δx+, Δy+) pair detected by the Hough transform; and T+ stands for the texture (z) shift. The selection of the better matched template is based on the ERT error and its relation to the template's volume. The Vij volume was introduced because there can be situations where an object has a smaller fitness error but its volume is so small that the region of segmentation is covered only on a small area.
For each region Ri:
    Tij ← get the next template from Ri's list
    rotate Tij by the angle Δα = dRi − dTij
    ∀i ∀j ∀k  A(i, j, k) ← 0              // clear the local accumulator
    ∀p ∈ Ri:                              // Ri's mesh points
        ∀q ∈ Tij:                         // Tij's mesh points
            Δx = px − qx,  Δy = py − qy,  Δz = pz − qz
            align Δx, Δy, Δz to the nearest cell
            A(Δx, Δy, Δz) += 1            // increment voting
        end
    end
    // the best parameters of the geometric transform
    Δx+ij, Δy+ij, Δz+ij ← arg max(A)
end

Fig. 3. The Hough transform adopted for detection of irregular structures (their best shift parameters in our case)
Thus, the smaller the ERT/Vij ratio, the higher the level of template fitness. Additionally, a 10% overlay strategy is used, which means that if a template's base lies too far outside the region of segmentation, the template is omitted and no exchanges are made. Exemplary results of the Hough transform based segmentation are given in Fig. 4.
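The fitness error of Eqs. (1)-(4) and the ERT/V selection rule could be sketched as follows; the packaging of a template as an array of (x, y, z) points plus a volume, and the clipping of coordinates that fall outside the image, are assumptions of this sketch, and the terrain D is the [0, 255]-scaled image from the pre-segmentation stage:

import numpy as np

def template_fitness(terrain, template_points, dx, dy, dz):
    # Eq. (1)-(4): sum of absolute differences between the scaled terrain D and
    # the shifted template T+ over all template points.
    xs = template_points[:, 0].astype(int) + dx        # m  = x_T + dx+
    ys = template_points[:, 1].astype(int) + dy        # n  = y_T + dy+
    zs = template_points[:, 2].astype(float) + dz      # T+ = T_ij + dz+
    xs = np.clip(xs, 0, terrain.shape[0] - 1)          # keep out-of-image points usable
    ys = np.clip(ys, 0, terrain.shape[1] - 1)
    return float(np.abs(terrain[xs, ys] - zs).sum())   # E_RT

def better_template(candidates, terrain):
    # Each candidate is a (points, volume, dx, dy, dz) tuple -- a hypothetical
    # packaging of a database template and its Hough-detected shifts; the best
    # candidate minimizes the E_RT / V ratio described above.
    return min(candidates,
               key=lambda c: template_fitness(terrain, c[0], c[2], c[3], c[4]) / c[1])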
Fig. 4. Exemplary results of the Hough transform based segmentation after 500 iterations: (a) non-malicious, (b) malicious, (c) adenoma-fibroma
5 Classification

The final decision concerning the malignancy of the analyzed case can be made by the proposed system using quantitative analysis based on the types of nuclei in the image, or with the support of an automatic classifier located at the end of the system. In the proposed solution a combination of three binary classifiers is used, which adaptively updates its decision in every iteration of the algorithm: the decision is added to the global decision vector, which defines the probability of each case (non-malicious, adenoma-fibroma, malicious). Each binary classifier is based on the k-nearest neighbors (kNN) method [4] with the mean value of the absolute Fourier descriptors for the entire image as the feature vector. At the end, the majority rule is applied to obtain the final decision.
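A minimal sketch of the classifier combination described above, showing a single iteration: one one-vs-rest kNN classifier per class votes into a global decision vector and the majority class is returned. The class encoding (0 = non-malicious, 1 = adenoma-fibroma, 2 = malicious) and the plain Euclidean kNN are assumptions of this sketch:

import numpy as np

def knn_predict(train_X, train_y, x, k=5):
    # plain k-nearest-neighbours vote with Euclidean distance
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

def diagnose(train_X, train_labels, features, k=5):
    # one binary (class vs. the rest) kNN classifier per class; each adds its
    # decision to the global decision vector, and the majority class wins
    votes = np.zeros(3)
    for cls in range(3):
        binary_y = (train_labels == cls).astype(int)
        votes[cls] += knn_predict(train_X, binary_y, features, k)
    return int(votes.argmax())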
6 Experimental Results

To conduct the experiments, a 538-element benchmark database was first hand-prepared using expert knowledge. The database consists of templates of three different types: non-malicious (154 cases), adenoma-fibroma (160 cases) and malicious (224 cases). During the experiments about 65.24% of the cases were found to be typical, which means that they can be easily detected (using, for example, the nearest neighbor rule) based on the other cases collected in the knowledge database. The experiments were performed using the leave-one-out cross-validation technique (the case being currently diagnosed was temporarily removed from the template database) and 99 randomly selected cases were used (33 per disease case). The algorithm was repeated 500 times for each analyzed image, which means that in each case less than 1% of the template database content was searched and used (500 templates per region out of 69242 objects in the database). The kNN classification quality equals 60.61% for all cases and 75.81% for typical cases. The quantitative analysis classification equals 76.77% for all cases and 82.26% for typical cases. The quantitative analysis results are, in the authors' opinion, at a good level if we recall the fact that the experiments were performed on a benchmark database where, next to easy cases, there is a large number of cases which are very difficult to diagnose. The weaker kNN results are explained by the fact that the database is not very large in the sense of a training set, where the "the bigger the database, the better the results" rule applies, as in our case. The Fourier descriptors describe the considered nucleus well if we take a few of them (8–16); thus the dimensionality of the kNN method, with a little more than 500 samples as a training set, constitutes a problem in this case. Most errors of the classifiers were made for adenoma-fibroma cases. Studies of the obtained results show that adenoma-fibroma lies between the non-malicious and malicious cases in the sense of shape structure and its size. Thus, such mistakes seem to be our current problem. The presented routine needs about 10-15 minutes per 100 iterations on today's machines (Athlon 64 3500+ 2.8 GHz, Pentium 4 2.2 GHz) depending on
image complexity. Since all the simulations were performed in the Matlab environment, the authors believe that the time consumption can be significantly reduced in lower-level or parallelized implementations.
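The leave-one-out protocol used in these experiments could be expressed as follows; the case representation and the diagnose_case wrapper are hypothetical stand-ins for the full segmentation-and-classification routine:

def leave_one_out(cases, diagnose_case):
    # cases: list of (features, true_label) pairs; the case under test is
    # temporarily removed from the template database before being diagnosed
    correct = 0
    for i, (features, true_label) in enumerate(cases):
        database = cases[:i] + cases[i + 1:]
        if diagnose_case(features, database) == true_label:
            correct += 1
    return correct / len(cases)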
7 Conclusions

The conducted experiments show that the Hough transform for irregular structures, with the support of an expert database, can be effectively used for the 2.5D segmentation of cytological images. The transform, combined with a classifier, can also be considered an automatic diagnosis system supporting a human expert in cancer diagnosis. The experiments show that the system works satisfactorily for typical cases, which can speed up the work of a human expert and unburden him or her in the diagnosis of cases very similar to those previously diagnosed and collected in the database. Unfortunately, the proposed method is not free of unsolved and open problems. The stated problem is not a trivial one, and at this stage of research better discriminatory features that will be the base for a classifier have to be found, or a better classifier working with Fourier descriptors has to be constructed. The template database should also be extended with new cases. Summarizing, the presented solution is promising and gives a good base for further research in the area of cytological image segmentation and diagnosis. Additionally, all the preparation steps, such as the pre-processing and pre-segmentation stages, as well as the final segmentation result, can be (re)used with other segmentation algorithms which need such information.
Acknowledgment

This work has been supported by the Ministry of Science and Higher Education of the Republic of Poland under the project no. N N519 4065 34 and the decision no. 9001/B/T02/2008/34.
References

1. Ballard, D.: Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognition 13(2), 111–122 (1981)
2. Boldrini, J., Costa, M.: An application of optimal control theory to the design of theoretical schedules of anticancer drugs. Int. Journal of Applied Mathematics and Computer Science 9(2), 387–399 (1999)
3. Dinh, N., Osowski, S.: Shape recognition using FFT preprocessing and neural network. Compel 17(5/6), 658–666 (1998)
4. Duda, R., Hart, P., Stork, D.: Pattern Classification, 2nd edn. John Wiley & Sons, Chichester (2001)
5. Hrebien, M., Korbicz, J., Obuchowicz, A.: Hough transform, (1+1) search strategy and watershed algorithm in segmentation of cytological images. In: Proc. of the 5th Int. Conf. on Computer Recognition Systems CORES 2007, Wroclaw, Poland. Advances in Soft Computing, pp. 550–557. Springer, Berlin (2007)
6. Jelen, L., Fevens, T., Krzyzak, A.: Classification of breast cancer malignancy using cytological images of fine needle aspiration biopsies. Int. Journal of Applied Mathematics and Computer Science 18(1), 75–83 (2008)
7. Kimmel, M., Lachowicz, M., Swierniak, A. (eds.): Cancer growth and progression, mathematical problems and computer simulations. Int. Journal of Applied Mathematics and Computer Science 13(3) (2003) (special issue)
8. Lee, M., Street, W.: Dynamic learning of shapes for automatic object recognition. In: Proc. of the 17th Workshop Machine Learning of Spatial Knowledge, Stanford, USA, pp. 44–49 (2000)
9. Marciniak, A., Obuchowicz, A., Monczak, R., Kolodzinski, M.: Cytomorphometry of fine needle biopsy material from the breast cancer. In: Proc. of the 4th Int. Conf. on Computer Recognition Systems CORES 2005, Rydzyna, Poland. Advances in Soft Computing, pp. 603–609. Springer, Berlin (2005)
10. Obuchowicz, A., Hrebien, M., Nieczkowski, T., Marciniak, A.: Computational intelligence techniques in image segmentation for cytopathology. In: Smolinski, T., Milanova, M., Hassanien, A. (eds.) Computational Intelligence in Biomedicine and Bioinformatics, pp. 169–199. Springer, Berlin (2008)
11. Pena-Reyes, C., Sipper, M.: Evolving fuzzy rules for breast cancer diagnosis. In: Proc. of the Int. Symposium on Nonlinear Theory and Application, vol. 2, pp. 369–372. Polytechniques et Universitaires Romandes Press (1998)
12. Pratt, W.: Digital Image Processing. John Wiley & Sons, New York (2001)
13. Russ, J.: The Image Processing Handbook. CRC Press, Boca Raton (1999)
14. Setiono, R.: Extracting rules from pruned neural networks for breast cancer diagnosis. Artificial Intelligence in Medicine 8(1), 37–51 (1996)
15. Swierniak, A., Ledzewicz, U., Schattler, H.: Optimal control for a class of compartmental models in cancer chemotherapy. Int. Journal of Applied Mathematics and Computer Science 13(3), 357–368 (2003)
16. Tadeusiewicz, R.: Vision Systems of Industrial Robots. WNT, Warszawa (1992) (in Polish)
17. Vincent, L., Soille, P.: Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Trans. on Pattern Analysis and Machine Intelligence 13(6), 583–598 (1991)
18. Wolberg, W., Street, W., Mangasarian, O.: Breast cytology diagnosis via digital image analysis. Analytical and Quantitative Cytology and Histology 15(6), 396–404 (1993)
Multistrategic Classification System of Melanocytic Skin Lesions: Architecture and First Results

Pawel Cudek(1), Jerzy W. Grzymala-Busse(1,2), and Zdzislaw S. Hippe(1)

(1) University of Information Technology and Management, Rzeszow, Poland
{PCudek, ZHippe}@wsiz.rzeszow.pl
(2) University of Kansas, Lawrence KS 66045, USA
[email protected]
Summary. The paper presents a verified project of a computer system for multistrategic classification of melanocytic skin lesions, based on image analysis reinforced by machine learning and decision-making algorithms with the use of voting procedures. We applied Stolz, Menzies and Argenziano strategies.
1 Introduction

Melanocytic skin lesions may present various degrees of hazard to human health, while so-called malignant lesions most often lead to melanoma (skin cancer, Melanoma malignant, MM), one of the most dangerous tumours. According to [1], melanoma is the sixth most common type of cancer among men and the seventh among women; among people within the age range of 25-29 it is the most common type of all cancers. Due to the above-quoted facts, early lesion type classification may facilitate suitable medical procedures, giving an indication of the necessity of the lesion's surgical removal. Classification of melanocytic skin lesions and the related hazard degree assessment are generally done by dermatologists using certain standard procedures (often called melanocytic algorithms, or strategies). The most widely known and used strategies of this type include: the Stolz algorithm (strategy) [3], formally based on the primary ABCD rule [3] and described in detail in Section 2, the Menzies strategy [4] and the Argenziano strategy [5]. All the mentioned strategies consist, in general, in a human or a suitable machine detecting certain characteristic features of the analysed lesion and indicating on that basis the necessity of surgery: to excise or to leave the lesion. In machine learning categories it means assigning to the diagnosed lesion a specific category (class), which in the given example would be binary. Use of the above-mentioned algorithms makes it possible for physicians to avoid ad hoc diagnosing on the basis of presumed experience, and automatising those algorithms is a way to create completely automatic or computer-supported diagnostic methods. It needs to be emphasised, however, that the issue of
automatic classification of melanocytic skin lesions has so far not been solved in a straightforward and satisfying manner; furthermore, the available literature lacks precise descriptions of algorithms used to automatically extract vital features from the lesion images, such as their shape, symmetry, colors or diversity of internal structures. That situation inspired us to undertake research on developing a new intelligent tool joining the possibilities of automatic naevus classification with the simultaneous use of the three above-mentioned strategies (Stolz, Menzies and Argenziano), using advanced AI technologies for recognising features and reinforcing the choice of the final solution (i.e. diagnosis) by applying voting procedures [6]. In this way, as we expect, it shall be possible to fill the so-called third layer of the Internet Melanoma Diagnosing and Learning System (IMDLS) [7] developed in our group, i.e. the automatic classification layer. At the first research stage an attempt was undertaken to extract diagnostic features of the most dangerous lesion types (i.e. from the Lesion group: Junctional lesion, Junctional and dermal lesion, Atypical/dysplastic dermal lesion, Dermal lesion and Palmo-plantar lesion; and from the Melanoma group: Superficial melanoma and Nodular melanoma), using the Stolz strategy. That intent dictates the necessity to give more detailed characteristics of those features as well as the methods of their application in the particular strategies.
2 Strategies of Melanocytic Skin Lesion Classification

In this chapter, the Stolz strategy is discussed in more detail and elementary information on the other strategies, which are the object of further research, is presented.

Stolz strategy

In this strategy, supported by the above-mentioned ABCD rule, four parameters are analysed (Asymmetry, Border, Color and Diversity of structures). Asymmetry (A) of the naevus shape is assessed on the basis of its symmetry in relation to one or two two-fold axes of symmetry, intersecting at a right angle. If the lesion has no symmetry, it is assigned the value of 0; in the case of symmetry in relation to one axis it is assigned the value of 1, and if it is symmetric, it gains the value of 2 points. The Border (B) parameter indicates the number of periphery segments where the naevus has sharp pigment contrasts on a grid of four axes intersecting at an angle of 45°. The Border parameter value has the possible range of 0 to 8. The third parameter (C) of the ABCD rule stands for the number of colors in the naevus from the collection of allowed colors (white, red, light brown, dark brown, blue-gray and black). Color detection has certain limitations described in [3]. The result of the assessment of this parameter is a number from 1 to 6. The occurrence of pigmented globules, structureless areas, pigment dots, branched streaks or pigment networks makes it possible to define the value
of the degree of Diversity of structures (D) according to the ABCD rule. To determine this parameter it is necessary to consider specific patterns, available in [2], [3]. The values thus gained are used in the formula proposed by Stolz, defining the total value of epiluminescent microscopy (TELMS):

TELMS = (1.3 ∗ A) + (0.1 ∗ B) + (0.5 ∗ C) + (0.5 ∗ D)        (1)
The border value of TELMS, classifying the naevus as malignant, is 5.45, with 94% diagnosis precision [9]. It is also known that there are no malignant lesions with a TELMS value below 4.25.

Argenziano strategy

The Argenziano strategy isolates and assigns points to so-called minor and major features. Among the major features we distinguish: atypical pigment network, atypical vascular pattern and blue-white veil. Major features are scored on a two-point scale. Minor features include: irregular streaks (pseudopods, radial streaming), irregular pigmentation, irregular dots/globules and regression areas. Minor features are scored on a one-point scale. The Argenziano strategy assumes that a lesion which has gained 3 or more points may be malignant [5].

Menzies strategy

This strategy examines the symmetry and the occurrence of the following features: blue-white veil, multiple brown dots, pseudopods, radial streaming, scar-like depigmentations, peripheral black dots/globules, multiple colors, multiple blue or grey stains, and a wide pigment network. The Menzies strategy assumes that a malignant naevus is characterised by asymmetry, the occurrence of more than one color and at least one of the above-mentioned features [4].
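The three decision rules quoted above, combined with a simple majority vote, could be sketched as follows; the feature dictionary and the strategy stubs are illustrative simplifications of the full strategies, not the system's actual implementation:

from collections import Counter

def classify_lesion(features, strategies):
    # each strategy maps the extracted features to a binary decision;
    # the majority label is returned together with the individual decisions
    decisions = {name: rule(features) for name, rule in strategies.items()}
    majority = Counter(decisions.values()).most_common(1)[0][0]
    return majority, decisions

# illustrative decision rules standing in for the Stolz, Argenziano and Menzies paths
strategies = {
    "stolz": lambda f: "malignant"
        if 1.3 * f["A"] + 0.1 * f["B"] + 0.5 * f["C"] + 0.5 * f["D"] > 5.45 else "benign",
    "argenziano": lambda f: "malignant"
        if 2 * f["major"] + f["minor"] >= 3 else "benign",
    "menzies": lambda f: "malignant"
        if (not f["symmetric"]) and f["colors"] > 1 and f["features"] >= 1 else "benign",
}
features = {"A": 1, "B": 4, "C": 3, "D": 3,                  # Stolz ABCD values
            "major": 1, "minor": 2,                          # Argenziano feature counts
            "symmetric": False, "colors": 3, "features": 2}  # Menzies inputs
print(classify_lesion(features, strategies))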
3 Architecture of the Planned System

The planned system of automatic lesion classification (Fig. 1) assumes three basic modules, i.e. a module extracting information from a digital image of a lesion, an automatic classification module and a voting module. The data processed by the system are digital images of melanocytic lesions, registered as .jpeg files. The module extracting information from an image (Fig. 2) makes it possible to define all parameters required by the Stolz algorithm. In further research the module will be extended to encompass methods allowing one to gain the information used in the Argenziano and Menzies strategies. The isolation of that module is justified by the need to create a global set of extraction methods including a subset of methods common to the particular strategies, thus avoiding redundancy of the necessary solutions.
Fig. 1. Architecture of the automatic melanocytic lesions classification system
The automatic classification module consists of three paths implementing naevus classification by the Stolz, Argenziano and Menzies strategies, taking from the extraction module the methods providing information on the features used in the particular strategies. The result of classification with the particular methods is then transmitted to the third (voting) module. The voting module finally classifies the melanocytic naevus with statistical methods. Fundamental significance for the system lies in the information extraction module, since the result of naevus classification with the particular strategies, and thus the final classification in the voting module, depends on the precision of assessing the particular image features. To this extent, we have analysed the literature from the perspective of existing solutions which allow one to gain from the image the information necessary for the Stolz strategy.

Image segmentation

Correct detection of the examined area has fundamental significance due to the possibility of overlooking vital areas or indicating healthy body parts, which may influence other parameters of the Stolz algorithm. For that reason we plan to conduct comparative tests of three solutions: the thresholding method [8], the Canny algorithm [9] and the Sobel procedure [10], distinguished in the literature due to their effectiveness in recognising edges in images. This stage is additionally supported by the possibility of the user introducing supplements and/or corrections: the correction may consist in changing the parameters of the automatic segmentation method used or in marking the naevus border on the skin manually.
[Fig. 2 block diagram: Image segmentation (with optional human interference) → Asymmetry assessment, Border assessment, Color assessment, Diversity of structures assessment]
Fig. 2. Module of extracting information from a digital image of a melanocytic lesion - methods used in Stolz strategy
Determining asymmetry

Exemplary solutions to the problem of determining asymmetry have been presented in [11], [12], consisting in determining the centre of gravity through which two perpendicular symmetry axes are drawn. It is known that in the process of determining asymmetry one also has to take into account colors and structures, whose lack may exclude a preliminary diagnosis of asymmetry [3].

Determining border

Applying four axes intersecting at an angle of 45° to the skin lesion defines areas in which the character of the lesion's transition into healthy skin needs to be recognised. The transition may be sharp or mild. Assessment of this parameter has been discussed in [13], [14], and consists in using the value of the grey-level gradient computed from the border of the lesion at an angle of 90°. Narrow segments determined from the border area indicate sharp contrasts, broad ones indicate more fuzzy contrasts.

Recognising colors

Using color patterns defined for each sought color, all pixels in the defined area are analysed. Problematic might be the occurrence of single pixels within a sought color range, resulting from the method of compressing the examined pictures. The solution will be to define a minimum number of pixels of the given color which must appear for the particular color to be considered present in the naevus. The minimum number will be determined statistically, based on samples of images where the given color was found. While detecting the white color, the exceptions described earlier for the Stolz strategy have to be taken into account.

Determining diversity of structures

At the current stage of analysing solutions concerning the recognition of structures, the scale index method (SIM) [15] allows one to define whether the analysed pixel belongs to a dot, a line, a more complex structure or the background. For example, a SIM equal to 0 indicates a single dot without accompanying dots in the direct neighbourhood, 1 means a straight line and 1.5 a curve. Further research shall prove to what extent the scale index method is useful and complete as applied to assessing the degree of the skin lesion's diversity of structures.
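The colour (C) assessment outlined above could be sketched as follows; the RGB ranges and the minimum pixel count are placeholders, since the minimum count is to be determined statistically from sample images:

import numpy as np

def count_lesion_colors(rgb_image, lesion_mask, color_ranges, min_pixels):
    # a reference colour counts as present only if at least min_pixels pixels
    # inside the lesion fall into its RGB range
    pixels = rgb_image[lesion_mask]                     # N x 3 lesion pixels
    present = 0
    for name, (low, high) in color_ranges.items():
        hits = np.all((pixels >= low) & (pixels <= high), axis=1).sum()
        if hits >= min_pixels:
            present += 1
    return present                                      # value used as C in TELMS

# illustrative ranges for the six colours allowed by the ABCD rule
color_ranges = {
    "white":       ((200, 200, 200), (255, 255, 255)),
    "red":         ((150,   0,   0), (255,  80,  80)),
    "light-brown": ((150,  90,  40), (210, 160, 110)),
    "dark-brown":  (( 60,  30,  10), (150,  90,  60)),
    "blue-gray":   (( 70,  80, 100), (150, 160, 190)),
    "black":       ((  0,   0,   0), ( 60,  60,  60)),
}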
386
P. Cudek, J.W. Grzymala-Busse, and Z.S. Hippe
background. For example, SIM equal 0 indicates a single dot without accompanying dots in direct neighbourhood, 1 means a straight line and 1,5 a curve. Further research shall prove in how far the scale index method is useful and complete as applied to assessing the degree of the skin lesion’s diversity of structures.
4 System Testing To complete work on the system, the efficiency of melanocytic lesion classification needs to be tested using the prepared set of selected lesion type images. The necessary skin lesion images were obtained among others from [2], making up a set of 53 images of lesion classified with the ABCD rule. To multiply the testing set, each image was rotated by 90, 180 and 270 degrees. This procedure, formally irrelevant for a specialist, enables us to obtain a set of 212 real lesion images, completely different from the perspective of classification automation. A list of obtained naevi types is presented in Table 1 below. Table 1. Number of different types of melanocytic skin lesion in a learning set Diagnosis Benign lesion Blue lesion Suspicious lesion Malignant lesion
Number of cases % 140 8 12 52
66 4 6 25
5 Conclusion

The developed system architecture is a starting point for further work on a solution supporting automatic classification of melanocytic lesions. Extracting the modules and grouping the feature extraction techniques in one of them makes possible, as we found, the optimization of the solutions in a way transparent to the other elements of the system. The possibility of extending the second module of the presented system with further classification strategies and using their results in voting will allow us to obtain the least erroneous diagnosis. It seems necessary to create an additional module to manage a database of real images of melanocytic lesions, allowing the system to learn from and the user to view the historical cases.
References

1. http://www.dermoncology.med.nyu.edu/MMfacts.htm (April 28, 2001)
2. Triller, R., Aitken, G., Luc, T.: Atlas of Dermatoscopy of Pigmented Skin Tumors (2008), http://www.plessfr/dermatoscopie
3. Stolz, W., Braun-Falco, O., Bilek, P., Landthaler, M., Burgdorf, W., Cognetta, A.: Atlas of Dermatoscopy. Czelej, Lublin, Poland, p. 210 (2006) (in Polish)
4. Menzies, S.W.: Surface microscopy of pigmented skin tumors. Australas J. Dermatol. 38, 40–43 (1997)
5. Argenziano, G., Fabbrocini, G., Carli, P., De Giorgi, V., Sammarco, E., Delfino, M.: Epiluminescence microscopy for the diagnosis of doubtful melanocytic skin lesions. Comparison of the ABCD rule of dermatoscopy and a new 7-point checklist based on pattern analysis. Arch. Dermatol. 134, 1563–1570 (1998)
6. Cichosz, P.: Learning Systems, pp. 565–567. WNT, Warsaw (2000) (in Polish)
7. Grzymala-Busse, J.W., Hippe, Z.S., Knap, M., Paja, W.: Infoscience Technology: The Impact of Internet Accessible Melanoid Data on Health Issues. Data Science Journal 4, 77–81 (2005)
8. Triller, R.: Quiz of Dermatoscopy. Diagnosis of pigmented skin tumors (2008)
9. Ludwiczuk, R.: Canny Edge Detection Algorithm in Medical Image Segmentation Process. In: II Conference of Computer Enthusiasts, Chełm (2003) (in Polish)
10. Czechowicz, A., Mikrut, S.: Review of Edge Detection Algorithms in Close Range Photogrammetry. Archive of Photogrammetry, Cartography and Teledetection 16, 135–146 (2006) (in Polish)
11. Ng, V., Cheung, D.: Measuring Asymmetry of Skin Lesions. In: Proc. Computational Cybernetics and Simulation, pp. 4211–4216 (1997)
12. d'Amico, M., Ferri, M., Stanganelli, I.: Qualitative Asymmetry Measure for melanoma detection. In: Proc. International Symposium on Volume, pp. 1155–1158 (2004)
13. Seidenari, S., Burroni, M., Dell'Eva, G., Pepe, P.: Computerized evaluation of pigmented skin lesion images recorded by a videomicroscope: comparison between polarizing mode observation and oil/slide mode observation. Skin Res. Technol., 187–191 (1995)
14. Green, A., Martin, N., Pfitzner, J., O'Rourke, M.: Computer image analysis in the diagnosis of melanoma. Journal of the American Academy of Dermatology 31, 958–964 (1994)
15. Horsch, A., Stolz, W., Neiss, A., Abmayr, W., Pompl, R., Bernklau, A., Bunk, W., Dersch, D.R., Glassl, A., Schiffner, R., Morfill, G.: Improving early recognition of malignant melanomas by digital image analysis in dermatoscopy. Stud. Health Technol. Inform. 43, 531–535 (1997)
Reliable Airway Tree Segmentation Based on Hole Closing in Bronchial Walls

Michal Postolski(1,2), Marcin Janaszewski(1,2), Anna Fabijańska(1), Laurent Babout(1), Michel Couprie(3), Mariusz Jedrzejczyk(4), and Ludomir Stefańczyk(4)

(1) Computer Engineering Department, Technical University of Łódź, Stefanowskiego 18/22, 90-924 Łódź, Poland
{mpostol,janasz,an_fab,lbabout}@kis.p.lodz.pl
(2) Department of Expert System and Artificial Intelligence, The College of Computer Science in Łódź, Rzgowska 17a, 93-008 Łódź, Poland
(3) Université Paris Est, LABINFO-IGM, A2SI-ESIEE 2, boulevard Blaise Pascal, Cité DESCARTES BP 99, 93162 Noisy le Grand CEDEX, France
[email protected]
(4) Department of Radiology and Diagnostic Imaging, Medical University of Lodz, S. Żeromskiego 113, 90-710 Łódź, Poland
[email protected], [email protected]
Summary. Reliable segmentation of a human airway tree from volumetric computed tomography (CT) data sets is the most important step for further analysis in many clinical applications such as diagnosis of bronchial tree pathologies. In this paper an original airway segmentation algorithm based on discrete topology and geometry is presented. The proposed method is fully automated, reliable and takes advantage of well defined mathematical notions. Holes occur in bronchial walls for many reasons; for example, they result from noise, image reconstruction artifacts, movement artifacts (heart beat) or the partial volume effect (PVE). Holes are a common problem in previously proposed methods because in some areas they can cause segmentation algorithms to leak into the surrounding parenchyma of a lung. The novelty of the approach consists in the application of a dedicated hole closing algorithm which closes all disturbing holes in a bronchial tree. Once all holes are closed, a fast region growing algorithm can be applied to produce the final segmentation. The proposed method was applied to ten 3D chest CT images. The experimental results showed that the method is reliable, works well in all cases and generates good quality, accurate results.
1 Introduction

Modern medical computed tomography, which uses multidetector spiral CT scanners, can produce three-dimensional volumetric images of very high quality and allows one to look inside the human body. This is a very powerful and useful technique used in a variety of medical applications. 3D volumetric scans of human organs provide an excellent basis for virtual colonoscopy (VC) [1], virtual angioscopy (VA) [2], virtual bronchoscopy applications (VB) [3],
surgical planning [4] or quantification of anatomical structures [5]. In all these applications accurate and reliable segmentation is the first and crucial step. However, proper segmentation of many human organs, for example an airway tree, is very difficult and a challenging problem because of the complex anatomy and the limitations in image quality or errors in image acquisition. Airway tree segmentation algorithms operate on a CT chest image represented by a large 3D array of points with associated values. Each point is represented as a quadruple (x, y, z, v), where the first three values represent its spatial location in 3D space. The last element v represents the attenuation coefficient of a small portion of the tested object with the centre at (x, y, z) and is measured in Hounsfield units (HU). In well calibrated CT images, points which represent the interior of an airway tree should be at approximately −1000 HU (air), surrounded by walls whose points have relatively high values of approximately −100 to 200 HU (soft tissue). Unfortunately, this situation is very rare in real applications. Noise, image reconstruction artifacts, movement artifacts (heart beat), non-standard patient anatomy, airway obstruction or the partial volume effect (PVE) significantly decrease the difference between HU values for bronchial wall points and points which represent the surrounded air. Therefore, airway wall points in different bronchial parts can present different intensity values, which can be similar to the values of interior points of the tree, in particular for high order branches. In addition, different reconstruction kernels (smooth kernels) can increase this effect. As a result, small holes in the wall structure appear and high order segments of the tree disappear, which causes segmentation algorithms to leak into the surrounding parenchyma of the lung (see Fig. 1). Leakage is the main problem in airway tree segmentation. A lung has a texture very similar to the small airways, which leads to failure of simple segmentation algorithms like region growing [6], and the user has to adjust the algorithm parameters manually for each image separately. However, manual adjustment is impractical and not reliable because it is very hard or, in some cases, even impossible to find suitable parameters. Previous work on airway segmentation can be divided into five groups: region growing based methods [7] [8] [9], mathematical morphology based methods [10] [11], a combination of the two [12], rule-based methods [13] and energy function minimization [14]. However, previously published methods focus on how to detect and eliminate leakages when they occur or how to avoid them using complex rules. Some of these algorithms must be run several times with different parameters; other ones analyse very large sets of points using complicated nonlinear filters or semantic rules. In this paper we propose a new segmentation algorithm based on 3D hole closing in bronchial walls. The presented method eliminates the leakage problem by closing all holes in the airway tree wall and then performs the standard region growing algorithm.
Fig. 1. Example of a severe segmentation leak. Result of segmentation with the standard region growing algorithm
Thanks to the 3D hole closing algorithm (HCA) [15], the method is simple, reliable and based on well-defined mathematical notions like simple points, the squared Euclidean distance and a hole defined on the basis of discrete topology. Moreover, the method is fast and fully automated. This paper is organized as follows. In section 2 the proposed method is explained in detail. Section 3 describes our experimental results. Discussion and summary are given in section 4.
2 Methodology

2.1 Histogram Analysis and Preliminary Wall Extraction
The first, important step in our segmentation method is airway wall extraction (see Fig. 2). After histogram analysis of ten CT scans we can distinguish three intensity ranges which are of great importance from the bronchial tree segmentation point of view (see Fig. 3). The first one represents air voxels, the second one corresponds to voxels of the internal border of the bronchial walls and the last one represents soft tissue and blood voxels. Bronchial walls belong to the soft tissue range. However, differences in wall pixel intensities and wall thickness at low and high levels of the airway tree, and the common occurrence of other soft tissues, make it very difficult to extract the walls directly using the borders of this range as threshold parameters. Fortunately, in our application, we do not need to segment the walls; instead, the algorithm extracts only the internal border of the bronchial walls, which is enough to perform the hole closing procedure.
Fig. 2. Airway tree segmentation: a) One 2D slice obtained from a 3D CT data set b) zoomed fragment of the airway tree c) extracted airway walls d) an input image merged with the (c) image e) Example of a final segmentation result
The lower threshold value for this purpose is approximately situated on the border between the air voxel range and the range of the internal border of the bronchial walls. The higher threshold value is approximately situated between the range which represents the internal border of the bronchial walls and the soft tissue range. Using these values, which can be easily and automatically selected, the algorithm can extract the internal border of the walls at different levels of the airway tree. Moreover, the small number of points in this range leads to a "clear" output image (without unnecessary soft tissue).
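The preliminary wall extraction therefore reduces to keeping the voxels whose HU values lie between the two automatically selected thresholds; a sketch is given below, where the concrete threshold values are assumptions meant to be read off the histogram of Fig. 3, not constants given in the paper:

import numpy as np

def extract_internal_wall_border(ct_volume_hu, lower_hu, upper_hu):
    # keep only voxels in the intensity range of the internal border
    # of the bronchial walls
    return ((ct_volume_hu > lower_hu) & (ct_volume_hu < upper_hu)).astype(np.uint8)

# illustrative use: thresholds roughly between air (~ -1000 HU) and soft tissue
wall_border = extract_internal_wall_border(
    np.random.randint(-1000, 300, size=(4, 64, 64)),   # stand-in CT block
    lower_hu=-900, upper_hu=-500)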
2.2 3D Hole Closing in Airway Walls
The 3D hole closing algorithm [15] is linear in time and space complexity. Because of space limitations we cannot present the algorithm with all its mathematical
Fig. 3. A typical histogram generated from a 3D chest image. Marked ranges (air, internal part of bronchial walls, soft tissue and blood) have been determined based on histogram analysis from ten lung scans with different airway pathologies.
Fig. 4. An example result of hole closing applied to a torus object. The hole closing ”patch” is represented in dark grey colour.
details. Interested readers can refer to [15] for more details. A short presentation of the algorithm is as follows: first it computes an enclosing cuboid Y which has no cavities and no holes and which contains the input object X. Then it iteratively deletes points of Y \ X which are border points of Y and whose deletion does not create any hole in Y. The deletion process is ordered by a priority function defined as the Euclidean distance from X. The algorithm repeats this deletion of points until stability. An example of the algorithm's results when applied to a torus is presented in Fig. 4.
2.3 Merging and Final Segmentation
In the last step, the algorithm combines the binary image B, produced in the previous step, with the input data set A. The algorithm sets the intensity of a voxel from image A to the maximal possible HU value Hmax only if the corresponding voxel from image B has value 1. Then the standard region growing algorithm (RGA) is applied to produce the final segmentation result. The RGA needs two parameters: the first represents a threshold value which constrains the growth process, and the second corresponds to a starting point, which is called the seed. The first parameter is set to the Hmax value and the seed can be selected manually or automatically using, for example, a simple method proposed in [16].
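A sketch of this merging and region growing step, assuming 6-connectivity, a (z, y, x) seed index and an illustrative Hmax value; since the closed walls are painted with Hmax and the growth only accepts voxels below Hmax, the region is contained by the hole-free walls:

import numpy as np
from collections import deque

def segment_airway(ct_hu, closed_wall_mask, seed, h_max=3071):
    # paint the hole-closed wall voxels with Hmax so they block the growth
    volume = ct_hu.astype(np.int32)
    volume[closed_wall_mask > 0] = h_max
    segmented = np.zeros(volume.shape, dtype=np.uint8)
    queue = deque([seed])
    segmented[seed] = 1
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and not segmented[nz, ny, nx]
                    and volume[nz, ny, nx] < h_max):     # threshold set to Hmax
                segmented[nz, ny, nx] = 1
                queue.append((nz, ny, nx))
    return segmented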
3 Results

The proposed method has been applied to the segmentation of ten chest CT images acquired using a GE LightSpeed VCT multidetector CT scanner. The stack images are of size 512x512 and the voxel dimensions are: x = y = 0.527 mm, z = 0.625 mm. All tests were performed on a standard PC platform (CPU: Intel 2 GHz). Fig. 5 shows the comparison between the proposed method and the standard region growing approach on the same set of images.
The experimental results showed that the presented method works well in all tested cases. The leakage problem is fully eliminated and the results are of much better quality than for the simple region growing approach. The numbers of branches extracted by the proposed method and by the region growing method with manually selected optimal parameters are presented in Table 1. It turns out that for all tested cases our algorithm gives better results than the RGA. Moreover, for the first two cases (3D images of healthy patients with a well defined airway tree) almost 100% of the branches are extracted up to the 5th order of the bronchi using the proposed method. The superiority of our approach is also clearly visible for the next two cases (cases 3 and 4 in Fig. 5), which correspond to unhealthy patients. The presented method extracted 100% of branches up to the 4th level in both cases, while the RGA extracted only 37% or finished with a severe leak (bottom image of case 4 in Fig. 5). The computation time for these two algorithms has also been evaluated. The proposed algorithm is much slower than the RGA, but its runtime does not exceed several minutes per volume, and it is faster than previously presented approaches, e.g. [16] [17]. It is worth mentioning that the presented segmentation method is optimal (linear complexity) because it is based only on linear complexity algorithms. Compared to other methods, like the ones proposed in [16] [17], our method resolves fewer orders of the bronchi. However, it has been found that these approaches are not optimal, and the complexity of their algorithms makes it difficult to estimate their respective time complexity.
Fig. 5. 3D visualization of segmentation results. Upper row - proposed method, lower row - the region growing procedure [6].
Table 1. Fraction (in %) of extracted branches at different levels of the airway tree. Results obtained using the proposed method (HC) and the region growing method (RG) with manually selected optimal parameters.

                      case 1         case 2         case 3         case 4
                      HC     RG      HC     RG      HC     RG      HC     RG
2nd level [%]:        100    100     100    100     100    50      100    -
3rd level [%]:        100    100     100    62.5    100    50      100    -
4th level [%]:        100    87.5    68.7   25      100    37.5    100    -
5th level [%]:        93.7   62.5    12.5   0       68.7   6.2     62.5   -
Working time [sec]:   360    4.5     270    3       340    3       340    8
4 Conclusions

The authors have presented a bronchial tree segmentation algorithm which is flexible, fast (the runtime, on a PC, does not exceed several minutes per volume), efficient and based on well defined mathematical notions. First, the algorithm eliminates the sources of leaks using the HCA, and second, it performs a very fast region growing procedure. By doing so, the approach is, from the authors' point of view, the simplest among all well known solutions published so far. Moreover, the algorithm is fully automated. The set of measurements carried out to estimate the efficiency of the proposed approach has shown that the algorithm is reliable, as it produces accurate segmentation results for all the tested cases. It should be emphasised that, from the medical point of view, the results are good enough to become an input for quantitative analysis which, in turn, is considered to be a baseline for an objective medical diagnosis. Although the presented algorithm does not segment as many tree segments as the best published algorithms [16] [17], it is far easier to implement and less time consuming. Moreover, it is possible to obtain better results with simple improvements on which we are working. Taking into account all the above considerations, it is worth emphasising that the main goal of the work - construction of a simple and fast algorithm of bronchial tree segmentation which gives good enough results for clinical applications - has been achieved.
References

1. Hong, L., et al.: 3D Virtual Colonoscopy. In: Loew, M., Gershon, N. (eds.) 1995 Biomedical Visualization, pp. 26–33 (1995)
2. Do Yeon, K., Jong Won, P.: Virtual angioscopy for diagnosis of carotid artery stenosis. Journal of KISS: Software and Applications 30(9-10), 821–828 (2003)
3. Perchet, D., Fetita, C.I., Preteux, F.: Advanced navigation tools for virtual bronchoscopy. Proceedings of the SPIE - The International Society for Optical Engineering 5298(1), 147–158
4. Fatt, C.C., Kassim, I., Lo, C., Ng, I., Keong, K.C.: Volume Visualization for Surgical Planning System. Journal of Mechanics in Medicine and Biology (JMMB) 7(1), 55–63 (2007)
5. Palágyi, K., Tschirren, J., Hoffman, E.A., Sonka, M.: Quantitative analysis of pulmonary airway tree structure. Computers in Biology and Medicine 36, 974–996 (2006)
6. Mori, K., et al.: Automated extraction and visualization of bronchus from 3D CT images of lung. In: Ayache, N. (ed.) CVRMed 1995. LNCS, vol. 905, pp. 542–548. Springer, Heidelberg (1995)
7. Chiplunkar, R., Reinhardt, J.M., Hoffman, E.A.: Segmentation and quantitation of the primary human airway tree. SPIE Medical Imaging (1997)
8. Tozaki, T., Kawata, Y., Niki, N., et al.: Pulmonary Organs Analysis for Differential Diagnosis Based on Thoracic Thin-section CT Images. IEEE Transactions on Nuclear Science 45, 3075–3082 (1998)
9. Law, T.Y., Heng, P.A.: Automated extraction of bronchus from 3D CT images of lung based on genetic algorithm and 3D region growing. In: SPIE Proceedings on Medical Imaging, pp. 906–916 (2000)
10. Pisupati, C., Wolf, L., Mitzner, W., Zerhouni, E.: Segmentation of 3D pulmonary trees using mathematical morphology. Mathematical morphology and its applications to image and signal processing, 409–416 (1996)
11. Preteux, F., Fetita, C.I., Grenier, P., Capderou, A.: Modeling, segmentation, and caliber estimation of bronchi in high-resolution computerized tomography. Journal of Electronic Imaging 8, 36–45 (1999)
12. Bilgen, D.: Segmentation and analysis of the human airway tree from 3D X-ray CT images. Master's thesis (2000)
13. Park, W., Hoffman, E.A., Sonka, M.: Segmentation of intrathoracic airway trees: a fuzzy logic approach. IEEE Transactions on Medical Imaging 17, 489–497 (1998)
14. Fetita, C.I., Preteux, F.: Quantitative 3D CT bronchography. In: Proceedings IEEE International Symposium on Biomedical Imaging, ISBI 2002 (2002)
15. Aktouf, Z., Bertrand, G., Perroton, L.: A three-dimensional holes closing algorithm. Pattern Recognition Letters 23(5), 523–531 (2002)
16. Tschirren, J., Hoffman, E.A., McLennan, G., Sonka, M.: Intrathoracic Airway Trees: Segmentation and Airway Morphology Analysis from Low-Dose CT Scans. IEEE Transactions on Medical Imaging 24(12), 1529–1539 (2005)
17. Graham, M.W., Gibbs, J.D., Higgins, W.E.: Robust system for human airway-tree segmentation. In: Medical Imaging 2008: Image Processing. Proceedings of the SPIE, vol. 6914, pp. 69141J-1–69141J-18 (2008)
Analysis of Changes in Heart Ventricle Shape Using Contextual Potential Active Contours

Arkadiusz Tomczyk(1), Cyprian Wolski(2), Piotr S. Szczepaniak(1), and Arkadiusz Rotkiewicz(2)

(1) Institute of Computer Science, Technical University of Lodz, Wolczanska 215, 90-924 Lodz, Poland
[email protected]
(2) Medical University of Lodz, Barlicki University Hospital, Department of Radiology and Diagnostic Imaging, Kopcinskiego 22, 91-153 Lodz, Poland
Summary. In this paper the application of the potential active contour (PAC) method to heart ventricle segmentation is presented. Identification of those contours can be useful in pulmonary embolism diagnostics, since the obstruction of the pulmonary arteries by emboli causes changes in the shape of the heart chamber. The manual process of contour drawing is time-consuming. Thus its automatic detection can significantly improve the diagnostic process.
1 Introduction Image segmentation is a crucial element of every system where recognition of image elements is necessary. There exist many different traditional techniques of image segmentation (thresholding, edge-based methods, regionbased methods, etc.). Most of them, however, possess limitations in utilization of high-level knowledge that can be obtained from experts and which can remarkably improve results of object identification. As a solution of that problem active contour techniques were proposed ( [1, 2, 3]). In this paper potential active contour method is applied for heart ventricle segmentation. The paper is organized as follows: in section 2 medical background of pulmonary embolism diagnostic is presented, section 3 describes potential active contour (PAC) method, section 4 is devoted to application of that method to heart ventricle segmentation and presentation of example results, the last section focuses on the summary of the proposed approach.
2 Medical Background Pulmonary embolism poses a serious diagnostic problem and, according to American and European Union data, which are similar, it constitutes the third most frequent cause of death in the population. Approximately 300 000 incidents
are noted every year, of which 30% lead to sudden death. In the surviving group, chronic right heart failure and pulmonary hypertension develop [4, 5]. Those patients must stay under rigorous cardiologic check-up, and changes in the morphologic and functional parameters of the heart ventricles must be assessed. A significant remodeling of the pulmonary embolism diagnostic algorithm has taken place over the last decade, mainly due to the introduction of multidetector computed tomography (MDCT). As a consequence of the obstruction of the pulmonary arteries by emboli, increased pressure in the right ventricle arises, which results in the chamber's enlargement and in alteration of the shape and curvature of the interventricular septum. Many scales and estimation procedures for pulmonary embolism have been elaborated. The Miller score and the Qanadli score are the most popular and the most frequently cited. They are based on the site and degree of pulmonary vessel obstruction [6, 7]. For a complete view of the morphology and function of the heart after a pulmonary embolism incident, it is indispensable to gather more information. MDCT gives a chance to calculate some very important parameters concerning the right and left ventricles. End Diastolic Volume (EDV), End Systolic Volume (ESV), Stroke Volume (SV), Ejection Fraction (EF) and the RV/LV ratio are the most important for clinical purposes and treatment strategy. In [8] the authors proposed a new, noninvasive method for right ventricle systolic pressure assessment linked with shifts in interventricular septum curvature. The research presented in this paper relies on the last approach. To obtain image data, the patient examination is ECG-gated using a 64-row computed tomography scanner (LightSpeed VCT, General Electric Healthcare) in a standard chest protocol after intravenous injection of contrast media (Iomeron 400). The heart cycle is reconstructed in 10 phases (0-90%) for each 4 millimeter slice, from the base to the apex of the heart. Subsequently, the two chamber short axis view is generated (Fig. 1) using Cardiac Reformat, which
Fig. 1. Image preparation: (a) - four chamber long axis view (original data), (b) two chamber short axis view (reconstructed data)
Fig. 2. Sample 4D image sequence (the first, sixth and tenth phase of different slices). Each image contains manually drawn: contour of right ventricle (on the left), contour of left ventricle (on the right, inside), contour of myocardium (on the right, outside)
is an integral part of the workstation software. This process allows one to obtain 4D image sequences (Fig. 2). For the assessment purposes, endocardial and epicardial contours must be drawn. As yet, they are drawn manually, which is the most time-consuming part of postprocessing (2-5 hours, depending mainly on the examination quality and the heart size) (Fig. 2). Hundreds of left and right ventricle outlines create the possibility of technical mistakes, and an automatic system for contour detection would significantly decrease the time consumption.
3 Potential Active Contours In [9] the relationship between active contour methods [1, 2, 3] and techniques of supervised and unsupervised classifier construction [10, 11, 12] was described. It was shown that contours can be interpreted as contextual classifiers of pixels, where the context of the currently classified pixel is constituted by the other pixels from its neighborhood. Consequently, the search for an
optimal contour can be interpreted as a method of optimal classifier construction where the energy is a performance index (in both cases, during optimization, the parameters of the assumed contour or classifier model, respectively, are sought). There are two important consequences of that relationship:
• The existing active contour methods can be adapted to construct classifiers capable of classifying data other than image pixels. Those data can still be elements of the image, as presented in [13], or can have any other form, as in [14, 15, 16, 17]. The main advantage of that approach is the possibility of using almost any kind of expert knowledge during classifier construction, encoded in the energy function.
• The existing models of classifiers can be adapted for the purposes of image segmentation. This allows new models of contour description to be created which either can possess some useful properties (for example, they can be more intuitively interpreted) or can allow the experience already gathered while working with the original classification method to be utilized. An example of such a method is the potential active contour approach presented in [18].

3.1 Potential Function Classifier
The potential function method [10], similarly to other methods of classification, assumes that the label of the classified object should depend on the labels of the currently known, similar objects. The similarity measure considered in this case is a distance function \rho : X \times X \to \mathbb{R} and, consequently, the method can be applied in any metric space X. Let D^l = \{x_1^l, \ldots, x_{N_l}^l\} denote a subset of X containing N_l reference objects corresponding to the label l \in L(L), where L(L) = \{1, \ldots, L\}, and let the function P : \mathbb{R} \to \mathbb{R} denote the potential function. For example, the inverse potential function can be of use here:

P_{\Psi,\mu}(d) = \frac{\Psi}{1 + \mu d^2}    (1)

where \Psi and \mu are parameters controlling the maximum strength of the potential field in its center and its distribution, respectively. The potential function classifier k : X \to L(L) is then defined in the following way:

k(x) = \arg\max_{l \in L(L)} \sum_{i=1}^{N_l} P_{\Psi_i^l, \mu_i^l}(\rho(x_i^l, x))    (2)
To put it differently, the object x \in X receives the label l \in L(L) for which the summary potential of the objects from D^l has the maximum value.
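To make the classifier concrete, a minimal Python/NumPy sketch is given below; the array-based representation of the reference sets and the use of the Euclidean distance as \rho are our assumptions, not part of the original formulation.

```python
import numpy as np

def inverse_potential(d, psi, mu):
    # Inverse potential function P_{psi,mu}(d) = psi / (1 + mu * d**2), Eq. (1)
    return psi / (1.0 + mu * d ** 2)

def potential_classify(x, ref_points, psi, mu):
    """Assign x to the label whose reference objects generate the
    largest summary potential, Eq. (2).

    ref_points : list (one entry per label) of arrays of shape (N_l, d)
    psi, mu    : lists of arrays of shape (N_l,) with per-point parameters
    """
    totals = []
    for pts_l, psi_l, mu_l in zip(ref_points, psi, mu):
        d = np.linalg.norm(pts_l - x, axis=1)          # rho(x_i^l, x)
        totals.append(np.sum(inverse_potential(d, psi_l, mu_l)))
    return int(np.argmax(totals)) + 1                  # labels 1..L
```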
3.2 Potential Contour Model and Evolution
To create the potential contour model one has to assume that X = R² and L = 2, which means that this is a binary classifier defined over the image plane. Such
a classifier assigns to the pixels of the image plane two types of labels (1 or 2): one representing the object and one representing the background. The contour can be defined as the set of points separating areas with different labels. In other words, these are the points where the summary potentials generated by the reference points from D1 and D2 are equal (a detailed description of this method is presented in [18]). The parameters describing the potential contour are the reference points from D1 and D2 as well as the parameters of the potential functions associated with them: Ψ_i^1, μ_i^1 for i = 1, ..., N1 and Ψ_i^2, μ_i^2 for i = 1, ..., N2. For image segmentation, apart from the contour model, an energy function and a method of contour evolution should also be defined, as in other active contour methods. The energy should be able to evaluate a contour placed on the given image. It usually depends on the current segmentation task and allows any available knowledge from the considered domain to be taken into account. The method of evolution determines the way the optimal contour is identified. In fact, it can be any optimization process, with the energy function used as an objective function, that is able to find the optimal parameters of the contour model. In this paper the simulated annealing algorithm was used because, when proper parameters are chosen, it is able to find a solution close to the global one. Moreover, it requires only the values of the objective function and not its derivatives, which in the case of a more complicated function would have to be approximated numerically.
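Such an evolution step can be illustrated by a generic simulated annealing loop as sketched below; the energy function, the parameter-perturbation operator and the cooling schedule are placeholders to be supplied by the concrete segmentation task.

```python
import math
import random

def simulated_annealing(init_params, energy, perturb,
                        t_start=1.0, t_end=1e-3, alpha=0.95, iters_per_t=50):
    """Generic simulated annealing used as the contour-evolution method:
    energy(params) plays the role of the contour energy (objective),
    perturb(params) randomly modifies the contour-model parameters."""
    current, e_cur = init_params, energy(init_params)
    best, e_best = current, e_cur
    t = t_start
    while t > t_end:
        for _ in range(iters_per_t):
            cand = perturb(current)
            e_cand = energy(cand)
            # accept better solutions always, worse ones with Boltzmann probability
            if e_cand < e_cur or random.random() < math.exp(-(e_cand - e_cur) / t):
                current, e_cur = cand, e_cand
                if e_cur < e_best:
                    best, e_best = current, e_cur
        t *= alpha   # geometric cooling
    return best, e_best
```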
4 Application This paper presents initial results of the application of the method described in the previous section to heart ventricle segmentation. For the tests the left ventricle was chosen. Currently the method is not fully automatic, as it requires manual selection of the threshold separating the inner part of the ventricles from the other image elements. Further research is being conducted to find this threshold automatically. The whole process of segmentation consists of two phases. The first one, further called the preprocessing phase, aims at automatic separation of the left and right ventricle. The second, called the segmentation phase, represents the actual segmentation of the left ventricle. Both phases utilize the PAC algorithm.

4.1 Preprocessing
The first step of preprocessing is the choice of the threshold used to localize the interior of the ventricles (Fig. 3). As mentioned, so far it is chosen by an expert. Next, the approximate contour containing the ventricles is searched for using the PAC algorithm, where:
• The initial contour was placed in the center of the image with N1 = 1 and N2 = 2 reference points.
• The energy function assigned low energy when:
  – the contour was relatively small,
  – all the pixels above the given threshold lay inside the contour.
Fig. 3. Sample results of preprocessing phase: (a), (d) - white area represents pixels above given threshold, (b), (e) - automatically found contour (PAC method), (c), (f) - automatically found line separating ventricles
• The simulated annealing solution generator randomly modified all parameters of the potential contour model.
This contour (Fig. 3) is used to find a line that separates the left and right ventricles. Identification of that line is performed by simply testing each line starting at the top edge of the image and ending at its bottom edge. The criterion used chooses the line that has the longest part inside the contour and, on that part, the lowest number of points above the given threshold (Fig. 3).
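A rough sketch of this exhaustive line search is given below; it assumes the contour interior and the thresholded pixels are available as boolean images, and the sampling step and rasterization of candidate lines are our simplifications.

```python
import numpy as np

def best_separating_line(inside_mask, above_thr, step=4):
    """inside_mask, above_thr: boolean images of shape (H, W).
    Returns ((x_top, 0), (x_bot, H-1)) of the best candidate line."""
    h, w = inside_mask.shape
    rows = np.arange(h)
    best, best_key = None, None
    for x_top in range(0, w, step):
        for x_bot in range(0, w, step):
            cols = np.round(np.linspace(x_top, x_bot, h)).astype(int)
            in_contour = inside_mask[rows, cols]
            length_inside = int(in_contour.sum())
            bright_inside = int((above_thr[rows, cols] & in_contour).sum())
            # prefer the longest run inside the contour, then the fewest bright pixels
            key = (length_inside, -bright_inside)
            if best_key is None or key > best_key:
                best_key, best = key, ((x_top, 0), (x_bot, h - 1))
    return best
```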
4.2 Segmentation
The line found during the preprocessing phase is used in the second PAC algorithm, used for the left ventricle segmentation. In this algorithm:
• The initial contour was placed in the center of the image with N1 = 1 and N2 = 3 reference points.
• The energy function assigned low energy when:
  – the contour was relatively small,
  – all the pixels above the given threshold on the left side of the found line lay inside the contour,
  – all the pixels below the given threshold on the left side of the found line lay outside the contour,
  – the whole contour lay on the left side of the found line.
• The simulated annealing solution generator randomly modified all parameters of the potential contour model.
The first results prove that the application of the PAC method to heart ventricle segmentation can give satisfactory results (Fig. 4). So far, those results are not precise enough for the proposed approach to be used directly in the diagnostic process. The cause of this may be the definition of the energy function as well as
Fig. 4. Sample results where white contour was manually drawn by an expert and black contour was found automatically (PAC method)
some problems with the solution generator used in the simulated annealing algorithm. The other problem is the fact that the images, as described in section 2, are reconstructed from the original images obtained from the computed tomography scanner, which leads to averaging of the image data. To find the proper contour, the method should take into consideration, as an expert does, information from the previous and next slices (a properly trained neural network imitating expert decisions can be used as an element of the energy function). However, though the current results cannot be used directly, the method can be used as a tool which significantly decreases the time of manual segmentation (the expert can only slightly improve the parameters of the potential contour instead of precisely delineating the whole ventricle outline).
5 Summary In this paper a method of application of potential active contours for heart ventricle segmentation was proposed. One of interesting aspects of the presented approach is that it tries to imitate human way of perception, where first, the general outline of heart ventricles is localized, then interventricular septum is identified, which in consequence allows to find the contour of left ventricle. The obtained results are promising and in the present form can be used to make expert’s work easier. Further work will involve preparation of better energy functions (especially those taking into account additional information from adjacent slices) and solution generators as well as creation of fully automatic method (without necessity of manual threshold choosing). Investigated will also be the PAC adaptation mechanism described in [18], possibility of simultaneous segmentation of disjoint objects (both ventricles could be segmented by a single PAC method) and the simultaneous segmentation of either all the slices or all the slices and phases using PAC algorithm for 3D and 4D segmentation.
Acknowledgement This work has been supported by the Ministry of Science and Higher Education, Republic of Poland, under project number N 519007 32/0978; decision no. 0978/T02/2007/32.
References 1. Kass, M., Witkin, W., Terzopoulos, S.: Snakes: Active contour models. International Journal of Computer Vision 1(4), 321–333 (1988) 2. Caselles, V., Kimmel, R., Sapiro, G.: Geodesic active contours. International Journal of Computer Vision 22(1), 61–79 (2000) 3. Grzeszczuk, R., Levin, D.: Brownian strings: Segmenting images with stochastically deformable models. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(10), 1100–1113 (1997)
Analysis of Changes in Heart Ventricle Shape Using Contextual PAC
405
4. Cohen, A.T., Agnelli, G., Anderson, F.A., et al.: Venous thromboembolism (VTE) in europe. The number of VTE events and associated morbidity and mortality. Thrombosis and Haemostasis 98(4), 756–764 (2007) 5. Egermayer, P., Peacock, A.J.: Is pulmonary embolism a common cause of chronic pulmonary hypertension? limitations of the embolic hypothesis. European Respiratory Journal 15(3), 440–448 (2000) 6. Miller, G.A., Sutton, G.C., Kerr, I.H., et al.: Comparison of streptokinase and heparin in treatment of isolated acute massive pulmonary embolism. British Medical Journal 2(5763), 681–684 (1971) 7. Qanadli, S.D., El, H.M., Vieillard-Baron, A., et al.: New CT index to quantify arterial obstruction in pulmonary embolism: comparison with angiographic index and echocardiography. American Journal of Roentgenology 176(6), 1415– 1420 (2001) 8. Dellegrottaglie, S., Sanz, J., Poon, M., et al.: Pulmonary hypertension: accuracy of detection with left ventricular septal-to-free wall curvature ratio measured at cardiac mr. Radiology 243(1), 63–69 (2007) 9. Tomczyk, A., Szczepaniak, P.S.: On the relationship between active contours and contextual classification. In: Kurzy˜ nski, M., et al. (eds.) Proceedings of the 4th Int. Conference on Computer Recognition Systems (CORES 2005), pp. 303–310. Springer, Heidelberg (2005) 10. Tadeusiewicz, R., Flasinski, M.: Rozpoznawanie obrazow. Wydawnictwo Naukowe. PWN, Warszawa (1991) (in Polish) 11. Bishop, C.M.: Pattern Recognition and Machine Learning. Springer, Heidelberg (2006) 12. Devijver, P.A., Kittler, J.: Pattern Recognition. A Statistical Approach. Prentice-Hall, Inc., Englewood Cliffs (1982) 13. Tomczyk, A., Szczepaniak, P.S.: Contribution of active contour approach to image understanding. In: Proceedings of IEEE International Workshop on Imaging Systems and Techniques (IST 2007) (May 2007) ISBN: 1-4244-0965-9 14. Tomczyk, A., Szczepaniak, P.S.: Adaptive potential active hypercontours. In: ˙ Rutkowski, L., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2006. LNCS, vol. 4029, pp. 499–508. Springer, Heidelberg (2006) 15. Szczepaniak, P.S., Tomczyk, A., Pryczek, M.: Supervised web document classification using discrete transforms, active hypercontours and expert knowledge. In: Zhong, N., Liu, J., Yao, Y., Wu, J., Lu, S., Li, K. (eds.) Web Intelligence Meets Brain Informatics. LNCS(LNAI), vol. 4845, pp. 305–323. Springer, Heidelberg (2007) 16. Tomczyk, A., Szczepaniak, P.S., Pryczek, M.: Active contours as knowledge discovery methods. In: Corruble, V., Takeda, M., Suzuki, E. (eds.) DS 2007. LNCS(LNAI), vol. 4755, pp. 209–218. Springer, Heidelberg (2007) 17. Pryczek, M.: Supervised object classification using adaptive active hypercontours with growing neural gas representation. Journal of Applied Computer Science 16(2) (2008) 18. Tomczyk, A.: Image segmentation using adaptive potential active contours. In: Kurzynski, M., Puchala, E., Wozniak, M., Zolnierek, A. (eds.) Proceedings of 5th International Conference on Computer Recognition Systems, CORES 2007, pp. 148–155. Springer, Heidelberg (2007)
Analysis of Variability of Isopotential Areas Features in Sequences of EEG Maps
Hanna Goszczyńska¹, Leszek Kowalczyk¹, Marek Doros¹, Krystyna Kolebska¹, Adam Jóźwik¹, Stanislaw Dec², Ewa Zalewska¹, and Jan Miszczak²
¹ Institute of Biocybernetics and Biomedical Engineering PAS, 4 Trojdena str., 02-109 Warsaw, [email protected]
² Military Institute of Aviation Medicine, 54 Krasinskiego str., 01-755 Warsaw
Summary. The aim of the study was to analyse differences in the dynamics of variability of extreme isopotential areas in sequences of EEG maps containing a seizure activity episode. The analysis of the dynamics of the alternating variability of extreme isopotential areas was performed in three steps: visual examination, calculation of the differencing coefficient and statistical analysis. Results of the study, performed on two groups totalling 17 subjects, reveal different dynamics of isopotential area variability in the considered groups of patients.
1 Introduction Brain Electrical Activity Mapping (BEAM) is a routine method used in electroencephalography (EEG) for visualization of values of the different parameters characterizing the bioelectrical brain activity [1,2,3]. Sequences of maps created in every several milliseconds are very useful for presentation of the variability of constellations in maps. This may have significant meaning in the evaluation of seizure activity due to its dynamics. Evaluation of the changes of consecutive maps requires, however, the quantitative methods. The aim of the study was to analyse differences in dynamics of isopotential areas variability in sequence of BEAMs containing seizure activity episode.
2 Material and Methods The present approach concerns the analysis of the alternating variability of extreme isopotential areas and the estimation of the dynamics of these changes in sequences of BEAMs containing a seizure activity episode. EEG recordings were acquired using the NeuroScan 4.3 system. The material comprised 17 subjects divided into two groups (with common numbering for both groups). The first group (I) consisted of 10 clinically healthy
subjects with the seizure activity. The second group (II) comprised seven patients with epilepsy [4]. For each subject, sequences of 1000 frames of amplitude maps taken from a 10 s EEG record containing the seizure activity episode were analyzed (Fig. 1). The dimensions of the images of those maps were 628×790 pixels in a 17-color scale corresponding to 17 ranges of electrical potential values in the scale from -20 μV to 20 μV. The maps were generated at instants of 10 ms. Fig. 1 shows examples of images from the sequence of 1000 maps referring to the 10 s recorded EEG signal, chosen from three periods: before, during and after the seizure activity episode, for patient no 3 from group II (a) and for patient no 9 from group I (b). The present approach concerns the analysis of the variability of, and the relationships between, the areas for the ranges of minimum and maximum potentials, denoted as A−20 and A20, respectively (Fig. 2). The analysis of the variability of the areas in the sequence of maps was performed using the normalized histograms of the map images and was described in [4]. Figure 3 presents an example of the normalized values of the areas for the minimum and maximum potentials (A−20 and A20) for patients from both groups for 1000 frames of maps. The relationship between A−20 and A20 in sequences of EEG maps was evaluated by the alternating changes of the values of the areas A−20 and A20. The analysis was performed in three steps:
1. visual evaluation of the alternating variability of the values of the areas A−20 and A20,
2. calculation of the coefficient(s) describing quantitatively the principal factors of the alternating variability,
3. statistical analysis.
Fig. 1. Selected images from EEG maps sequence containing seizure activity episode for subject no 3 from II group (a) and for subject no 9 from I group (b)
Fig. 2. Example of EEG map (with magnified parts of color scale): areas for the ranges of minimum and maximum potentials are denoted as A−20 and A20
Fig. 3. The example of normalized values of the areas for the minimum and maximum potentials (A−20 and A20 ) for subject no 3 from II group (a) and for subject no 9 from I group (b)
3 Results Step 1 Fig. 4a,c and Fig. 5a,c show separately the normalized values of the areas A−20 and A20 presented in Fig. 3. Visual evaluation of the relationship between the extreme isopotential areas was based on the analysis of the A−20 and A20 ratios. Fig. 4 and Fig. 5 show these ratios for a subject from group II and from group I, respectively:
• the ratio of the area corresponding to the -20 μV potential range to the area corresponding to the 20 μV potential range, Rbr = A−20/A20 (Fig. 4b, Fig. 5b),
Fig. 4. The normalized values of areas A−20 (a) and A20 (c) and the values of ratios for the range from 0 to 1 Rbr (b) and Rrb (d) for subject no 3 from group II and diagram A−20 versus A20 (e)
• the ratio of the area corresponding to the 20 μV potential range to the area corresponding to the -20 μV potential range, Rrb = A20/A−20 (Fig. 4d, Fig. 5d).
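For illustration, the sketch below shows how the normalized extreme-potential areas and the two ratios could be computed; it assumes each map image has already been converted to an index image whose values 0–16 encode the 17 potential ranges (this label coding, and NumPy, are our assumptions).

```python
import numpy as np

def extreme_areas(label_maps):
    """label_maps: array (n_frames, H, W) with values 0..16;
    index 0 is taken here as the -20 uV range and 16 as the +20 uV range."""
    n_pix = label_maps.shape[1] * label_maps.shape[2]
    a_minus = (label_maps == 0).reshape(len(label_maps), -1).sum(axis=1) / n_pix
    a_plus = (label_maps == 16).reshape(len(label_maps), -1).sum(axis=1) / n_pix
    return a_minus, a_plus

def area_ratios(a_minus, a_plus, eps=1e-9):
    # R_br = A_-20 / A_20 and R_rb = A_20 / A_-20, clipped to the analysed range [0, 1]
    r_br = np.clip(a_minus / (a_plus + eps), 0.0, 1.0)
    r_rb = np.clip(a_plus / (a_minus + eps), 0.0, 1.0)
    return r_br, r_rb
```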
The values of ratios in the range from 0 to 1 were analysed. The envelope curves of the values of the ratios of the areas A−20 and A20 during seizure activity seemed to be more regular for patients (Fig. 4b, d) whereas these values of ratio are more variable for healthy subjects (Fig. 5b, d). Fig. 4e and Fig. 5e show the values of the areas A−20 versus A20 . For the patients the points p(A−20 , A20 ) are distributed in rather regular way (Fig. 4e) except the cluster close to the origin. The values of p(A−20 , A20 ) obtained for healthy subjects above certain threshold set the trend to cluster in two separated regions (Fig. 5e).
Fig. 5. The normalized values of areas A−20 (a) and A20 (c) and the values of ratios for the range from 0 to 1 Rbr (b) and Rrb (d) for subject no 9 from group I and diagram A−20 versus A20 (e)
Fig. 6. Set of lines which indicates three ranges of points p(x, y)
Step 2 The principle of the coefficient differentiating the alternating variability of A−20 and A20 was to separate three ranges of points in the diagrams of A−20 versus A20 (Fig. 4e or Fig. 5e). A set of lines indicating three areas of points p(x, y), where x = A−20 and y = A20, was assumed (Fig. 6):
• A* – the points to be removed,
• A′ – the rest of the points,
• A′′ – the points for which a′x + b′ < y < ax + b.
For the preliminary analysis the parameters of the lines presented in Table 1 were used.

Table 1. Parameters of the set of lines
  parameter:  threshold    a     a′     b      b′
  value:      0.2          2     0.5    -0.3   0.15
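A small sketch of this coefficient, using the parameters of Table 1, is given below; the exact rule for discarding the near-origin points (region A*) is not fully specified in the text, so the threshold test used here is an assumption, as are the variable names.

```python
import numpy as np

def region_coefficient(a_minus, a_plus,
                       thr=0.2, a=2.0, a_prime=0.5, b=-0.3, b_prime=0.15):
    """a_minus, a_plus: normalized A_-20 and A_20 values per frame.
    Returns the ratio of point counts |A''| / |A'|."""
    x, y = np.asarray(a_minus), np.asarray(a_plus)
    keep = (x >= thr) & (y >= thr)       # drop region A*: points close to the origin (assumed rule)
    x, y = x[keep], y[keep]
    in_wedge = (y < a * x + b) & (y > a_prime * x + b_prime)   # region A''
    n_a2 = int(in_wedge.sum())
    n_a1 = int((~in_wedge).sum())        # region A': the remaining points
    return n_a2 / n_a1 if n_a1 else float("inf")
```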
Fig. 7 illustrates the procedure of separation between A′ and A′′. Fig. 7a shows the normalized diagrams from Fig. 4e and Fig. 5e with the superimposed set of lines. Fig. 7b shows the diagrams after cutting off the points with coordinates below the threshold, and Fig. 7c shows the points for which the coordinate y satisfies y < ax + b and y > a′x + b′. Fig. 8 presents the values of the ratio A′′/A′ for the 17 subjects. Step 3 In the statistical approach 11 factors were calculated for each subject:
• for A−20: standard deviation (1), median (2), average (3), maximum (4), maximum−median (5),
• for A20: standard deviation (6), median (7), average (8), maximum (9), maximum−median (10),
• the ratio of the average values of the A−20 and A20 areas (11).
Fig. 7. Normalized diagrams from Fig. 4e and Fig. 5e with superimposed sets of lines (a, b) and the calculated regions A′ (c, d) and A′′ (e, f)
Table 2 presents the values of each factor for the 17 subjects (learning data set). The error rate, determined for the k-NN rule by the leave-one-out method, was used as a criterion for feature selection [5]. The four factors given below were chosen as the most significant features:
• for A−20: average (3), maximum (4),
• for A20: maximum (9),
• the ratio of the average values (11).
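This selection criterion can be sketched as follows: the leave-one-out error of a nearest-neighbour rule is evaluated over candidate feature subsets; the exhaustive subset search and the choice k = 1 are simplifying assumptions on our part.

```python
import itertools
import numpy as np

def loo_knn_error(X, y):
    """Leave-one-out error of the 1-NN rule on feature matrix X (n, d)."""
    X, y = np.asarray(X, float), np.asarray(y)
    errors = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # leave the i-th object out
        errors += y[np.argmin(d)] != y[i]
    return errors / len(X)

def select_features(X, y, max_size=4):
    """Return the feature subset with the smallest leave-one-out error."""
    best_subset, best_err = None, 1.0
    for size in range(1, max_size + 1):
        for subset in itertools.combinations(range(X.shape[1]), size):
            err = loo_knn_error(X[:, list(subset)], y)
            if err < best_err:
                best_subset, best_err = subset, err
    return best_subset, best_err
```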
Fig. 8. The values of A′′/A′ for the 17 subjects (1-7: group II, 8-17: group I) Table 2. Values of the 11 factors for the 17 subjects
The objects (subjects) no 4, 5 and 14 were misclassified whilst running the leave one out method.
4 Discussion The differences between analyzed ratios of areas in EEG maps of healthy subjects and epileptic patients have been found. The lower envelope (as shown in Fig. 4b,d and Fig. 5b,d) in patients data was more regular than in data of healthy subjects. The visual evaluation of the isopotential region distribution indicated the regular dispersion (except the region close to the origin) of patient data (as shown in Fig. 4e). In data of healthy subjects there were two clusters for points with coordinates above certain threshold (Fig. 5e). Results of the analysis of the relationships between isopotential areas on the EEG maps using defined coefficient confirmed its properties for the
differentiation, however, an optimisation of normalised values, lines parameters or using of another distribution coefficients should be necessary. In statistical approach 11 factors were calculated. Using a misclassification rate, found for the k-NN rule by use of the leave one out method, 4 factors were chosen which offer the smallest number of wrong decisions (3 out of 17 classifications). It was found that the dependence between the true and the class assigned on the basis of selected features (both the true and the assigned class are qualitative variables) is statistically significant (p=0.035, Fisher test). The assigned class represents ”voice” of selected features and for this reason the dependence between the true and the assigned class is in fact the dependence between the classes and the selected features. In summary, we have defined a coefficient that evaluates the relationships between the total area of regions representing the lowest and highest amplitudes on maps. Analysis of the changes of this coefficients for consecutive maps allows to indicate the differences between recordings in two groups of patients that was also confirmed in visual inspection.
Acknowledgement This work was partially supported by the Ministry of Science and Higher Education grant no 3291/B/T02/2008/35.
References 1. Duffy, F.H.: Topographic Mapping of Brain Electric Activity. Butterworths, Boston (1986) 2. Lehman, D.: From mapping to the analysis and interpretation of EEG/EP maps. In: Maurer, K. (ed.) Topographic Brain Mapping of EEG and evoked potentials, pp. 53–75 (1989) 3. Li, L., Yao, D.: A new method of spatio-temporal topographic mapping by correlation coefficient of k-means cluster. Brain Topography 19(4), 161–176 (2007) 4. Goszczy´ nska, H., Doros, M., Kowalczyk, L., Kolebska, K., Dec, S., Zalewska, E., Miszczak, J.: Relationships between isopotential areas in EEG maps before, during and after the seizure activity. In: Pietka, E., Kawa, J. (eds.) Information Technologies in Biomedicine. Advances in Soft Computing, pp. 315–324. Springer, Heidelberg (2008) 5. Duda, R.O., Hart, P.E., Stork, D.G.: Pattern classification. John Wiley & Sons, New York (2001)
Tumor Extraction From Multimodal MRI
Moualhi Wafa and Ezzeddine Zagrouba
Equipe de Recherche Systèmes Intelligents en Imagerie et Vision Artificielle, Institut Supérieur d'Informatique, Abou Raihane Bayrouni, 2080, Tunisia, [email protected], [email protected]
Summary. Manual segmentation of brain tumors from 3D multimodal magnetic resonance images (MRI) is a time-consuming task and prone to human errors. In this paper, two automated approaches have been developed for brain tumor segmentation, in order to establish which one provides accurate segmentation that is close to the manual results. The MR feature images used for the segmentation consist of three weighted images (enhanced T1, proton density (PD) and T2) for each axial slice through the head. The first approach is based on a multi-feature Fuzzy c-means (FCM) algorithm followed by a post-processing step based on prior knowledge to refine the tumor region. The second approach consists of three steps. First, each single-modality MR image is classified separately with the FCM algorithm. Second, the classified images are fused by the Dempster-Shafer evidence theory to get the final brain tissue labeling. Finally, prior knowledge is used to refine the tumor region. For validation, ten tumor cases of different size, shape and location in the brain are used, with a total of 200 multimodal MR images. The brain tumor segmentation results are compared against manual segmentation carried out by two independent medical experts and used as the ground truth. Our experimental results suggest that, compared to the first approach, the second approach produces results with accuracy closer to that of the manual tracing.
1 Introduction Brain tissue segmentation in MR images has been an active research area [1]. Multimodal MR images can be obtained by measuring different independent parameters. As a tumor consists of different biologic tissues and enhances differently depending on the MR modality, one type of MRI cannot give complete information about abnormal tissues. That is why there is an obvious gain in trying to fuse multimodal MRI and to account for various sources of information in the same procedure to segment the tumor. Liu et al. [2] have developed a method for image segmentation and tumor volume measurement. It is based on the fuzzy-connectedness theory and requires prior knowledge of the estimated location of the tumor. Mazzara et al. [3] have used a supervised k-nearest neighbor (KNN) method and an automatic knowledge-guided (KG) method for MRI brain tumor segmentation. Prastawa et al. [4] have developed an automatic segmentation method that uses an atlas as a geometric
prior to segment the tumor as well as the edema. Mahmoud-Ghoneim et al. [5] have proposed a 3D co-occurrence matrix based tumor texture analysis with increased specificity and sensitivity. In this paper, we developed two automatic segmentation approaches to segment brain tumors. The first approach uses multi-feature FCM to classify pixels according to their intensity values extracted from the T1, PD and T2 MR images. The second approach classifies each single-modality MR image with the FCM algorithm, and the classified images are then fused with the Dempster-Shafer (DS) evidence theory. The advantage of using DS theory in data fusion lies in its ability to deal with ignorance and imprecision. Both approaches are followed by a post-processing step based on prior knowledge to refine the tumor region. The accuracy of these automated approaches is compared with that of manual segmentation carried out by trained personnel. The main contribution of this work is to make a decision about the effective way to fuse multimodal MRI by comparing two different segmentation approaches. The first approach fuses multimodal MRI during the classification process by means of the FCM algorithm, whereas the second approach fuses these modalities after the classification process with the DS evidence theory.
2 Background We first present the principle of the conventional Fuzzy c-means algorithm. Next, we present a brief description of the Dempster-Shafer theory of evidence.

2.1 Fuzzy c-Means
The Fuzzy c-means (FCM) algorithm assigns pixels to clusters according to their fuzzy memberships. Let X = \{x_i, i = 1, 2, \ldots, N \mid x_i \in \mathbb{R}^d\} denote an image with N pixels, where x_i represents feature data. The algorithm is an iterative optimization that minimizes the objective function defined as follows [6]:

J_m = \sum_{k=1}^{c} \sum_{i=1}^{N} u_{ki}^m \, \|x_i - v_k\|^2    (1)

with the following constraints:

\sum_{k=1}^{c} u_{ki} = 1, \ \forall i; \quad 0 \le u_{ki} \le 1, \ \forall k, i; \quad \sum_{i=1}^{N} u_{ki} > 0, \ \forall k    (2)

where u_{ki} represents the membership of x_i in the k-th cluster, v_k is the k-th class center, \|\cdot\| denotes the Euclidean norm, and m > 1 is a weighting exponent on each fuzzy membership. The parameter m controls the fuzziness of the resulting partition. The membership functions and cluster centers are updated by the following expressions:

u_{ki} = \frac{1}{\sum_{l=1}^{c} \left( \|x_i - v_k\| / \|x_i - v_l\| \right)^{2/(m-1)}}, \qquad v_k = \frac{\sum_{i=1}^{N} u_{ki}^m x_i}{\sum_{i=1}^{N} u_{ki}^m}    (3)
The FCM algorithm converges when the objective function is minimized.
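A compact NumPy sketch of these update rules is given below; the random initialization and the fixed number of iterations are simplifications of the usual convergence test.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, eps=1e-9, seed=0):
    """Fuzzy c-means for feature vectors X of shape (N, d).
    Returns memberships u (c, N) and centers v (c, d)."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    u = rng.random((c, N))
    u /= u.sum(axis=0, keepdims=True)                 # enforce sum_k u_ki = 1
    for _ in range(n_iter):
        um = u ** m
        v = um @ X / um.sum(axis=1, keepdims=True)    # center update, Eq. (3)
        d = np.linalg.norm(X[None, :, :] - v[:, None, :], axis=2) + eps  # (c, N)
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0, keepdims=True)      # membership update, Eq. (3)
    return u, v
```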
2.2 Dempster-Shafer Theory
Dempster-Shafer theory provides a way to clearly distinguish between conflicting information from different sources. We provide a brief introduction to this theory; more explanation can be found in Shafer's book [7] [8]. Let us consider a finite set \Omega of mutually exhaustive and exclusive hypotheses called the frame of discernment. Formally, the basic probability assignment is a mass function m from 2^\Omega, the power set of \Omega, into the unit interval [0, 1] such that:

m(\emptyset) = 0 \quad \text{and} \quad \sum_{A \subseteq \Omega} m(A) = 1    (4)

where \emptyset is the empty set. The subsets A of \Omega with non-zero mass m(A) > 0 are called the focal sets of m. The quantity m(A) can be interpreted as a measure of the belief that an element of \Omega belongs exactly to A and not to any of its subsets. The mass function quantifies the available imprecise information. In many systems based on evidence theory, this mass is defined subjectively by experts. The belief function (Bel) and the plausibility function (pl) are two measures introduced by Shafer, both of which can be derived from the basic assignment m and are defined by:

Bel(\emptyset) = 0 \quad \text{and} \quad Bel(A) = \sum_{B \subseteq A} m(B)    (5)

pl(\emptyset) = 0 \quad \text{and} \quad pl(A) = \sum_{B \cap A \neq \emptyset} m(B)    (6)

where B \subseteq \Omega and A \subseteq \Omega. The plausibility function pl(A) represents the potential amount of support given to A. The belief function Bel(A) measures the total belief assigned to A. Let m_1 and m_2 be two mass functions based on information given by two different sources in the same frame of discernment \Omega. The result of their combination \oplus according to Dempster's orthogonal rule of combination is defined as follows:

m(\emptyset) = 0, \qquad (m_1 \oplus m_2)(A) = \frac{1}{1-k} \sum_{B \cap C = A} m_1(B) \, m_2(C)    (7)

k = \sum_{B \cap C = \emptyset} m_1(B) \, m_2(C) > 0    (8)

where k is the summation of the products of the mass functions over all pairs of sets whose intersection is empty. The denominator 1 - k is used as a normalisation term.
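Dempster's rule can be sketched as below for mass functions stored as dictionaries mapping focal sets (frozensets) to masses; this representation and the example frame are our assumptions, not part of the original text.

```python
from collections import defaultdict

def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dictionaries
    over the same frame of discernment, following Eqs. (7)-(8)."""
    combined = defaultdict(float)
    k = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] += mb * mc
            else:
                k += mb * mc            # conflicting mass, Eq. (8)
    if k >= 1.0:
        raise ValueError("total conflict: the sources cannot be combined")
    return {a: v / (1.0 - k) for a, v in combined.items()}

# Hypothetical example: two sources over the frame {"GM", "WM", "TUMOR"}
m1 = {frozenset({"TUMOR"}): 0.6, frozenset({"GM", "WM", "TUMOR"}): 0.4}
m2 = {frozenset({"TUMOR", "GM"}): 0.7, frozenset({"GM", "WM", "TUMOR"}): 0.3}
print(dempster_combine(m1, m2))
```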
3 Overview of the Segmentation Approaches Fuzzy c-means is the core algorithm in our approaches for tumor segmentation and labeling. The set of class labels used for this study is τ = {white matter, gray matter, cerebrospinal fluid (CSF), pathology (tumor, necrosis), air, bone, skin, fat, muscles, non-brain}. A schematic overview of the two approaches is shown in Figure 1. The first segmentation approach consists of two steps. First, the FCM clustering algorithm is used to segment brain tissues. Each pixel has three attributes: an intensity value in the T1, PD and T2 images. Figure 2 shows a sample of T1, PD and T2 MR images, respectively. For each pixel, these three values form what is called the feature vector. The FCM classifies a pixel into one of the tissue classes based on its feature vector. The initial FCM classification is then passed to a post-processing step to isolate the tumoral class and to get the final tumor mass by using prior knowledge concerning anatomical information about the clusters. The initial tumor class contains many pixels that do not represent tumor but are classified as tumor because they have MRI feature vectors similar to those of tumor pixels, due to different image artefacts. Figure 3 shows the initial tumor class of the data presented in Figure 2, obtained by the two segmentation approaches. These results show that, in addition to the soft palate that is adhesive to the tumor, a number of spatially disjoint small areas (scattered points) are also regarded as tumor. In general, the tumor mass appears as a morphologically continuous region in the image. In contrast, non-tumor areas which are classified as tumor are always disconnected and the number of pixels in those areas is relatively small. Therefore, those areas which are spatially disjoint from the tumor region and contain a low pixel number are removed by searching for the biggest connected component. In most cases, inside the tumor area we can find small holes which represent necrosis. Mathematical morphology operators are then used to fill these holes and to get the total tumor region. The second segmentation approach consists of three steps. First, each single MRI modality (T1, PD and T2) is classified separately by the FCM algorithm. Then, the classified images are fused using the DS evidence theory to get the final tissue labeling. Mass functions in the DS evidence theory are estimated for each pixel using its fuzzy membership degrees. Each pixel is characterized
Fig. 1. Overview of the segmentation approach (left) MFCM; and (right) FCM-DS
Fig. 2. T1-weighted axial image (top left), PD-weighted axial image (top right), T2-weighted axial image (bottom left), ground truth (bottom right)
Fig. 3. MFCM segmentation result (top left), MFCM initial tumor (top right), FCM-DS segmentation result (bottom left), FCM-DS initial tumor (bottom right)
by a membership grade, generated by the FCM, to a given cluster. Once the mass function values of each classified image are estimated, the DS combination rule and decision are applied to obtain the final labeled image. Finally, a post-processing step based on the prior knowledge described above is used to get the final tumor region. The segmentation results of the two approaches are given in Figure 3. The first approach is denoted MFCM, whereas the second is denoted FCM-DS.
4 Validation The image repository used in this study consists of 10 MR image datasets of patients with brain tumors. Details on the pathology and location of the tumors are presented in Table 1. Patient heads were imaged in the axial plane with a 1.5 T MRI system (Signa, GE Medical Systems, Milwaukee, WI), with a postcontrast 3-D sagittal spoiled gradient recalled (SPGR) acquisition with contiguous slices (flip angle, 45°; repetition time (TR), 35 ms; echo time (TE), 7 ms; field of view, 240 mm; slice thickness, 1.5 mm; 256×256×124 matrix). Manual tracing was performed using a graphical user interface developed by our laboratory on a PC workstation. Trained human operators outlined the tumor slice by slice by pointing and clicking with a mouse. The program connected consecutive points with lines. The tumor region was then defined by a closed contour. The area inside the outline was calculated and multiplied by the MR slice thickness plus the interslice gap to calculate a per-slice tumor volume. The total tumor volume was obtained by summing the volume calculations for all slices. The ground truth segmentation was defined on the basis of the segmentations of four independent human operators. A single 2D slice was randomly selected from the subset of the MR imaging volume that showed the tumor. The two human operators then independently outlined the tumor on this slice by hand. The ground truth segmentation of the tumor in

Table 1. Details on the Current 10 Magnetic Resonance Imaging Datasets of Brain Tumors Proposed as Part of the Validation Framework
  Patient number   Tumor              Location
  1                Astrocytoma        Right frontal
  2                Astrocytoma        Left temporal
  3                Low-grade glioma   Right frontal
  4                Meningioma         Right parietal
  5                Low-grade glioma   Left temporal
  6                Low-grade glioma   Left frontal
  7                Meningioma         Left frontal
  8                Astrocytoma        Left frontotemporal
  9                Meningioma         Left parasellar
  10               Meningioma         Left temporal
each patient data set was defined as the area of those voxels in which the two raters agreed regarding their identification. An example is shown in Figure 3. To quantitatively evaluate the segmentation quality, validations on the volume and pixel level were performed. On the volume level, the measurement error (ME) is defined as:

ME = \frac{|V_S - V_{GT}|}{V_{GT}} \times 100    (9)

where V_S (either V_MFCM for the first method or V_FCM-DS for the second method) was the volume obtained by the automated approaches and V_GT was the volume obtained by the manual tracing for the same patient. Quantitative validation of the segmentation results with respect to the GT on the pixel level was also performed. The following were calculated: true positives (TPs, GT tumor pixels found algorithmically) and false positives (FPs, pixels isolated as tumor but not within the GT). The tumor mass identified by automated segmentation was compared to the GT on a per-slice basis, and the matching between them was measured using the percent matching (PM) and the correspondence ratio (CR), defined, respectively, by:

PM = \frac{TPs}{GT} \times 100, \qquad CR = \frac{TPs - 0.5\,FPs}{GT}    (10)
An ideal PM value was 100%, whereas a value of 0 indicated a complete miss. The CR compares the isolated tumor with the GT tumor in terms of correspondence in size and location. V_GT, V_S, ME, PM and CR were all expressed in minimum, maximum and mean ± standard deviation format.
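These three measures can be computed from boolean masks as sketched below; the mask representation and the voxel-volume handling are assumptions made for illustration.

```python
import numpy as np

def validation_measures(auto_mask, gt_mask, voxel_volume=1.0):
    """auto_mask, gt_mask: boolean arrays of identical shape (a slice or a volume)."""
    tps = np.logical_and(auto_mask, gt_mask).sum()      # GT tumor pixels found
    fps = np.logical_and(auto_mask, ~gt_mask).sum()     # pixels wrongly labelled tumor
    gt = gt_mask.sum()
    v_s = auto_mask.sum() * voxel_volume
    v_gt = gt * voxel_volume
    me = abs(v_s - v_gt) / v_gt * 100.0                 # Eq. (9)
    pm = tps / gt * 100.0                               # Eq. (10)
    cr = (tps - 0.5 * fps) / gt                         # Eq. (10)
    return me, pm, cr
```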
5 Discussion and Conclusion A total of 200 axial tumor-containing slices obtained from 10 patients were evaluated using manual tracing and automated segmentation approaches. The results of the tumor volume measurement given by the different approches are presented in table 2. There were no significant differences between the manually traced volume and automatically segmented volumes. In addition to the visual validation, three quantitative measures for tumor segmentation quality estimation, namely, CR, PM and DSC were performed. These measures of automated segmentations for all 10 patients compared with manual tracing segmentation are shown in tables 3. For the FCM-DS segmentation of brain tumor, the volume total PM varies from 79.54 to 92.86 % with the mean of 85.57±4.34 while the volume total CR varies from 0.82 to 0.93 with the mean of 0.84±0.08. However, for the MFCM approach, the volume total PM varies from 78.32 to 87.03 % with the mean of 84.47±4.50 while the volume total CR varies from 0.74 to 0.88 with the mean of 0.83±0.14. Results show that both approaches could achieve satisfactory segmentation results for brain tumor. Therefore, the FCM-DS segmentation approach perform slightly better than the MFCM approach in PM and CR. Satisfactory segmentation results were achieved using FCM-DS approach. This method
Table 2. Results of tumor volume measurement
              V_GT (cm3)     V_MFCM (cm3)   ME_MFCM (%)   V_FCM-DS (cm3)   ME_FCM-DS (%)
  Minimum     4.29           5.39           2.73          5.01             1.03
  Maximum     47.56          48.86          20.64         48.05            16.78
  Mean ± SD   29.39±12.49    29.56±12.64    5.98±5.88     29.45±12.53      5.28±5.14
Table 3. Percentage match and correspondence ratio comparing manual tracing and automated segmentation of brain tumors
              PM_MFCM (%)    CR_MFCM       PM_FCM-DS (%)   CR_FCM-DS
  Minimum     78.32          0.74          79.54           0.78
  Maximum     87.03          0.88          92.86           0.93
  Mean ± SD   84.47 ± 4.50   0.83 ± 0.14   85.57 ± 4.34    0.84 ± 0.08
can be used as a clinical image analysis tool for doctors or radiologists to obtain MRI tumor location and volume estimation.
References 1. Iftekharuddin, K.M.: On techniques in fractal analysis and their applications in brian MRI. In: Cornelius, T.L. (ed.) Medical imaging systems: technology and applications, Analysis and Computational Methods, vol. 1, pp. 993–999. World Scientific Publications, Singapore (2005) 2. Liu, J., Udupa, J.K., Odhner, D., Hackney, D., Moonis, G.: A system for brain tumor volume estimation via MR imaging and fuzzy connectedness. Comput. Med. Imaging Graph., 21–34 (2005) 3. Mazzara, G.P., Velthuizen, R.P., Pearlman, J.L., Greenberg, H.M., Wagner, H.: Brain tumor target volume determination for radiation treatment planning through automated MRI segmentation. Int. J. Radiat. Oncol. Biol. Phys., 300– 312 (2004) 4. Prastawa, M., Bullitt, E., Ho, S., Gerig, G.: A brain tumor segmentation framework based on outlier detection. Med. Image Anal., 275–283 (2004) 5. Mahmoud-Ghoneim, D., Toussint, G., Constans, J., Certains, J.-D.d.: Three dimensional texture analysis in MRI: a preliminary evaluation in gliomas. Magn. Reson. Imaging., 983–987 (2003) 6. Runkler, T.A., Bezdek, J.C.: Alternating cluster estimation: a new tool for clustering and function approximation. IEEE Trans. Fuzzy Syst., 377–393 (1999) 7. Shafer, G.: A Mathematical Theory of Evidence. Princeton University Press, Princeton (1976) 8. Smets, P., Kennes, R.: The Transferable Belief Model. Artif. Intell., 191–243 (1994)
Combined T1 and T2 MR Brain Segmentation
Rafal Henryk Kartaszyński¹ and Pawel Mikolajczak²
¹ Maria Curie Sklodowska University, Maria Curie-Sklodowska 1 square, 20-031 Lublin, Poland, [email protected]
² Maria Curie Sklodowska University, Maria Curie-Sklodowska 1 square, 20-031 Lublin, Poland, [email protected]
Summary. In the following article we present a new approach to segmentation of the brain from MR studies. The method is fully automated, very efficient, and quick. The main point of this algorithm is the subtraction of the T1 series from the T2 series (therefore we have called it combined), preceded and followed by a few image processing steps. The method has been tested and graded by experts, and the segmentation results were also compared numerically to those produced by experts. The results of these tests point to the great effectiveness of the presented algorithm.
1 Introduction Segmentation is the process of partitioning a digital image into multiple regions (sets of pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. The result of image segmentation is a set of regions that collectively cover the entire image, or a set of contours extracted from the image. Each of the pixels in a region are similar with respect to some characteristic or computed property, such as color, intensity, or texture. Adjacent regions are significantly different with respect to the same characteristic(s) [1]. This operation, as one may notice, plays a very important role in medical image processing. Directly by physicians for, for example, diagnostic purposes or surgery planning. Indirectly by software developers who often use segmentation as a mid-step in more complex operation. Method, presented in this article, was designed and developed in accordance to our needs, as a part of our research into perfusion analysis in brain. Therefore it is based on methodology of study acquisition used in hospitals we are cooperating with.
2 Segmentation Method The presented method, at this point, works in the axial plane and requires T1 and T2 axial series, of the same part of the brain, to be present in the MR study. The algorithm
is divided into several stages, as shown in Figure 1. It uses two data sets: first a T1 series and second a T2 series. Each of them covers a certain part of the brain, as stated in the DICOM header. From this header, information on the slice location in space is obtained. Generally, in the hospitals we cooperate with, studies are acquired in a manner which ensures that slices in different series are in the same locations. Information from both series is used to produce a general segmentation mask for a given study. This mask can then be used with each series in the study.
Fig. 1. Steps of the segmentation method
Background Removal This is a preliminary stage which is applied to both the T1 and T2 series. Its purpose is to remove all background noise from the images. By noise, in this case, we understand all the "static" that can be observed outside the body tissues. The best way to get rid of it is to apply a simple thresholding operation to each slice in the data set. To choose which intensities should be removed, we have analyzed the histograms of those slices. Generally they have the shape shown in Figure 2. The high peak on the left side of the plot represents the numerous low-intensity pixels of the background (and also the meninges between the brain and the bone-skin tissues). They are followed by a minimum, denoted in Figure 2 as tb. Next
Fig. 2. General shape of the histogram plot of MR brain slices
comes the rest of the image data. Therefore tb is the best place to set the end of the thresholding range. The range, of course, starts at intensity 0. The method for finding this local minimum is rather straightforward. Considering the average shape of the histogram plot, this minimum is always located in the first 1/5 part of the diagram, for which it is a local minimum. Noise Removal This is a simple filtering operation which removes remaining single pixels from the image [2]. Series Fitting As mentioned before, in most cases the T1 and T2 series are acquired at the same patient position. However, in a few cases it is different. In such cases it is necessary to transform one data set to match the other (see Figure 3).
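A minimal sketch of the threshold selection used in the Background Removal stage is given below: tb is taken as the valley following the background peak within the first fifth of the (lightly smoothed) slice histogram. The bin count, the smoothing window and NumPy are our assumptions.

```python
import numpy as np

def background_threshold(slice_img, n_bins=256, smooth=5):
    hist, edges = np.histogram(slice_img, bins=n_bins)
    kernel = np.ones(smooth) / smooth
    hist = np.convolve(hist, kernel, mode="same")      # light smoothing against spurious minima
    limit = max(3, n_bins // 5)                        # tb lies in the first 1/5 of the histogram
    peak = int(np.argmax(hist[:limit]))                # the high background peak
    tb_bin = peak + int(np.argmin(hist[peak:limit]))   # first valley after the peak
    return edges[tb_bin + 1]

def remove_background(slice_img):
    tb = background_threshold(slice_img)
    out = slice_img.copy()
    out[out <= tb] = 0                                 # threshold range [0, tb]
    return out
```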
Fig. 3. Slices of T1 and T2 series of the study where the patient was slightly rotated between series acquisition. It didn’t affect slices location, as can be read from DICOM header (underlined value). White lines denote symmetry planes.
Fig. 4. Region distribution throughout the brain volume. For each region its ratio is given along with its size. Size is provided as a fraction of the whole brain data height and width.
To detect the situation when the position of the patient is different in the two needed series, symmetry planes [3] are found for both of them and compared. Next, one of the series is rotated (and translated if needed) to fit the second. Anatomy Based Data Subtraction This operation is the central point of the whole algorithm, when the two series are subtracted. To be precise, the T1 series is subtracted from the T2 series. However, we must remember that the brain is a complex structure and simple subtraction of one voxel from another will do us no good. (Our main goal is to obtain a contour of the brain as accurate as possible.) A first look at a T2 MR brain slice gives us very important information. Gray matter (which lies on the surface of the brain) is very well rendered as high-intensity voxels. Those voxels will form the brain contour - our goal - in the next steps of the algorithm. Of course, the anatomic structure of the brain differs in different parts of the brain. Therefore we must make the subtraction aware of the details of brain anatomy. To accomplish this we have divided the brain into several regions, according to how tissues are distributed on a given slice. Figure 4 shows those regions. For each region a ratio was chosen, on an empirical basis (through subtraction analysis for twenty different studies). During subtraction, voxels in each region are treated differently, according to the following formula:

V_M = V^R_{T2} - ratio_R \cdot V^R_{T1}    (1)
Fig. 5. Distribution of voxel intensities along marked lines for a sample slice being the result of the previous algorithm operations
where V_M is the resulting voxel of the subtraction, V^R_{T2} is a voxel in region R of the T2 image, V^R_{T1} is a voxel in region R of the T1 image, and ratio_R is the ratio for region R. As a result of this step, we will get well segmented brain structures, surrounded by other tissues, in most cases not in direct contact with the ROI - the brain. Only a few groups of voxels may be "disturbing" the quality of the segmentation. These anomalies will be removed in the penultimate step of this segmentation method. Connected Components Analysis Connected components labeling is an algorithmic application of graph theory, where subsets of connected components are uniquely labeled based on a given heuristic. A graph, containing vertices and connecting edges, is constructed from relevant input data. The vertices contain information required by the comparison heuristic, while the edges indicate connected 'neighbors'. An algorithm traverses the graph, labeling the vertices based on the connectivity and relative values of their neighbors. In our case connectivity is determined by 8-connectivity and the algorithm is applied to each slice separately [4]. After labeling is done, only certain components are left on each slice, i.e. the largest (having the most voxels) and those not smaller than 15 pixels. Post-processing As mentioned before, there is still the issue of a few anomalies at the brain edge. To fix this, we have looked at the distribution of voxel intensities along lines - intersections of slices. Consider Figure 5. Plot A shows the distribution of voxel values (along the marked lines) on a well segmented boundary, whereas plot B refers to a badly segmented part of the brain border. An ideal border intersection has a steep, linear outer slope, while a border with artifacts is disturbed by additional local peaks and roughness. The obvious thing to do is to remove them by smoothing the intensity values, as shown in Fig. 6.
Fig. 6. Visualization of border smoothing
In our algorithm this is done for vertical and horizontal intersections and, for each analyzed line, two neighboring lines are also taken into account. Thanks to this, continuity of the border is ensured, and there is no situation in which insertions appear in the borders (as the effect of too steep smoothing). Mask Creation The last step of the segmentation involves filling the shapes, i.e. the contours of structures acquired during the previous segmentation steps. We have used the flood fill algorithm with 4-connectivity. As a starting point we have used the inner border of the shape boundary.
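The core of the pipeline described in this section - the anatomy-based subtraction of Eq. (1), the per-slice connected-component filtering and the final mask filling - can be sketched as follows. SciPy is assumed, the region_ratio map standing for the empirically chosen per-region ratios of Fig. 4 is a hypothetical input, and binary hole filling stands in for the 4-connected flood fill.

```python
import numpy as np
from scipy import ndimage

def subtract_series(t2_slice, t1_slice, region_ratio):
    """Eq. (1): V_M = V_T2 - ratio_R * V_T1, with region_ratio a per-pixel
    ratio map built from the region layout of Fig. 4 (assumed input)."""
    diff = t2_slice.astype(float) - region_ratio * t1_slice.astype(float)
    return np.clip(diff, 0, None)

def keep_main_components(binary_slice, min_size=15):
    """8-connected labeling; keep the largest component and all components
    of at least min_size pixels."""
    labels, n = ndimage.label(binary_slice, structure=np.ones((3, 3)))
    if n == 0:
        return binary_slice
    sizes = ndimage.sum(binary_slice, labels, index=np.arange(1, n + 1))
    keep = set(np.where(sizes >= min_size)[0] + 1)
    keep.add(int(np.argmax(sizes)) + 1)
    return np.isin(labels, list(keep))

def brain_mask_slice(t1_slice, t2_slice, region_ratio, threshold):
    diff = subtract_series(t2_slice, t1_slice, region_ratio)
    binary = keep_main_components(diff > threshold)
    return ndimage.binary_fill_holes(binary)   # stands in for the flood fill of the mask-creation step
```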
3 Results and Tests At this point we have tested our method on about twenty studies from two different hospitals we are cooperating with. The method produces very good "visual" results. Boundaries of the brain tissue are well marked, smooth and not ragged. The spinal cord (often shown on the first few slices in a series), along with the cerebellum, is also well segmented. Figure 7 shows several examples of the method's output. Of course visual estimation is not authoritative, especially when done by a non-physician. Therefore we have chosen two indicators to measure the effectiveness of the method. First, we have asked experts - physicians - to rank the results of segmentation according to their quality and precision. Each segmented data set could be graded as: perfect (there are no mistakes in segmentation, I wouldn't do it better), very good (there are some minor mistakes that don't affect in any way the diagnostic quality of the data), good (results are average, some segmentation may have been done better, however mistakes would not affect diagnosis), bad (only some slices are well segmented, but they also contain some mistakes, diagnosis could be affected), very poor (almost nothing is done well, why use this method). Each description had a corresponding numeric value from 5 to 1, respectively. Second, we have asked our experts to manually segment a few data sets for us. These data sets were then segmented with our method
Fig. 7. Results of the segmentation
and the results were compared with those acquired manually. We measured the numbers of over- and under-segmented pixels relative to the total number of pixels. An over-segmented pixel is one that has been included in the segmentation mask by our algorithm but not by the experts. An under-segmented pixel, on the other hand, is a point that was included by the physicians but not by our method. Below are results obtained for seven data sets, among them some of the best, worst and average cases. Generally, for twenty data sets, the method achieved an average of 4.3 points in the first test.

Table 1. Results of the first test (ranking of results performed by experts) for seven data sets
          DS 1  DS 2  DS 3  DS 4  DS 5  DS 6  DS 7
Expert 1    4     5     4     4     4     4     4
Expert 2    4     4     3     3     4     5     4
Expert 3    5     4     4     4     5     4     4
Expert 4    4     5     3     4     4     5     4
Table 2. Results of the second test (comparison with segmentation performed by experts) for same seven data sets
                            DS 1   DS 2   DS 3   DS 4   DS 5   DS 6   DS 7
aver. over-seg. %           2.1%   2.0%   2.8%   2.2%   1.9%   1.8%   2.3%
aver. under-seg. %          1.5%   1.7%   2.1%   2.0%   1.7%   1.7%   1.9%
std. dev. of over-seg. %    0.6%   0.6%   0.9%   0.8%   0.7%   0.9%   0.8%
std. dev. of under-seg. %   0.7%   0.6%   0.8%   1.0%   0.7%   0.7%   0.8%
In the second test the average under-segmented pixel percentage was 2.6% with a standard deviation of 1.7%, and the average over-segmented pixel percentage was 3.1% with a standard deviation of 1.65%. As we can see, the method is very accurate. Furthermore, it is not time consuming: the average execution time is about one to one and a half minutes on a rather standard PC configuration. It could of course be made faster, but in our case precision is more important.
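For reference, the two error measures used above can be computed directly from a pair of binary masks; a minimal sketch follows (the reference denominator - the whole image - is our assumption, since the paper only speaks of the whole number of pixels):

```python
import numpy as np

def over_under_segmentation(algo_mask, expert_mask):
    """Percentages of over- and under-segmented pixels for one data set."""
    algo = np.asarray(algo_mask, dtype=bool)
    expert = np.asarray(expert_mask, dtype=bool)
    total = algo.size                                # whole image as reference
    over = np.count_nonzero(algo & ~expert) / total * 100.0
    under = np.count_nonzero(~algo & expert) / total * 100.0
    return over, under
```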
4 Conclusions and Future Work

As we have shown, the presented method produces very good results. Its drawback is the requirement that both T1 and T2 series be present in a study and that they cover the same part of the brain; fortunately, in most cases this is satisfied. Nevertheless, the algorithm is rather simple and could be applied in every situation where accurate segmentation of the brain is required. In our future work we will focus on improving the method's effectiveness and efficiency. We will also use it in our research into brain perfusion analysis.
References

1. Shapiro, L.G., Stockman, G.C.: Computer Vision, pp. 279–325. Prentice-Hall, New Jersey (2001)
2. Windyga, P.S.: Fast impulsive noise removal. IEEE Transactions on Image Processing 10(1) (2001)
3. Tuzikov, A., Colliot, O., Bloch, I.: Evaluation of the symmetry plane in 3D MR brain images. Pattern Rec. Lett. 24, 2219–2233 (2003)
4. Samet, H., Tamminen, M.: Efficient Component Labeling of Images of Arbitrary Dimension Represented by Linear Bintrees. IEEE Trans. Pattern Anal. Mach. Intell. (1988)
The Preliminary Study of the EGG and HR Examinations

Dariusz Komorowski1 and Stanislaw Pietraszek2

1 Institute of Electronics, Division of Microelectronics and Biotechnology, Silesian University of Technology, Gliwice, Poland [email protected]
2 Institute of Electronics, Division of Biomedical Electronics, Silesian University of Technology, Gliwice, Poland [email protected]
Summary. Electrogastrographic examination (EGG) can be considered a non-invasive method for the investigation of stomach slow-wave propagation [1] [2]. The EGG signal is captured non-invasively by electrodes appropriately placed on the surface of the stomach. This paper presents a method for synchronously recording and analyzing both the EGG and the heart rate (HR) signal. The HR signal is obtained by analyzing the electrocardiographic (ECG) signal, which is recorded by means of the same electrodes. The paper also presents a method of reconstructing the respiratory signal (RESPIRO). In this way it is possible to examine the mutual interactions among the ECG, HR and RESPIRO signals. The paper also reports preliminary results of a comparison of the EGG and RESPIRO signals obtained using two different methods: first by classical band-pass filtering, and second by estimation of the baseline drift.
1 Introduction

The standard surface EGG signals were captured by means of disposable electrodes placed on the surface of the patient's stomach. During the signal registration process standard electrodes were applied, configured according to the standard [4], including four signal electrodes (E1..E4), a reference electrode (Ref) and a ground electrode (Gnd) [1] [2]. An example of the electrode placement is shown in Fig. 1 [4]. The signals available on the stomach surface include not only the EGG signal but also ECG signals and signals connected with respiratory movements. The useful components of the EGG and RESPIRO signals are located in the same frequency range. Example signals recorded from one of the available EGG leads (E1) are shown in Fig. 2. The signals were recorded within the 0.015 Hz ÷ 50 Hz frequency range. The wide recorded frequency range emphasizes the structure of the presented signals. In popular commercial systems for capturing the EGG, the signals are usually obtained by means of a method based on classical band-pass filtering. The filtering is typically done with analog filters placed
Fig. 1. The standard placement of the EGG electrodes [4]
Fig. 2. Measurements of data (E1), time period 60s and 4s
after a preamplifier stage. The typical cutoff frequencies are 0.015 Hz and 0.15 Hz [3]. The recording of additional signals requires other devices, so synchronous analysis of the signals becomes difficult. The methods of capturing and analyzing signals obtained from the stomach surface presented in this paper allow the components of the EGG, HR and RESPIRO signals to be separated. In this way simultaneous analysis of the EGG and of the HR obtained from the ECG signal is possible. The HR signal can also be obtained from a photoelectric pulse sensor, but the precision of the RR interval calculation is then insufficient for detailed heart rate variability analysis [5]. The RESPIRO signal can be extracted from the recorded signals. Additionally, an accelerometer sensor was placed near the electrodes to record the respiratory movements of the abdomen and chest, in order to verify the RESPIRO signals.
2 Methodology

The EGG signals have been recorded by means of a four-channel amplifier characterized by the following parameters: frequency range from 0.015 Hz to 50 Hz, gain k = 5000, amplitude resolution
Fig. 3. The block diagram of examination stand
- 12 bits, sampling frequency 200 Hz per channel and useful signal amplitude range 2 mV. For respiratory motion a special triple-axis acceleration sensor, type MMA 7260Q manufactured by Freescale, has been placed near the electrodes, with the following parameters: number of axes 3 (XYZ), sensitivity 800 mV/g, measurement range 1.5 g [8]. Both the acceleration sensor signal (ACS) and the EGG signal have been recorded synchronously. Only one component (AY) has been used in the examination process, because the components in the planes perpendicular to the direction of chest motion did not include a visible component correlated with respiratory motions. Signals lasting from 20 to 120 minutes have been recorded during the examination process. The block diagram of the recording system is shown in Fig. 3. The relatively high sampling frequency (i.e. 200 Hz per channel) allowed further analysis of the ECG signal as well. Preliminary filtering has been applied for both the extraction and the visualization of the required signals. The following useful signals have been extracted from the jointly recorded signal: the EGG, the ECG and the RESPIRO. These signals have been extracted using two different methods. Fig. 4 shows the block diagram of the described methods. The first method uses a set of classical filters. The EGG signal extraction is performed by a band-pass filter covering the range 0.015 Hz ÷ 0.15 Hz [9]. The lower cutoff frequency results from the high-pass RC filter applied in the amplifier hardware and from a digital fourth-order high-pass Butterworth filter, whereas the upper cutoff frequency results from the application of a digital fourth-order Butterworth filter. The RESPIRO signal (i.e. the component of the signal caused by respiratory motions) has been extracted by a Butterworth fourth-order band-pass filter (0.15 Hz ÷ 0.5 Hz). The mentioned filters are given by formula (1)
Fig. 4. The block diagram of signals preprocessing
Fig. 5. The comparison of whole EGG, respiratory and ACS signals (from the top: measurement data (0.015 ÷ 0.5Hz), EGG data(0.015 ÷ 0.15Hz), respiratory data (0.15 ÷ 0.5Hz), smooth ACS data)
and have been implemented in software. The coefficients of the applied filters are given in Tab. 1.

a0 y(n) = Σ(i=0..4) bi x(n−i) − Σ(i=1..4) ai y(n−i),    (1)
A 60-second recording after the filtering process is illustrated in Fig. 5. In the second method both signals, i.e. the EGG and the RESPIRO, have been extracted using analysis of the baseline obtained from the ECG signal. The ECG signal required for QRS detection has been extracted by means of a band-pass filter tuned to the range 1 Hz ÷ 35 Hz. Finding the fiducial points in this method requires the ECG signal

Table 1. The coefficients of applied filters
       EGG filters           RESPIRO filters        ECG filters
       LPF        HPF        LPF        HPF         LPF       HPF
a0     1.0000     1.0000     1.0000     1.0000      1.0000    1.0000
a1    −3.9918    −3.9988    −3.9590    −3.9877     −1.1751   −3.9179
a2     5.9754     5.9963     5.8777     5.9631      0.9256    5.7571
a3    −3.9754    −3.9963    −3.8785    −3.9632     −0.3104   −3.7603
a4     0.9918     0.9988     0.9598     0.9878      0.0477    0.9212
b0     0.0061*    0.9994     0.0037**   0.9939      0.0305    0.9598
b1     0.0243*   −3.9975     0.0149**  −3.9754      0.1219   −3.8391
b2     0.0364*    5.9963     0.0224**   5.9632      0.1829    5.7587
b3     0.0243*   −3.9975     0.0149**  −3.9754      0.1219   −3.8391
b4     0.0061*    0.9994     0.0037**   0.9939      0.0305    0.9598
* ×10−9,  ** ×10−6
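The cascaded fourth-order Butterworth sections quoted in the text can be reproduced, for example, with SciPy; the sketch below is ours, and the coefficients it produces may differ slightly from Table 1 depending on the exact design routine the authors used:

```python
from scipy import signal

fs = 200.0                      # sampling frequency per channel (see Sec. 2)
nyq = fs / 2.0

b_lp_egg,  a_lp_egg  = signal.butter(4, 0.15 / nyq, btype='low')
b_hp_egg,  a_hp_egg  = signal.butter(4, 0.015 / nyq, btype='high')
b_lp_resp, a_lp_resp = signal.butter(4, 0.5 / nyq, btype='low')
b_hp_resp, a_hp_resp = signal.butter(4, 0.15 / nyq, btype='high')

def split_egg_respiro(x):
    """Cascade the high- and low-pass sections, i.e. apply equation (1)."""
    egg = signal.lfilter(b_lp_egg, a_lp_egg,
                         signal.lfilter(b_hp_egg, a_hp_egg, x))
    respiro = signal.lfilter(b_lp_resp, a_lp_resp,
                             signal.lfilter(b_hp_resp, a_hp_resp, x))
    return egg, respiro
```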
Fig. 6. The model of respiratory signal influence on an ECG signal
resampling after implementation of an interpolation procedure. The new sampling frequency has been set to 1000 Hz, which significantly increases the accuracy of the procedure. An example of the ECG signal with the R and T waves and the measurement points from which the whole baseline has been recovered is shown in Fig. 7. The whole baseline has been recovered based upon the estimated fiducial points, sampling of the baseline (between the P and Q waves) and interpolation. Perfect coherence has been observed between the recovered baseline and the line obtained by means of the proper filtering used in the first method. The respiratory motions influence not only the changes of the baseline, as can be seen in Fig. 5, but also the amplitude of the characteristic waves of the ECG. The motions also influence the changes of the heart rate. The measurements of both the R and T waves have been used for baseline estimation. The P wave has not been used due to its very small amplitude. The mathematical model consists of the following equations (2÷4) [7]:

izo(t) = a1 R(t),    (2)
am(t) = (1 + a2 R(t))(ECG),    (3)
HR(t) = HR0 + a3 R(t),    (4)
where izo(t) is the baseline, am(t) is the R wave amplitude, R(t) is the RESPIRO signal and HR(t) is the instantaneous heart rate. The block diagram of the presented model is shown in Fig. 6. Further analysis requires removal of the interpolated baseline. Such an approach prevents the additional disturbances which always appear when traditional high-pass filtering is applied. Processing the ECG signal in this way eliminates the pure EGG and RESPIRO components from it. This is crucial because the subsequent component extraction relies on the amplitude modulation caused by respiratory motions. The described examinations refer only to the changes of the R and T wave amplitudes, which have been interpolated by a cubic spline. As a result of such a procedure, a hypothetical recovery of the influence of respiratory motions on the ECG signal is possible according to the model presented above and described by (3). Both interpolated waves, i.e. R and T, are presented in Fig. 8. This figure also presents the signal recorded from the acceleration sensor (ACS), showing respiratory motions. This signal allows for verification of the assumption concerning the influence of respiratory motions on the amplitude of both ECG characteristic waves (R and T).
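A minimal sketch of the baseline recovery by cubic-spline interpolation could look as follows (function names are ours; the fiducial indices are assumed to have been found beforehand by QRS detection and to be strictly increasing):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def remove_baseline(t, ecg, fiducial_idx):
    """Interpolate the baseline through samples taken at the fiducial points
    (the flat segment between the P and Q waves) and subtract it from the
    ECG instead of high-pass filtering it."""
    spline = CubicSpline(t[fiducial_idx], ecg[fiducial_idx])
    baseline = spline(t)
    return baseline, ecg - baseline
```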
Fig. 7. The interpolation of baseline and measurement points of R and T waves. (top line: ECG data, bottom line: baseline data, sign +:peaks of R wave, sign : peaks of T wave, sign o: points of baseline)
Fig. 8. The reconstructed amplitude modulation data of both R and T peaks detected (from the top: ACS data, modulation data of R peaks, modulation data of T peaks, ECG data without baseline)
Such an influence is more visible in the case of the R wave than in the case of the T wave. References [6] [7] conclude that respiratory motions modulate the instantaneous heart rate. It is possible to extract the heart rate variability based upon the determined positions of the R waves in the ECG signal. Such changes result not only from respiratory motion but can also be considered effects of other sophisticated control processes performed by the autonomic nervous system [7]. Usually, however, it is assumed that respiratory motions influence the heart rate the most.
3 Results

The changes of the distance between R peaks as well as the first derivative of the RR distances have been calculated in the presented work. The filtered EGG signal and several respiratory dependent components have been obtained as a result of the presented data processing. Fig. 9 presents a comparison of the reconstructed respiratory dependent data. It seems that the influence of the respiratory movements may be noticed in all presented signals except the EGG. The correlation coefficients between the ACS signal and all other signals have been calculated to verify this influence. The correlation has been calculated in a 60-second window moved from the beginning to the end of the data. Preliminary examination indicates that the highest correlation exists between the respiratory movements and both the filtered respiratory signal and the amplitude modulation of the R peaks in the ECG signal. The average values of the correlation coefficients are shown in Table 2.
Fig. 9. The comparison of reconstructed respiratory dependent data (from the top: RR derivative data, RR data, amplitude of T data, amplitude of R data, ACS data, respiratory data, EGG data)

Table 2. The average correlation coefficients

ACS-amp.R   ACS-RESP.   ACS-der.RR   ACS-amp.T   ACS-EGG   ACS-RR
  0.56        0.52         0.33        0.0058      0.027    −0.4
Special attention should be paid to irregular respiration, when additional peaks may be observed in the filtered signals. This observation may require further investigation.
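The windowed correlation used above can be sketched as follows (our own helper; the paper does not state the window step, so non-overlapping 60-second windows are assumed):

```python
import numpy as np

def mean_windowed_correlation(acs, other, fs=200.0, win_s=60.0):
    """Correlation between the ACS signal and another reconstructed signal,
    averaged over consecutive 60-second windows moved over the recording."""
    win = int(win_s * fs)
    coeffs = [np.corrcoef(acs[i:i + win], other[i:i + win])[0, 1]
              for i in range(0, len(acs) - win + 1, win)]
    return float(np.mean(coeffs))
```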
4 Conclusions

The main causes of EGG signal disturbances are artifacts produced by the respiratory movements. Several methods of reconstructing signals related to respiratory movements have been presented in this paper. The following conclusions may be drawn from the presented analysis. The reconstruction of respiration-related signals based on the analysis of the R wave amplitude in the ECG signal correlates highly with the recorded respiratory movements. The obtained results are comparable with the signals obtained by means of band-pass filtering. In the research process, a relatively high influence of the baseline interpolation method on the changes of the ECG T wave amplitude has been observed. The relatively low values of the correlation coefficient between the ACS signal and the reconstructed amplitude of the T waves may be caused by an improper selection of the baseline interpolation method; this selection may require further examination. Finally, a simple comparison between the respiratory signals obtained by R wave detection as well as by heart rate variability analysis and the signal from the acceleration sensor allows us to conclude that the presented methods recover the RESPIRO signal well.
References

1. Alvarez, W.C.: The electrogastrogram and what it shows. JAMA 78, 1116–1119 (1922)
2. Tomczyk, A., Jonderko, J.: Multichannel electrogastrography as a non-invasive tool for evaluation of the gastric myoelectrical activity – a study on reproducibility of electrogastrographic parameters before and after a meal stimulation. Ann. Acad. Med. Siles. 61, 5 (2007)
3. Chen, J., McCallum, R.W.: Electrogastrogram: measurement, analysis and prospective applications. Med. Biol. Eng. Comput. 29, 339–350 (1991)
4. Medtronic, A.S.: Polygram NetTM Reference Manual. Skovlunde, Denmark (2002)
5. Task Force of The European Society of Cardiology and The North American Society of Pacing and Electrophysiology: Heart rate variability - Standards of measurement, physiological interpretation, and clinical use. European Heart Journal 17, 354–381 (1996)
6. Kristal-Boneh, E., Raifel, M., Froom, P., Ribak, J.: Heart rate variability in health and disease. Scand. J. Work, Environ. Health 21, 85–95 (1995)
7. Tkacz, E.: The new possibilities of diagnostic analysis the heart rate variability (HRV). Warsaw (1996) ISSN 0239-7455
8. MMA7260QT, Technical Data, ±1.5g–6g Three Axis Low-g Micromachined Accelerometer. Rev. 5 (2008), http://www.freescale.com/support
9. Mintchev, M.P., Bowes, K.L.: Conoidal dipole model of electrical field produced by the human stomach. Med. Biol. Eng. Comput. 33, 179–184 (1995)
Electronic Records with Cardiovascular Monitoring System

Angelmar Constantino Roman1, Hugo Bulegon1, Silvio Bortoleto1, and Nelson Ebecken2

1 Universidade Positivo - Prof. Pedro Viriato Parigot de Souza, 5300, Campo Comprido, Curitiba PR, CEP 81280-330 [email protected], [email protected], [email protected]
2 UFRJ - Av. Pedro Calmon, nº 550, Cidade Universitária, Rio de Janeiro RJ, CEP 21941-901 [email protected]
Summary. Cardiovascular diseases are the main cause of death among adults in most parts of the world. Facilitating their clinical management in primary health care is fundamental to improving efficiency and to reducing morbidity and mortality. This article describes software focused on the management of the major cardiovascular risk factors (diabetes, hypertension, dyslipidemia, smoking). Starting from patient registration, it determines, based on the major clinical guidelines, the level of cardiovascular risk. According to the level of risk, it provides summary clinical indications for management and, by processing the results of subsequent laboratory tests, monitors the therapeutic targets; according to the level of achievement of the results, it indicates maintenance or intensification of care. Other outpatient and hospital systems that informed the development of the software are also described and evaluated.
1 Introduction

Cardiovascular disease is the main cause of death in the developed countries (45.6% of all deaths [1]). Given this situation, several software tools and other systems have been created to try to reduce this burden, for example by the Brazilian Society of Cardiology [2] and the National Heart, Lung and Blood Institute [3]. Despite being practical and easy to operate, they are limited to the calculation of global cardiovascular risk. When support for clinical management is sought, these institutions provide guidelines for the treatment of the cardiovascular risk factors (diabetes (DM), hypertension (SH), dyslipidemia) only in isolation and not jointly [3,4,5]. Thus, for the primary level of health care, such as clinics and health units that support the management of the patient, these guidelines do not properly underpin everyday practice. Additionally, the evaluation of the therapeutic results achieved by the patient is what ensures the quality of treatment and, thus, the clinical results. Beyond the classification of the level of cardiovascular risk, tools are also needed to historically
record the results subsequently brought by the patient, compare these results with the targets set and, in a simple and practical way, support the modification or maintenance of the therapeutic indications. This paper describes the Quíron1 system, whose purpose is to automate the proposed guidelines for the management of cardiovascular risk factors, stratifying their levels and monitoring the therapeutic targets. It is aimed at supporting the practice of surveillance, integrated management and follow-up of diabetes, hypertension and dyslipidemia in offices, clinics and health units. It can be operated by the whole team of health professionals, according to their level of access and tasks.
2 Methodology

Initially a survey was carried out in which hospital and clinical software such as SIAB [6], HiDoctor [7], Tasy [8], Scravo [9], WPDHOSP [10] and Medsystem [11] was evaluated. This provided theoretical and practical grounding on the software available in Brazil in the health area. With these findings gathered, SWOT diagrams2 were drawn up for some of the software packages to obtain a more practical and detailed view, allowing important information to be derived for each one.
Fig. 1. Example of Swot Diagram
During the same period in which these assessments took place, health experts were interviewed, and new ideas, corrections and referrals were added, according to the needs identified as essential for the software to be helpful and compliant with standards. These interviews form the clinical basis of the new software, focused on the control of cardiovascular diseases and their risk factors. In developing the project, the Java language (J2EE3) was used to build the software and Oracle4 was used as the database.
1 Quíron: god of medicine and immortal centaur in Greek (Thessalian) mythology, who had the power of healing hands; what he could not heal, no one else could. He was noted for his nobility, high intelligence and knowledge of medicine.
2 SWOT diagram: a scheme for recognizing the strengths, weaknesses, opportunities and threats of an organization; the tool also serves several other applications.
3 Java 2 Platform Enterprise Edition, for web development.
4 Oracle is a DBMS (database management system) with the performance and safety suitable for this work.
2.1 Software Test
The first software package studied in the survey was Tasy [8]. It is an ERP5 offered to hospitals, clinics, imaging centers and diagnostic laboratories. It was observed and evaluated at the Hospital of the Red Cross, located in the city of Curitiba, Paraná. The main purpose of this software is to manage the institution more efficiently and to provide information to a number of other areas, such as administration and operations.
Fig. 2. Swot Tasy
SIAB (Information System for Primary Care) is the software that currently provides the data collected in primary care of the national health system of Brazil, the SUS (Unified Health System); on the DATASUS site [6] an area can be found for downloading the software and the system manual.
Fig. 3. Swot SIAB
HiDoctor [7] is a tool aimed at clinics. It has several distinct areas and can serve large clinics to a good standard. It aims at specific clinical features, but can also work in small hospitals with a certain practicality. Medsystem [11] is more robust and covers a range of features which can better meet the needs of a hospital: clinical tests, examinations, occupational medicine, physical therapy, billing and others.
ERP - Enterprise Resource Planning.
442
A.C. Roman et al.
Fig. 4. Swot HiDoctor
Fig. 5. Swot MedSystem
Scravo [9] is a system directed at the management of intensive care. The services offered in the software are: prescription, clinical evolution, general ICU care, prediction scores, test results, procedures performed, materials used, a managed database system, access control, form-filling assistants, a reporting module, networking, a visual interface, and integration with hospital management systems.
Fig. 6. Swot Scravo
2.2 How to Use Quíron
The system is started with a username and password, differentiated according to the profile of the user previously registered by the system administrator.
The registration of patients can be done by the doctor or in advance by another health professional. This first record contains information provided by the patient on administrative data and on the presence of any cardiovascular risk factor (FRCV). The factors considered for the subsequent calculation of the level of risk were diabetes, hypertension, smoking, dyslipidemia, male sex, age greater than 50 years, and cardiovascular events in first-degree relatives occurring in men under 55 years or women younger than 65 years. After this preliminary registration, the doctor confirms that the patient really has the listed factors and the system classifies the level of cardiovascular risk as low (none or at most one FRCV), intermediate (two or more FRCV) or high (diabetes, target-organ damage or current cardiovascular disease). Once this stratification is done, the levels do not change.
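The stratification rule just described can be expressed in a few lines; the sketch below is only an illustration, and the flag names are ours, not those used by Quíron:

```python
def cardiovascular_risk_level(factors):
    """Return 'low', 'intermediate' or 'high' for a set of risk-factor flags,
    e.g. {'diabetes', 'hypertension', 'smoking', 'dyslipidemia', 'male',
    'age_over_50', 'family_history', 'target_organ_damage',
    'cardiovascular_disease'}."""
    if factors & {'diabetes', 'target_organ_damage', 'cardiovascular_disease'}:
        return 'high'
    if len(factors) >= 2:
        return 'intermediate'
    return 'low'          # none or at most one risk factor

# e.g. cardiovascular_risk_level({'hypertension', 'smoking'}) -> 'intermediate'
```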
2.3 Monitoring Targets
To monitor a patient under treatment for one of the FRCV, a follow-up visit is needed. After the first stratification, the doctor orders the necessary examinations and compares them with the therapeutic goals. At this second moment there is a new classification according to the degree of attainment of the goals: Controlled, On Set or Uncompensated, for which the patient must return for clinical review in 1 year, 2 months and 1 month, respectively. This classification can change, since the goal of treatment is to reach the Controlled classification. To facilitate this task and structure the care in a practical way, the scheme that Weed called SOAP (Subjective, Objective, Assessment and Plan) was used [19]. The professionals have access to a list of chronic problems that the patient may have, with the record for each health problem directly accessible with a click. Thus the pathological history, the examination results and specific indications are readily available. The list also includes integration with the ICD.
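The mapping from goal attainment to the follow-up interval is likewise straightforward; the dictionary keys below are our own labels for the three classes named in the text:

```python
FOLLOW_UP_MONTHS = {
    'controlled': 12,      # review in 1 year
    'on_set': 2,           # partially on target: review in 2 months
    'uncompensated': 1,    # review in 1 month
}

def months_until_next_visit(goal_status):
    return FOLLOW_UP_MONTHS[goal_status]
```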
2.4 Risk Factors
Besides being responsible for registering staff and granting the correct access rights, the administrator tells the system what the reference values for the stratification of patients are. If the reference value for some factor changes, there is no need to involve a programmer: the administrator can change the figures in a private area, and the system then automatically interprets them and recalculates the strata. The table below lists the FRCV factors and the parameters considered in the data analysis performed by Quíron. For smoking, the goal for all levels of risk is complete cessation6,7,8,9,10.
6 HbA1c - glycosylated hemoglobin.
7 LDL - referring to the levels of "bad" cholesterol.
8 CT/HDL - referring to the levels of "good" cholesterol.
9 PA - blood pressure.
10 BMI = weight / (height * height).
Fig. 7. Variables FRCV
2.5 Data Mining
The data model allows for effective data mining because of the way its schema was designed and planned: several algorithms can be run on it, such as Naive Bayes, SVM, tree algorithms and association rules, to discover knowledge hidden in the stored data.
2.6 Results
The purpose of developing Quíron was to help make the care of the FRCV practical. With its use, an improvement in the quality of care of patients under treatment is expected, since it simplifies the screening and classification of patients at risk and supports the integrated management of diabetes together with hypertension and dyslipidemia. It also allows the health professional to continuously monitor the effect of treatment by matching it against the results obtained. The indication of deadlines for the follow-up visit not only organizes the agenda but also, in the case of an absent patient, raises an alert for patients in need of greater attention to
Fig. 8. Patient Model [14]
Fig. 9. Demo of Final Targets
Fig. 10. New Version, List of Issues
their care. That is, it allows the intensity of health care to be graduated. The ease of updating the parameters that underlie the checks performed by the program can also improve the efficiency of the service, adapting it to any scientific progress.
2.7 Conclusion
The facilities provided by the Quíron application have been described; its main distinguishing feature is that, beyond the existing classification
of cardiovascular risk, it provides therapeutic targets to be achieved by each patient according to their cardiovascular risk factors, indicates lines of conduct for the management of diabetes, hypertension and dyslipidemia, and monitors the therapeutic results with a model that allows data mining and the prospecting of rules hidden in the data. Using this application will contribute to the reduction of morbidity and mortality due to cardiovascular risk factors and will support appropriate clinical decisions.
References

1. The World Health Organization - World Health Report
2. Sociedade Brasileira de Cardiologia, http://prevencao.cardiol.br/testes/riscocoronariano (10/07/08)
3. National Institutes of Health. National Heart, Lung, and Blood Institute. The Sixth Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure, http://www.nhlbi.nih.gov/guidelines/archives/jnc6/index.htm (10/07/08)
4. National Institute for Health and Clinical Excellence. Clinical Guidelines, http://guidance.nice.org.uk/page.aspx?o=36737 (10/07/08)
5. Cardiologia - Princípios e Prática, Fran Castro
6. DATASUS, http://w3.datasus.gov.br/datasus/datasus.php (28/05/08)
7. HiDoctor, http://www.hidoctor.com.br (28/05/08)
8. Tasy, http://www.wheb.com.br (28/05/08)
9. SCRAVO, http://www.scravo.com.br (28/05/08)
10. WPDHOSP, http://www.wpd.com.br/produtos.asp (28/05/08)
11. Medsystem, http://www.medsystem.com.br (28/05/08)
12. Hospitalar, http://www.hospitalar.com.br (28/05/08)
13. SPDATA, http://www.spdata.com.br (28/05/08)
14. Bortoleto, S., Ebecken, N.F.F.: IEEE ISDA 2008, Ontology model for multirelational data mining application (2008)
Stroke Slicer for CT-Based Automatic Detection of Acute Ischemia

Artur Przelaskowski1, Grzegorz Ostrek1, Katarzyna Sklinda2, Jerzy Walecki2, and Rafal Jóźwiak1

1 Institute of Radioelectronics, Warsaw University of Technology, Nowowiejska 15/19, Warszawa, Poland [email protected]
2 Department of Radiology, Medical Centre of Postgraduate Education, CSK MSWiA, Woloska 137, Warszawa, Poland [email protected]
Summary. Computerized understanding of CT brain images used for the assisted diagnosis of acute ischemic stroke was the subject of the reported study. Stroke slicer was proposed as a computer-aided diagnosis (CAD) tool that allows extraction and enhancement of the direct early ischemia sign - the subtle hypodensity of locally damaged tissue. Hypoattenuation of selected CT scan areas was visualized distinctly in the form of semantic maps. Moreover, brain tissue texture was characterized, analyzed and classified in the multiscale domain to detect the areas of ischemic events. As the result of slice-oriented processing, automatically indicated regions of ischemia and enhanced hypodensity maps were proposed as additional views for computer-assisted diagnosis. The experimental verification of stroke slicer concentrated on the diagnostic improvement obtained in clinical practice by using the semantic maps as additional information in the interpretation procedure. The reported results indicate a possible improvement of the diagnostic outcome for the truly challenging problem of as-early-as-possible CT-based ischemic stroke detection.
1 Background

Accurate early diagnosis of hyperacute ischemic stroke is critical due to the limited time window of applicable thrombolytic therapy. Broad clinical criteria based on symptoms at presentation resulted in the enrollment of many patients, many of whom ultimately were not diagnosed as having stroke. However, the clinical phenotype is today obligatorily completed with neuroimaging. It should allow identification of patients with acute stroke and selection of suitable treatment, exclusion of intracerebral hemorrhage and determination of etiology, as well as follow-up of the therapy and its possible complications. Although MRI has an established and recently expanding role in the diagnosis of early stroke, the difficulty of getting access to this examination often requires the use of CT [1]. Consequently, CT is widely used and considered the method of first choice for differentiating the stroke syndrome. A CT image of the brain in acute stroke patients is not self-evident. Reading of CT needs training and additional knowledge about the physical
conditions of image contrast distribution, with noise- and artifact-caused limitations [2]. Physicians' ability to reliably and reproducibly recognize the early CT changes is variable. Early changes with ischemia may vary within a limited range of the HU (Hounsfield Unit) scale depending on the cerebral infarct case, discrepant patient characteristics and acquisition conditions. The attenuation coefficients of brain parenchyma vary, mainly due to the differing thickness of the cranial vault. Dense bone lowers the energy of the beam and increases attenuation. Bone and beam hardening artifacts as well as inter-individual density differences of up to 14 HU were noticed in brain parenchyma at comparable scan levels [3]. Additionally, the accuracy, stability and linearity of the CT number (HU) and the degradation of local contrast resolution are affected by noise (standard deviation up to 4 HU) and by fluctuations of the CT number of water around zero (within 2 HU), because of variations in the stability of the detector system and x-ray source, non-optimum scanning and image reconstruction imperfections. Such CT number instability masks subtle hypodense changes within the ischemic region, making pathology detection extremely difficult for many cases of irreversible infarcts. Thus, a challenge for CAD applications is making the hypodensity distribution more distinct to reveal the diagnostic content and improve the accurate recognition of infarct signatures. Computer-aided acute stroke diagnosis was mostly based on unenhanced CT examinations, according to a concept of sensing and image understanding technologies. The desired effects of image post-processing were: a) segmentation of the regions susceptible to stroke; b) noise suppression; c) reduction of the causes of content masking (i.e. artifacts, bad scanning conditions etc.); d) enhancement of local tissue hypoattenuation. Besides approved methods of texture, shape and margin analysis applied in the source image domain [3, 4], multiscale methods of medical image analysis were found to be effective for many diagnosis-support applications [5]. Hierarchical and flexible multiresolution image representation is extremely useful for nonlinear image approximation and subtle or hidden signal extraction [6] because of its capability for signal energy packing with controlled localization across spatial, scale and subband coordinates. Selection of a specific transform basis allows target content modeling and extraction through adaptive thresholding. Our research was directed at the design, optimization and experimental verification of a CAD method to improve acute stroke diagnosis based on extremely difficult cases of emergency CT scans. The most desired effect of the enhanced visibility of hidden hypodense signs, included additionally in the scan review procedure, was an increased ability to detect the hypodense area of hyperacute ischemic brain parenchyma.
2 Materials and Methods

The proposed method was based on image content modeling in multiscale bases across scales and subbands while preserving the spatial signal distribution. The optimized data representation of selected regions was analyzed, processed,
classified and revealed to be effective and flexible enough to model masked tissue density and extract subtle, diagnostically important hypodense changes. Subtle ischemia sign detection was verified for over one hundred cases of stroke. Many disease signatures expressed in different forms, intensities, sizes etc. were monitored, analyzed, estimated and extracted in the multiscale domain. The procedure of nonlinear approximation, reducing the redundancy of the diagnostic content representation, makes the characterization of the fundamental tissue features easier, more suggestive and more informative for estimating the signatures of ischemia as the target function (hypodensity pattern). Moreover, local texture features were extracted in the spatial and wavelet domains to classify brain tissue as ischemic or not. The experiments verified the possible improvement of diagnosis performance with the stroke slicer used as an extractor of additional information for the interpretation procedure. Automatic stroke detection was verified on a set of selected CT scan regions susceptible to stroke.
2.1 Method of Stroke Slicer
The multiscale image processing methods used to enhance hypodense regions were as follows:
• Image conditioning with segmentation of stroke-susceptible regions of brain tissue - locally adaptive region growing and thresholding methods and smooth complementing of the segmented diagnostic areas;
• Subtle hypodensity sign extraction in the segmented regions by two subsequent multiscale decompositions, giving 4 methods of processing:
  – target content estimation with a tensor wavelet or curvelet basis as nonlinear approximants with adaptive semisoft thresholding;
  – signal emphasizing with especially adjusted curvelet and tensor wavelet bases and additional thresholding controlled by semantic content models;
• Visual hypodensity expression - display arrangement of the processed regions and source scans with greylevel quantization and contrast enhancement by different forms of visualization, according to observer suggestions;
• Automatic recognition of the regions of ischemic brain tissue - texture features extracted in the spatial and wavelet domains, selected and classified with SVM.
Two-stage segmentation of the regions susceptible to ischemic density changes plays a dominant role in the elimination of false diagnostic indications. Two adjusted wavelet kernels were defined by orthogonal filter banks with low-pass filters h̃1 = [1/4, 2/4, 1/4] and h̃2 = [0.01995, −0.04271, −0.05224, 0.29271, 0.56458,
450
A. Przelaskowski et al.
C2 (twice continuously differentiable) region edges. Moreover, simplified data visualization method that communicates enhanced pathology signs was used to make a distribution of brain tissue hypoattenuation more clear to the readers. Different forms of visualization adjusted to patient specificity and reader preferences were preferable. Textural features with energy distribution characteristics across scales and subbands of wavelet domain diversified significantly classified tissue. We used different classes of wavelet energy based features and histogram-based features from normalized wavelet coefficients. Moreover, entropy features (memoryless and joint, for subbands compositions) and homogeneity, correlation, energy and contrast of successive scale co-occurrence matrix of quantized coefficients were applied. SVM with optimized kernels and quality criteria was applied for classification and feature reduction procedures. 2.2
Materials
Evaluation of test CT examinations was performed in 106 patients admitted to a hospital with symptoms suggesting stroke. The test set was selected by two neuroradiologists from our database of over 170 patients imaged with brain CT scans for stroke diagnosis. Criteria of choice of 86 patients aged 28-92 years (mean age, 71.12 years) with proved infarction who underwent nonenhanced CT examinations of the head within first six hours (mean time, 3 hours 26 minutes) of stroke onset were diagnostic representativeness and interpretation difficulty because of hidden hypoattenuation. No direct hypodense signs of hyperacute ischemia were found on test data sets. Scans with unaccepted technical quality were excluded from consideration. Additionally, 20 patients without infarction (with non-stroke changes), aged 31-88 years (mean age, 64.1 years) were chosen from this database as control patients. For that patients, stroke-like symptoms at the admission disappeared in the follow-up. The approximate 4:1 ratio of the number of study patients to the number of control patients more closely simulate our clinical experience. Neither cases with active bleeding, brain tumor nor contrast enhancement were selected. Each test patient case consisted of around 22 regular CT scans synchronized in visualization to the same number of images obtained by designed CAD processing. Follow-up CT and/or DWI (from 1 to 10 days after the ictus) examinations and/or clinical features confirmed or excluded the diagnosis of stroke constituting ’gold standard’ to verify stroke detection performance.
3 Experiments and Results Retrospective image review was performed independently by four blinded neuroradiologists experienced in the interpretation of stroke CT images (treatments I). All test scans were subjectively rated by each reader according to the following relative 1-5 scale (” 1” indicating definite absence of acute
Stroke Slicer for CT-Based Automatic Detection of Acute Ischemia
451
a)
b)
c)
d)
Fig. 1. Two examples of test cases revealed by stroke slicer with four maps of extraction; a) acute CT scan (1 hour after the ictus) and two follow-up CT stroke confirmations (26 hours and 7 days later on); b) semantic maps based on curvelets, two wavelet bases, curvelets with tensor wavelets and tensor wavelets followed by curvelets, respectively; c) CT scan (3 hours after the ictus) and follow-up stroke confirmation (86 hours later on); d) respective semantic maps
stroke; ”2” - probable absence of acute stroke; ”3” - possible acute stroke; ”4” - probable acute stroke; ”5” - definite acute stroke) according to routine diagnostic procedure. For the rates 4 and 5 they were additionally asked to point out the location of ischemic focus. Formulated diagnostic scores could be changed in the next step of interpretation with additional aid of designed CAD tool (treatments II). Subjective rating of test scan was undertaken as previously according to the same 1-5 scale but taking into account additional preview of the scans processed by stroke slicer synchronized to their source scans previewed according to routine preferences.
452
A. Przelaskowski et al.
Table 1. The results of CBM and StAR analysis for all readers, all test cases, and selected case subgroups with satisfied technical quality of the examination or other scanning and patient conditioning Test set of cases (number of cases)
CBM analysis StAR analysis AUC I AUC II p-value AUC I AUC II p-value
All (106) Only good quality (81) No movement artifacts (79) No significant asymmetry (94) Without scarring (62)
0.636 0.637 0.627 0.635 0.705
0.716 0.713 0.710 0.716 0.797
0.0169 0.0138 0.0022 0.0191 0.1027
0.596 0.594 0.586 0.597 0.639
0.671 0.664 0.662 0.673 0.725
0.0049 0.0080 0.0042 0.0063 0.0011
ROC analyses were performed for the test cases to evaluate the influence of the stroke slicer on radiological diagnosis. The statistical computations for the paired test results were processed with the DBM MRMC 2.2 software, designed to perform an analysis of variance when both reader and case variation are relevant, in order to calculate the statistical significance of the differences between treatments (diagnostic tests, or modalities). The statistical significance of a difference between Receiver Operating Characteristic (ROC) indices was determined assuming that the performance of a diagnostic device is affected both by the cases analyzed (patients) and by the observer. For comparing the areas under the ROC curves (AUC), a parametric contaminated binormal model (CBM) approach was applied. Alternatively, the new StAR server tool was used for statistical analysis. StAR relies on a non-parametric test for the difference of the AUCs that accounts for the correlation of the ROC curves. This test takes advantage of the equality between the Mann-Whitney U-statistic for comparing distributions and the AUC when computed by the trapezoidal
Fig. 2. Common ROC curves for acute stroke diagnosis according to test procedure. Dashed line represents detection performance of four readers according to treatments I (routine), and solid line is for treatments II (with slicer). Clearly increased AUC for supported diagnosis was noticed.
Fig. 3. The examples of selected regions of classified brain tissue for automatic detection of acute stroke: top row for ischemic cases and down row for healthy cases
rule. A chi-square statistic is built and used to compute a p-value for the difference of the measured AUCs [7]. The results of the readers' detection ability for the test CT scans under treatments I and II are presented in Table 1. Examples of the semantic maps interpreted by the test readers are given in Figure 1, and the general ROC curves estimated over all readers are shown in Figure 2. For the training and testing of automatic tissue classification, a set of 47 differentiated acute ischemic regions and 38 control regions (see Figure 3) was used in initial tests of the possible distinction of pathological tissue according to numerical wavelet-based descriptors with clear classification criteria. These regions were selected symmetrically, if possible, from the same scans of the stroke cases. The detailed results show an improvement of diagnostic performance in the detection of stroke for all radiologists participating in the experimental evaluation. The AUC values were clearly higher for CAD-supported CT scan interpretation. The hypothesis of the equivalence of the ROC AUCs for the curves of treatments I and II was rejected because of p-values < 0.05, which indicates a statistically significant improvement of diagnosis performance. The average diagnosis sensitivity of the four readers for the test set of 106 examinations increased from 0.42 to 0.57, with a specificity improvement from 0.76 to 0.80. An additional test with three inexperienced radiologists showed a sensitivity improvement from 0.38 to 0.56 with a noticeable specificity decrease from 0.88 to 0.75. Automatic detection of hypodense tissue was possible with a sensitivity of 0.87 and a specificity of 0.84, based on the segmented regions of interest. The reported results show the promising potential of the automatic indications and a possible improvement of the diagnostic outcome for the truly challenging problem of as-early-as-possible CT-based ischemic stroke detection.
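The AUC-U equivalence that StAR exploits is easy to reproduce for a single reader; the helper below is our own sketch, computing the empirical AUC of the 1-5 ratings from the Mann-Whitney statistic (it is not the StAR implementation itself):

```python
from scipy.stats import mannwhitneyu

def auc_from_ratings(ratings_stroke, ratings_control):
    """Trapezoidal AUC of the reader ratings, obtained through the
    Mann-Whitney U statistic (AUC = U / (n_positive * n_negative))."""
    u, _ = mannwhitneyu(ratings_stroke, ratings_control, alternative='greater')
    return u / (len(ratings_stroke) * len(ratings_control))
```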
4 Conclusions

The proposed method of computer-assisted diagnosis provided a noticeable improvement of acute stroke detection according to the readers' ratings, in comparison with the routine diagnostic practice. The stroke slicer achieved a more sensitive visualization of brain tissue hypoattenuation in the territories susceptible to ischemia. The reported results indicate that combined evaluation of the native CT together with the hypodensity-oriented enhanced image may facilitate the interpretation of CT scans in hyperacute cerebral infarction. Moreover, the achieved efficiency of automatic recognition of ischemic tissue means a significantly increased stroke detection performance of the computerized suggestions in comparison with the experts' decisions made according to typical procedures or assisted with the maps of slicer hypodensity extraction. According to the test results and collected opinions, the semantic maps of stroke improved the diagnosis of early ischemic changes because of the increased visibility and clarity of hypodense signs in the test examination sample. Therefore, a reliable display of pathology signatures can considerably accelerate the diagnosis of hyperacute ischemic stroke because of the increased sensitivity. Further verification of the automatic detection method of hypodense regions should be performed in a clinical environment to draw more reliable conclusions. A planned prospective study will allow a more accurate evaluation of the impact of the stroke slicer, completed with automatic interpretation of acute CT scans, on diagnosis and further treatment of patients suffering from stroke.
References 1. Adams, H., Adams, R., Del Zoppo, G., Goldstein, L.B.: Guidelines for the early management of patients with ischemic stroke, 2005 guidelines update, A scientific statement from the Stroke Council of the American Heart Association/American Stroke Association. Stroke 36, 916–921 (2005) 2. von Kummer, R.: The impact of CT on acute stroke treatment. In: Lyden, P. (ed.) Thrombolytic Therapy for Stroke. Humana Press, Totowa (2005) 3. Bendszus, M., Urbach, H., Meyer, B., Schultheiss, R., Solymosi, L.: Improved CT diagnosis of acute middle cerebral artery territory infarcts with densitydifference analysis. Neuroradiology 39(2), 127–131 (1997) 4. Grimm, C., Hochmuth, A., Huppertz, H.J.: Voxel-based CT analysis for improved detection of early CT signs in cerebral infarction. Eur. Radiol., B315 (2005) 5. Capobianco Guido, R., Pereira, J.C. (Guest eds.): Wavelet-based algorithms for medical problems. Special issue of Computers in Biology and Medicine 37(4) (2007) 6. DeVore, R.A.: Nonlinear approximation. Acta Numerica 7, 51–150 (1998) 7. Vergara, I.A., Norambuena, T., Ferrada, E., Slater, A.W., Melo, F.: StAR: a simple tool for the statistical comparison of ROC curves. BMC Bioinformatics 9, 265 (2008)
Control of Bio-prosthetic Hand via Sequential Recognition of EMG Signals Using Rough Sets Theory

Marek Kurzynski, Andrzej Zolnierek, and Andrzej Wolczowski

Technical University of Wroclaw, Faculty of Electronics, Chair of Systems and Computer Networks, Wyb. Wyspianskiego 27, 50-370 Wroclaw, Poland [email protected]

Summary. The paper presents a concept of bio-prosthesis control via recognition of the user's intent on the basis of myopotentials acquired from his body. We assume that in the control process each prosthesis operation consists of a specific sequence of elementary actions. Contextual (sequential) recognition is considered, in which the rough sets approach is applied to the construction of the classifying algorithm. Experimental investigations of the proposed algorithm on real data are performed and the results are discussed.
1 Introduction

The activity of human organisms is accompanied by variations of physical quantities which can be registered with measuring instruments and applied to control the operation of technical devices. Electrical potentials accompanying skeletal muscle activity (called EMG signals) belong to this type of biosignals. Various movements are related to the recruitment of distinct motor units; the different spatial locations of these units in relation to the measuring points lead to the formation of EMG signals with differing features, e.g. different rms values and different frequency spectra. The features depend on the type of executed or (in the case of an amputated limb) only imagined movement, so they provide information about the user's intention ([3] [4]). Bioprostheses can utilize the EMG signals measured on the handicapped person's body (on the stump of a hand or a leg) to control the actuators of an artificial hand's fingers, the knee and foot of an artificial leg, or the wheels of a wheelchair. The paper presents the concept of a bioprosthesis control system which in principle consists in the recognition of the prosthesis user's intention (i.e. the patient's intention) based on adequately selected parameters of the EMG signal, and then in the realisation of the control procedure uniquely determined by the recognised state. The paper is arranged as follows. Chapter 1 presents the concept of a prosthesis control system based on the recognition of the patient's intention and provides an insight into the sequential classification method. Chapter 2 presents the key
recognition algorithm based on the rough sets approach. Chapter 3 in turn describes experimental investigations of the proposed algorithm and discusses their results.
2 Control System of Bioprosthesis

In the considered control concept we assume that each prosthesis operation (irrespective of prosthesis type) consists of a specific sequence of elementary actions, and the patient's intention means his will to perform a specific elementary action [9]. Thus prosthesis control is a discrete process in which at the n-th stage the following occur successively:
• the measurement of EMG signal parameters xn (xn ∈ X ⊆ Rd) that represent the patient's will jn (jn ∈ M = {1, 2, . . . , m}) (the intention to take a particular action),
• the recognition of this intention (the result of recognition at the n-th stage will be denoted by in ∈ M),
• the realization of an elementary action en ∈ E, uniquely defined by the recognized intention.
This means that there are m elementary actions E = {e(1), e(2), . . . , e(m)} (an exemplary meaning of elementary actions in relation to a dexterous hand prosthesis is defined in chapter 3). For the purpose of determining the patient's intent recognition algorithm, we will apply the concept of so-called sequence recognition. The essence of sequence recognition in relation to the issue we are examining is the assumption that the intention at a given stage depends on earlier intentions. This assumption seems relevant, since particular elementary actions of a prosthesis must compose a defined logical entity. This means that not all sequences of elementary actions are acceptable, only those which contribute to the activities which can be performed by a prosthesis. Examples of such actions (sequences of elementary actions) are presented in chapter 3. Since the patient's current intention depends on history, in general the decision (recognition) algorithm should take into account the whole sequence of the preceding feature values (parameters of the EMG signal), x̄n = {x1, x2, . . . , xn} [5]. It must be stressed, however, that sometimes it may be difficult to include all the available data, especially for larger n. In such cases we have to allow various simplifications (e.g. making allowance for only the k most recent values in the vectors) or compromises (e.g. substituting the whole classification history segment that spreads as far back as the k-th instant with data processed in the form of the decision established at that instant). Apart from the data measured for a specific patient, we need some more general information to take a valid recognition decision, concerning the general associations that hold between decisions (patient's intentions) and features (EMG signal parameters). From now on we assume that it has the form of a so-called training set, which - in the considered decision problem - consists of training sequences:
Fig. 1. System of bio-prosthesis control via sequential recognition of patient's intention
S = {S1, S2, . . . , SN}.   (1)

A single patient's record

Sk = ((x1,k, j1,k), (x2,k, j2,k), . . . , (xL,k, jL,k))   (2)
denotes a single-patient sequence of prosthesis activity that comprises L EMG signal observation instants and the corresponding patient's intentions. In practical situations the acquisition of the learning set is a rather difficult task. An appropriate procedure requires simultaneous (synchronous) measurement of the EMG signal (usually in multi-channel mode) and observation of the finger posture and hand movement which define the elementary action. For the purpose of experimental investigations a special measurement stand was elaborated; its concept and structure are presented in [10]. Fig. 1 shows the block diagram of the dynamic process of bio-prosthesis control.
3 Algorithm of Sequential Recognition Based on Rough Sets Theory

In this section we will apply the rough sets theory [7] to the construction of the SC algorithm for the n-th instant:

ψ(x̄n, S) = in,   n = 1, 2, . . . ,   in ∈ M.   (3)
Now, the training set (1) is considered as an information system S = (U, A), where U and A are finite sets called the universe and the set of attributes, respectively. For every attribute a ∈ A we determine its set of possible values Va, called the domain of a. Such an information system can be represented as a table, in which every row represents a single sequence (2). In the successive columns of the k-th row of this table we have the values of the following attributes:
x(1)1,k, x(2)1,k, . . . , x(d)1,k, j1,k, x(1)2,k, x(2)2,k, . . . , x(d)2,k, j2,k, . . . , x(1)L,k, x(2)L,k, . . . , x(d)L,k, jL,k.   (4)

In such an information system we can define in different ways the subset C ⊆ A of condition attributes and the single-element set M ⊆ A which will be the decision attribute. Consequently, we obtain the decision system S = (U, C, M) in which, knowing the values of the condition attributes, our task is to find the value of the decision attribute, i.e. to find an appropriate pattern recognition algorithm for sequential classification. Similarly, as in algorithms based on the probabilistic approach [5], the subset of condition attributes can be chosen in different ways. Taking into account the set of condition attributes C, let us denote by Xj the subset of U for which the decision attribute is equal to j, j = 1, 2, . . . , m. Then, for every j we can define respectively the C-lower approximation and the C-upper approximation of the set Xj, i.e.:

C∗(Xj) = ⋃ {C(x) : C(x) ⊆ Xj}, over x ∈ U,   (5)

C^∗(Xj) = ⋃ {C(x) : C(x) ∩ Xj ≠ ∅}, over x ∈ U.   (6)
Hence, the lower approximation of the set Xj is the set of objects x ∈ U for which, knowing the values of the condition attributes C, we can say for sure that they belong to the set Xj. Moreover, the upper approximation of the set Xj is the set of objects x ∈ U for which, knowing the values of the condition attributes C, we cannot say for sure that they do not belong to the set Xj. Consequently, we can define the C-boundary region of Xj as follows:

CNB(Xj) = C^∗(Xj) − C∗(Xj).   (7)
If for a given j the boundary region of Xj is the empty set, i.e. CNB(Xj) = ∅, then Xj is crisp, while in the opposite case, i.e. CNB(Xj) ≠ ∅, we deal with a rough set. For every decision system we can formulate its equivalent description in the form of a set of decision formulas For(C). Each row of the decision table is represented by a single if-then formula: on the left side of this implication we have the logical product (and) of all expressions from C stating that every attribute is equal to its value, and on the right side we have the expression stating that the decision attribute is equal to one of the class numbers from M. These formulas are necessary for constructing different pattern recognition algorithms for sequential classification.

Algorithm without Context (Rough-0)

As usual, we start with the algorithm without the context, which is well known in the literature [2], [7], [8]. In this case our decision table contains
N × L patterns, each having d condition attributes (features) and one decision attribute (the class to which the pattern belongs). The application of rough set theory to the construction of classifier (3) from the learning set (1) can be presented according to the following items [11] (a minimal code sketch of this procedure is given after the list):

1. If the attributes are real numbers, then discretization preprocessing is needed first. After this step, the value of each attribute is represented by the number of the interval in which this attribute is included. For different attributes we can choose different numbers of intervals in order to obtain their proper covering; let us denote for the l-th attribute (l = 1, 2, . . . , d) by vl_pl its pl-th value or interval.
2. The next step consists in finding the set For(C) of all decision formulas from (1), which have the following form:

   IF x(1) = v1_p1 AND . . . AND x(d) = vd_pd THEN ψ(x, S) = j.   (8)

   It must be noted that from the learning set (1) we may obtain more than one rule for a particular case.
3. Then for the formula (8) we determine its strength factor, which is the number of correctly classified patterns during the learning procedure.
4. For the set of formulas For(C), for every j = 1, 2, . . . , m we calculate their C-lower approximation C∗(Xj) and their boundary regions CNB(Xj).
5. In order to classify the n-th pattern xn (after discretization of its attributes if necessary) we look for matching rules in the set For(C), i.e. we take into account rules in which the left-hand condition is fulfilled by the attributes of the recognized pattern. If there is only one matching rule, then we classify this pattern to the class indicated by its decision attribute j, because such a rule certainly belongs to the lower approximation of all rules indicating the j-th class, i.e. this rule is certain.
6. If there is more than one matching rule in the set For(C), it means that the recognized pattern should be classified by the rules from the boundary regions CNB(Xj) (j = 1, 2, . . . , m), and in this case as a decision we take the index of the boundary region for which the strength of the corresponding rule is maximal. In such a case we take into account the rules which are possible.
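The rule matching in items 2-6 can be sketched as follows. This is a minimal Python illustration under our own naming (build_rules, classify) and under the assumption that features have already been discretized into interval indices; it is not the authors' implementation, and the strength factor is read here simply as the number of training patterns supporting a rule.

```python
from collections import Counter, defaultdict

def build_rules(patterns, labels):
    """Collect if-then rules of form (8): attribute tuple -> class,
    together with a strength count per class."""
    strength = defaultdict(Counter)          # rule premise -> class counts
    for x, j in zip(patterns, labels):
        strength[tuple(x)][j] += 1
    return strength

def classify(x, strength, default=None):
    """Single matching class -> certain rule; otherwise the pattern falls
    into a boundary region and the strongest rule decides."""
    votes = strength.get(tuple(x))
    if votes is None:
        return default                       # no matching rule at all
    if len(votes) == 1:                      # certain rule (lower approximation)
        return next(iter(votes))
    return votes.most_common(1)[0][0]        # possible rules: strongest one wins

# toy usage with already-discretized feature vectors
train_x = [(0, 1, 2), (0, 1, 2), (1, 1, 0), (0, 1, 2)]
train_j = [1, 1, 2, 2]
rules = build_rules(train_x, train_j)
print(classify((0, 1, 2), rules))            # boundary region -> class 1 (strength 2 vs 1)
```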
Algorithm with k-th Order Context (Rough-k)

This algorithm includes a k-instant-backward dependence (k < L) with full measurement data, i.e. the decision at the n-th instant is made on the basis of the vector of features:

x̄(k)n = (x(1)n−k, . . . , x(d)n−k, . . . , x(1)n−1, . . . , x(d)n−1, x(1)n, . . . , x(d)n).   (9)
This means that the features (9) now constitute the condition attributes in the decision table. Let us denote by D their total number. It is clear that
D = (k+1) × d. From the learning set (1) we can create the decision table with D + 1 columns (D columns of features and an additional column containing the class number of the n-th recognized pattern) and with N × (L − k) rows (from each learning sequence (2) L − k subsequences of length k + 1 can be obtained). The main idea of the proposed method of SC is exactly the same as for independent patterns - the differences concern details in the procedure of constructing the set of decision formulas For(C), which now are as follows:

IF x(1) = v1_p1 AND . . . AND x(D) = vD_pD THEN ψ(x̄(k)n, S) = jn.   (10)
The further procedure of SC is the same as previously, i.e. we calculate C∗(Xjn) and CNB(Xjn) and the final decision is made according to steps 4, 5 and 6.

Reduced Algorithm with k-th Order Context (R-Rough-k)

In this approach to classification at the n-th instant, we substitute the whole object history segment which - as previously - covers the k last instants with data processed in the form of the decisions established at these instants, ī(k)n = (in−k, in−k+1, . . . , in−1). Such a concept leads to the following set of attributes in the decision table:

in−k, in−k+1, . . . , in−1, xn,   (11)
and to the decision formulas:

IF jn−k = v1_p1 AND . . . AND jn−1 = vk_pk AND x(1)n = v(k+1)_p(k+1) AND . . . AND x(d)n = v(k+d)_p(k+d) THEN ψ(ī(k)n, xn, S) = jn,   (12)

which can be determined from the training set (1) (v1_p1, v2_p2, . . . , vk_pk ∈ M).
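The decision tables for Rough-k and R-Rough-k can be assembled from the training sequences roughly as below (Python; the function names are illustrative, not the authors' code). Each learning sequence of length L yields L − k rows: for Rough-k a row holds the (k+1)·d discretized features of the window, for R-Rough-k the k preceding class labels followed by the current feature vector.

```python
def rough_k_rows(sequence, k):
    """sequence: list of (features, label) pairs of length L.
    Returns rows of (k+1)*d attributes plus the decision attribute, as in (10)."""
    rows = []
    for n in range(k, len(sequence)):
        window = sequence[n - k:n + 1]                      # k+1 consecutive instants
        attrs = [f for feats, _ in window for f in feats]   # concatenated features
        rows.append((tuple(attrs), window[-1][1]))
    return rows

def reduced_rough_k_rows(sequence, k):
    """Reduced variant: k previous class labels + current features, as in (11)-(12)."""
    rows = []
    for n in range(k, len(sequence)):
        prev_labels = [lab for _, lab in sequence[n - k:n]]
        feats, label = sequence[n]
        rows.append((tuple(prev_labels) + tuple(feats), label))
    return rows

# toy sequence with d = 2 discretized features per instant
seq = [((0, 1), 1), ((1, 1), 1), ((2, 0), 2), ((2, 2), 3)]
print(rough_k_rows(seq, k=1))
print(reduced_rough_k_rows(seq, k=1))
```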
4 Experimental Results

4.1 Description of Experiments
In order to study the performance of the proposed method of sequential recognition of the patient's intent, computer experiments were carried out. In the control process the grasping of 6 types of objects (a pen, a credit card standing in a container, a computer mouse, a cell phone lying on the table, a kettle and a tube standing on the table) was considered. In the process of grasping with a hand, 7 types of macro-actions were distinguished [6]: rest position, grasp preparation, grasp closing, grabbing, maintaining the grasp, releasing the grasp and transition to the rest position - which gave in total 25 different elementary actions (or pattern classes). EMG signals registered in a multi-point system [4] on the forearm of a healthy man were used for the recognition of elementary actions. To simplify the experiments, a constant time of 250 ms for each action was adopted. The rms values of the EMG signals coming from 3 electrodes were accepted as the feature vector x. The electrodes were located above the following muscles: the wrist extensor (extensor carpi radialis brevis), the wrist flexor (flexor carpi ulnaris) and the thumb extensor (extensor pollicis brevis). The basis for determination of the recognition algorithm were learning sequences (1) containing a set of pairs: segment of EMG signal / class of elementary action. Such a set was determined experimentally by means of synchronous registration of finger movements (by video camera) and the EMG signal. The algorithm was constructed on the basis of the collected learning sequences (1) of the length of 7 elementary actions. The tests were conducted on 150 subsequent sequences.

Table 1. Frequency of correct classification (in per cent) versus the number of learning sets for various algorithms (names are explained in the text)

Algorithm    50     75     100    125    150
Rough-0      52.5   55.9   63.2   65.5   68.4
Rough-1      74.2   77.5   81.6   84.0   85.2
R-Rough-1    73.8   76.9   81.5   84.2   84.9
Rough-2      75.8   79.3   82.1   85.3   87.5
R-Rough-2    75.6   78.7   81.6   85.1   87.4
Bayes        64.2   70.1   71.0   72.8   73.6
Markov-1     83.2   86.3   89.1   90.5   92.4
Markov-2     83.7   87.5   90.3   91.6   92.8

4.2 Results and Conclusions
The proposed different concepts of sequential algorithms with rough sets were compared with the corresponding algorithm for single classification (Rough-0) and with two algorithms based on the probabilistic model ([1], [5]): the naive Bayes classifier (Bayes) and the sequential algorithm with 1st and 2nd order Markov dependence (Markov-1 and Markov-2, respectively). The outcome is shown in Table 1. It includes the frequency of correct decisions for the investigated algorithms depending on the number of training sets. These results imply the following conclusions:

1. There occurs a common effect within each algorithm group: algorithms that do not include the inter-state dependences and treat the sequence of intentions as independent objects (Rough-0 and Bayes) are always worse
than those that were purposefully designed for the sequential decision task (Rough-1(2), Markov-1(2)).
2. Although the algorithms with the original data are better than their reduced versions, the difference appears to be negligible.
3. The model of the second-order dependency turns out to be more effective than the first-order approach.

Acknowledgement. This work was financed from the Polish Ministry of Science and Higher Education resources in the years 2007-2010 as research project No. N518 019 32/1421.
References

1. Duda, R., Hart, P., Stork, D.: Pattern classification. Wiley Interscience, New York (2001)
2. Fang, J., Grzymala-Busse, J.: Leukemia prediction from gene expression data - a rough set approach. In: Rutkowski, L., Tadeusiewicz, R., Zadeh, L., Zurada, J. (eds.) ICAISC 2006. LNCS, vol. 4029, pp. 899-908. Springer, Heidelberg (2006)
3. Englehart, K., Hudgins, B., Parker, P.A., Stevenson, M.: Classification of the myoelectric signal using time-frequency based representations. Medical Eng. and Physics, Special Issue: Intelligent Data Analysis in Electromyography and Electroneurography, 415-423 (1999)
4. Krysztoforski, K., Wolczowski, A., Bedzinski, R.: Recognition of palm finger movements on the basis of EMG signals with application of wavelets. TASK Quarterly 8, 25-33 (2004)
5. Kurzynski, M.: Benchmark of approaches to sequential diagnosis. In: Lisboa, P. (ed.) Artificial Neural Networks in Biomedicine, pp. 129-141. Springer, Heidelberg (1998)
6. Kurzynski, M., Wolczowski, A.: Control of dexterous hand via recognition of EMG signal using combination of decision-tree and sequential classifier. In: Kurzynski, M., Wozniak, M. (eds.) Computer Recognition Systems, vol. 2, pp. 687-694. Springer, Heidelberg (2007)
7. Pawlak, Z.: Rough sets - Theoretical aspects of reasoning about data. Kluwer Academic Publishers, Dordrecht (1991)
8. Pawlak, Z.: Rough sets, decision algorithms and Bayes theorem. European Journal of Operational Research 136, 181-189 (2002)
9. Wolczowski, A., Kurzynski, M.: Control of artificial hand via recognition of EMG signals. In: Barreiro, J.M., Martín-Sánchez, F., Maojo, V., Sanz, F. (eds.) ISBMDA 2004. LNCS, vol. 3337, pp. 356-364. Springer, Heidelberg (2004)
10. Wolczowski, A., Myslinski, S.: Identifying the relation between finger motion and EMG signals for bioprosthesis control. In: Proc. of 12th IEEE Int. Conf. on Methods and Models in Automation and Robotics, Miedzyzdroje, pp. 127-137 (2006)
11. Zolnierek, A.: Application of rough sets theory to the sequential diagnosis. In: Maglaveras, N., Chouvarda, I., Koutkias, V., Brause, R. (eds.) ISBMDA 2006. LNCS (LNBI), vol. 4345, pp. 413-422. Springer, Heidelberg (2006)
Hierarchic Approach in the Analysis of Tomographic Eye Image

Robert Koprowski and Zygmunt Wrobel

University of Silesia, Institute of Computer Science, 41-200 Sosnowiec, Bedzinska 39, Poland
[email protected], [email protected]
Summary. The paper presents an algorithm designed to detect layers of the eye's retina using an analysis in a hierarchic approach. This type of approach has been implemented and tested on images obtained by means of the Copernicus OCT (Optical Coherence Tomography) device. The algorithm created is an original approach to detecting contours, layers and their thicknesses. The approach presented is an expansion of the approaches described in [2], [4] and [5] and enables identification and recognition of external limiting membranes, the retina and other layers in a very short time. The algorithm has been implemented in the Matlab and C environment.
1 Introduction

Images originating from a Copernicus tomograph, due to its specific nature of operation, are obtained in sequences of a few to a few dozen 2D images within approx. 1 s, which provide the basis for 3D reconstruction. Because of their number, the analysis of a single 2D image should proceed within a time not exceeding 10 ms, so that the time the operator waits for the result is not onerous (as it is easy to calculate for the above value, for a few dozen images in a sequence, of resolution usually MxN = 740x820, this time will be shorter than 1 s). At the stage of image preprocessing (like in [1], [2] and [3]) the input image LGRAY is initially subject to filtration using a median filter with a mask h of size MhxNh = 3x3 (in the final version of the software this mask may also be set to MhxNh = 5x5 so as to obtain better precision of algorithm operation for a certain specified group of images). The image LM obtained this way is subject consecutively to decomposition to an image of lower resolution and analysed in terms of layer detection.
2 Image Decomposition

It is assumed that the algorithm described should give satisfactory results considering mainly the criterion of operating speed. Although the methods (algorithms) described in [1], [2] and [3] feature high precision of computations, they are not fast enough (it is difficult to obtain the speed of single 2D image analysis on a PII 1.33 GHz processor in a time not exceeding 10 ms).
Fig. 1. Blocks arrangement in LM image
Fig. 2. Pictorial diagram of layers sought arrangement in a tomographic image
Therefore a reduction of the image LM resolution by approximately a half was proposed, to such a value of the number of pixels in rows and columns which is a power of 2, i.e. MxN = 256x512 (LM2), applying further its decomposition to the image LD16 (where the symbol 'D' means decomposition, while 16 is the size of the block for which it was obtained). Each pixel of the input image after decomposition has a value equal to the median of a 16x16 area (block) of the input image, acc. to Fig. 1. An example of the LD16 result and the input image LM2 is shown in Fig. 3. The image LD16 is then subject to determination of the positions of the maximum-value pixels in each column, i.e.:

LDM16(m, n) = 1 if LD16(m, n) = max_m(LD16(m, n)), 0 otherwise   (1)

where m is the row number (counted from one) and n is the column number (counted from one). Using the described method of selecting the maximum value, in 99 percent of cases only one maximum value per column is obtained. To determine the precise position of the Gw and Rp limits (Fig. 2) it turned out necessary to use one more image, LDB16, i.e.:

LDB16(m, n) = 1 if |LD16(m, n) − LD16(m + 1, n)| > pr, 0 otherwise   (2)

for m ∈ (1, M−1), n ∈ (1, N), where pr is a threshold assumed within the range (0, 0.2).
Fig. 3. Image before and after decomposition - LM 2 and LD16 , respectively
As a result, the coordinates of the Gw(n) and Rp(n) limit position points are obtained as those positions of the values 1 in the LDB16 image for which Gw(n) ≤ Rp(n); Rp(n) is obtained from the LDB16 image in the same way. This method, with the pr threshold selected at the level of 0.01, gives satisfactory results in around 70 percent of cases of uncomplicated images (i.e. images without a visible pathology). Unfortunately, for the other 30 percent of cases the selection of the pr threshold within the adopted limits does not reduce the resulting errors. The correction of erroneous recognitions of the Gw(n) and Rp(n) layers at this level is important precisely so that these errors will not be duplicated (in the hierarchic approach presented below) in the subsequent, more precise approximations.
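A possible implementation of the block decomposition and of the binary maps (1)-(2) is sketched below (Python with NumPy; a simplified illustration under our assumptions, not the authors' Matlab/C software). It assumes the rescaled image LM2 holds normalised grey levels.

```python
import numpy as np

def decompose(image, block=16):
    """LD: each pixel is the median of a block x block area of the input image."""
    M, N = image.shape
    M, N = M - M % block, N - N % block            # crop to a multiple of the block size
    blocks = image[:M, :N].reshape(M // block, block, N // block, block)
    return np.median(blocks, axis=(1, 3))

def column_maxima(ld):
    """LDM (1): mark, in every column, the pixel holding the column maximum."""
    ldm = np.zeros_like(ld, dtype=bool)
    ldm[np.argmax(ld, axis=0), np.arange(ld.shape[1])] = True
    return ldm

def vertical_edges(ld, pr=0.01):
    """LDB (2): mark pixels whose lower neighbour differs by more than pr."""
    ldb = np.zeros_like(ld, dtype=bool)
    ldb[:-1, :] = np.abs(np.diff(ld, axis=0)) > pr
    return ldb

lm2 = np.random.rand(256, 512)                     # stand-in for the rescaled image LM2
ld16 = decompose(lm2, 16)                          # 16 x 32 decomposed image
ldm16, ldb16 = column_maxima(ld16), vertical_edges(ld16, pr=0.01)
```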
3 Correction of Erroneous Recognitions

In the LDB16 image white pixels are visible in an excess number for most columns. The two largest objects arranged along the 'maxima' in columns entirely coincide with the Gw and Rp limit positions. Based on that, and having carried out the above analysis for a few hundred images, the following limitations were adopted:

- for the coordinates Rp(n) found in the LDM16 image there must be at the same time LDM16(m, n) = 1; in other cases this point is considered as a disturbance or as a point of the Gw(n) layer,
- if only one pixel of value 1 occurs in the images LDM16 and LDB16 for the same position, i.e. for the analysed n there is LDM16(m, n) = LDB16(m, n), the history is analysed for n > 1 and it is checked whether |Gw(n−1) − Gw(n)| > |Rp(n−1) − Rp(n)|, i.e.:

  Rp(m, n) = m if LDM16(m, n) = LDB16(m, n) = 1 ∧ |Gw(n−1) − Gw(n)| > |Rp(n−1) − Rp(n)|, 0 otherwise   (3)

  for m ∈ (1, M−1), n ∈ (2, N),
- if |Gw(n−1) − Gw(n)| ≤ |Rp(n−1) − Rp(n)|, the condition Gw(n−1) − Gw(n) = 1 is checked (thereby giving up fluctuations against the history n−1 within the range ±1 of area A (Fig. 1)). If so, then this point is the next Gw(n) point. In the other cases the point is considered as a disturbance. It is assumed that the lines coincide, Gw(n) = Rp(n), if Rp(n−1) − Rp(n) = 1 and only one pixel of value 1 occurs in the LDM16 image,
- in the case of occurrence in a specific column of a larger number of pixels than 2, i.e. if sum_m(LDB16(m, n)) > 2, a pair Gw(n−1), Rp(n−1) is matched (if it occurs) so that |Gw(n − 1) − Gw(n − 1)| − |Rp(n) − Rp(n)| = 1 would occur. In this case it may happen that the lines Gw(n) and Rp(n) will coincide. However, in the case of finding more than one solution, that one is adopted for which LD16(Gw(n), n) + LD16(Rp(n), n) assumes the maximum value (the maximum sum of weights in LD16 occurs).
Fig. 4. Example of LDB16 image for pr = 0.01 with incorrectly marked Gw(n), Rp(n) points (layers)
Fig. 5. Example of LDB16 image for pr = 0.01 with incorrectly marked Gw(n), Rp(n) points (layers)
The presented correction gives, for the above class of images, an effectiveness of around 99% of cases. Despite the adopted limitations the method gives erroneous results for the initial value n = 1 and, unfortunately, these errors continue to be duplicated. Unfortunately, the adopted relatively rigid conditions on the acceptable difference |Gw(n − 1) − Gw(n − 1)| or |Rp(n) − Rp(n)| cause large errors for another class of tomographic images, in which a pathology occurs in any form (Fig. 5). As may be seen in Fig. 4 and Fig. 5, problems occur not only for the initial n values, but also for the remaining points. The reason for erroneous recognitions of the layer positions is the difficulty in distinguishing the proper layers in the case of discovering three 'lines', i.e. three points in a specific column whose positions change within the acceptable range for individual n. These errors cannot be eliminated at this stage of decomposition into 16x16 pixel areas (or 16x32 image resolution). They will be the subject of further considerations in the next sections.
4 Reducing the Decomposition Area

Increasing the accuracy, and thereby reducing the size of the Am,n area (Fig. 1) - a block in the LM image - is a relatively simple stage of tomographic image processing with particular focus on the operating speed. It has been assumed that the Am,n areas will be sequentially reduced by half in each iteration - down to the 1x1 size. The reduction of the Am,n area is equivalent to the performance of the next stage of approximation of the Gw and Rp line positions. Increasing the accuracy (precision) of the Gw and Rp line positions determined in the previous iteration is connected with two stages:

- concentration of the (m, n) coordinates in the sense of determining intermediate values ((m, n) points situated exactly in the centre) by means of the linear interpolation method;
- change of the concentrated points' positions so that they would better approximate the limits sought.

While the first part is intuitive and results only in resampling, the second requires more precise clarification. The second stage consists in matching individual
Fig. 6. Pictorial diagram of the process of Rp course matching to the edge of the layer sought. Individual pixels independent of each other may change the position within the ±pu range
Fig. 7. Results of matching for two iterations. White colour marks the input Rp points, and red and green the consecutive approximations
points to the layer sought. As in the ox axis the image is by definition already decomposed and a pixel's brightness in the analysed image corresponds to the median value of the original image in window A (Fig. 1), the modification of the Rp and Gw point positions occurs only in the vertical axis. The analysis of individual Rp and Gw points is independent in the sense of not depending on the position of point n − 1, as was the case in the previous section. Each of the Rp points, left from the previous iteration or newly created from interpolation, is matched in the consecutive algorithm stages with increasingly high precision to the RPE layer. The position of point Rp(n) changes within the range of ±pu (Fig. 6), where the variation range does not depend on the scale of considerations (the size of area A) and strictly results from the distance between Gw and Rp (Fig. 2). For blocks A of sizes from 16x16 to 1x1, pu is constant and equal to 2. This value has been taken based on the typical average distance between Gw and Rp, equal to around 32 pixels for the analysed few hundred LGRAY images, which means that after decomposition into blocks A of size 16x16 these are two pixels, i.e. pu = 2. Within this ±2 range a maximum is sought in the LDM image and a new position of the Rp or Gw point is assumed for it. Thus the course of Rp or Gw is closer to the actual course of the layer analysed. The obtained results of matching are presented in Fig. 7. White colour shows the input Rp values as input data for this stage of the algorithm and decomposition into A blocks of size 16x16 (LDM16 and LDB16 images), red colour - the results of matching for A blocks of size 8x8 (LDM8 and LDB8 images), and green colour - the results of matching for A blocks of size 4x4 (LDM4 and LDB4 images). As may be seen from Fig. 7, with the next decompositions into consecutively smaller A areas, and thus an image of higher resolution, a higher precision is obtained at the cost of time (because the number of analysed Rp(n), Gw(n) points and their neighbourhoods ±pu increases). This method for A of 16x16 size has such strong properties of a global approach to pixel brightness that there is no need to introduce at this stage additional
actions aimed at distinguishing layers situated close to each other (which have not been visible so far due to image resolution). At A areas of 4x4 size, however, other layers are already visible, which should be further properly analysed. At increased precision, the Io layer is visible, situated close to the Rp layer (Fig. 7). Thereby in the area marked with a circle there is a high position fluctuation of the Rp layer along the oy axis. Because of that, the next step of the algorithm has been developed, taking into account the separation into Rp and Io layers at an appropriately high resolution.
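One refinement step, under our reading of the above procedure, might look as follows (Python with NumPy; the names are illustrative): the layer course is resampled to the doubled column resolution by linear interpolation and every point is then moved, independently of its neighbours, to the row of maximum brightness of the decomposed image within ±pu.

```python
import numpy as np

def refine_layer(rows, ld_fine, pu=2):
    """rows: layer position (row index per column) at the coarser scale.
    ld_fine: decomposed image at the next, twice finer scale."""
    # 1) resample to the finer column grid by linear interpolation
    n_coarse, n_fine = len(rows), ld_fine.shape[1]
    fine = np.interp(np.linspace(0, n_coarse - 1, n_fine),
                     np.arange(n_coarse), np.asarray(rows, float)) * 2.0
    fine = np.clip(np.round(fine).astype(int), 0, ld_fine.shape[0] - 1)
    # 2) move every point, within +/- pu rows, to the brightest pixel of its column
    for n in range(n_fine):
        lo, hi = max(0, fine[n] - pu), min(ld_fine.shape[0], fine[n] + pu + 1)
        fine[n] = lo + np.argmax(ld_fine[lo:hi, n])
    return fine
```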
5 Analysis of Rp and Io Layers

The analysis of layers consists in separating the line Io from the line Rp originating from the previously executed stages of the algorithm. The case is facilitated by the fact that on average around 80-90% of pixels in each tomographic image have their maximum value in each column exactly at the point Rp (this property has already been used in the first section). So the only problem is to detect the position of the Io line. One of the possible approaches consists of an attempt to detect the contour of the layer sought in the LIR image. This image originates from the LM image thanks to widening the Rp(n) layer range along the oy axis within the range of ±pI = 20 pixels. The LIR image has been obtained with the number of columns consistent with the number of LM image columns and with the number of rows equal to 2pI + 1. Fig. 8 shows the image LIR = LM(m − Rp(n), n) originating from the LM image from Fig. 7. Unfortunately, because of a pretty high individual variation of the Io layer position relative to Rp, the selected pI range may be increased even twice in further stages of the algorithm (as will be described later). To determine consecutive points of the Io layer position, the grey levels in individual columns of the LIR image are interpolated with a 4th-order polynomial, obtaining this way the LIRS image of smoothed grey-level changes in individual columns. The position of the point Io(n) occurs in the place of the highest gradient occurring within the range (Rp(n) − pI) ÷ Rp(n) relative to the LMS image, or 1 ÷ pI relative to the LIRS image. As may be seen in Fig. 8, the method presented copes very well with detecting the Gw, Rp and Io layers marked in red, blue and green, respectively.

Fig. 8. Parts of LM images with marked courses: Gw - red, Rp - blue, and Io - green
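A possible reading of this step in code (Python with NumPy; illustrative only, not the authors' implementation): for every column of the LIR strip the grey-level profile is smoothed with a 4th-order polynomial and Io is placed at the largest gradient of the fitted profile.

```python
import numpy as np

def detect_io(lir):
    """lir: strip of 2*pI+1 rows around the Rp course, one column per image column.
    Returns, per column, the row (relative to the strip) of the steepest change."""
    rows = np.arange(lir.shape[0])
    io = np.zeros(lir.shape[1], dtype=int)
    for n in range(lir.shape[1]):
        coeffs = np.polyfit(rows, lir[:, n], deg=4)     # 4th-order polynomial fit
        smooth = np.polyval(coeffs, rows)
        io[n] = np.argmax(np.abs(np.diff(smooth)))      # position of the highest gradient
    return io
```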
6 Layers Thickness Map and 3D Reconstruction

The analysis of the LM image sequence, and more precisely the acquisition of the Gw, Rp and Io layers, allows performing 3D reconstruction and layer thickness measurement. A designation for an image sequence with an upper index (i) has been adopted, where i = {1, 2, 3, . . . , k−1, k}, i.e. LM(1), LM(2), LM(3), . . . , LM(k−1), LM(k).
Fig. 9. Spatial position of Gw
Fig. 10. Example 3D reconstruction of layers: Gw - blue, Rp - red, and Io - green

For a sequence of 50 images the position of the Gw (Fig. 10), Rp and Io layers was measured, as well as the Io − Rp layer thickness. The 3D reconstruction performed based on the LM(i) image sequence is the key element crowning the results obtained from the suggested algorithm. The sequence of images, and more precisely the sequence of the Gw(i)(n), Rp(i)(n) and Io(i)(n) layer positions, provides the basis for the 3D reconstruction of a tomographic image. For an example sequence of 50 images and a single LM(i) image resolution of MxN = 256x512, a 3D image is obtained, composed of the three layers Gw, Rp and Io of size 50x512. The results are shown in Fig. 10 - the reconstruction performed using the algorithm described above was carried out based on the Gw(i)(n), Rp(i)(n) and Io(i)(n) information. In an obvious way, a possibility of automatic determination of the thickest or the thinnest places between any points results from the layers presented in Fig. 10.

Table 1. Execution time of the algorithm to find the Gw, Rp and Io layers and the 3D reconstruction

Processing stage                               Total time since         Time of individual
                                               processing start [ms]    stage computations [ms]
Preprocessing                                  10.10                    10.10
Preliminary breakdown into Gw and Rp − Io      13.20                    3.20
Gw and Rp − Io approximation for A - 16x16     15.90                    2.74
Gw and Rp − Io approximation for A - 8x8       23.70                    7.63
Precise Rp and Io breakdown                    44.60                    20.85
7 Summary

The algorithm presented detects the Gw, Rp and Io layers within up to 50 ms on a PC with a 2.5 GHz Intel Core 2 Quad processor. The time was measured as a mean value over the analysis of 700 images, dividing individual images into A blocks (Fig. 1) consecutively of sizes 16x16, 8x8, 4x4, 2x2. This time may
be reduced by modifying the number of approximation blocks, at the same time increasing the layer position identification error - results are presented in the table above. The specification of the analysis times of the individual algorithm stages presented in the table above clearly shows that the longest are the first stage of image preprocessing, where filtration with a median filter is of prevailing importance (in terms of execution time), and the last stage of precise determination of the Rp and Io layer positions. The precise Rp and Io breakdown is related to the analysis, and mainly to the correction, of the Rp and Io point positions in all columns of the image for the most precise approximation (because of the small distance between Rp and Io it is not possible to perform this breakdown in earlier approximations). So a reduction of computation times may occur only at the cost of increasing the error of the layer thickness measurement. And so, for example, for the analysis in the first approximation for A of 32x32 size and then for 16x16, gross errors are obtained, generated in the first stage and duplicated in the next ones. For approximations for A of sizes 16x16 and then 8x8, 4x4, 2x2 and 1x1 the highest accuracy is obtained, however the computation time increases approximately twice.
References

1. Koprowski, R., Wrobel, Z.: Analiza warstw na tomograficznym obrazie oka (Layer Analysis in a Tomographic Eye Image), Systemy Wspomagania Decyzji (Decision Support Systems), Zakopane (2007)
2. Koprowski, R., Wrobel, Z.: Rozpoznawanie warstw na tomograficznym obrazie oka w oparciu o detekcje krawedzi Canny (Layers Recognition in Tomographic Eye Image Based on Canny Edges Detection). Submitted for Conference on Information Technology in Biomedicine, Kamien Slaski (2008)
3. Koprowski, R., Wrobel, Z.: Layers Recognition in Tomographic Eye Image Based on Random Contour Analysis - sent to Mirage (2009)
4. Koprowski, R., Wrobel, Z.: Determining the contour of cylindrical biological objects using the directional field. In: Proceedings of the 5th International Conference on Computer Recognition Systems CORES 2007. Advances in Soft Computing (2007)
5. Wrobel, Z., Koprowski, R.: Automatyczne metody analizy orientacji mikrotubul (Automatic Methods of Microtubules Orientation Analysis), Wydawnictwo US (2007)
Layers Recognition in Tomographic Eye Image Based on Random Contour Analysis

Robert Koprowski and Zygmunt Wrobel

University of Silesia, Institute of Computer Science, 41-200 Sosnowiec, Bedzinska 39, Poland
[email protected], [email protected]
Summary. The paper presents an algorithm designed to detect layers of the eye's retina using an area analysis. This analysis has been implemented and tested on images obtained by means of the Copernicus OCT (Optical Coherence Tomography) device. The algorithm created is an original approach to detecting layers, contours and pseudo-parallels. The approach presented is an expansion of the approaches described in [1], [2] and [3] and enables identification and recognition of external limiting membranes, the retina and other layers. The algorithm has been implemented in the Matlab and C environment.
1 Introduction

Like in [1] and [2], the input image LGRAY is initially subject to filtration using a median filter with a mask h of size MhxNh = 3x3. The first stage of the edge detection method used [4] consists in convolving the input image LM of resolution MMxNM, i.e.

LGX(m, n) = Σ(mh = −Mh/2 .. Mh/2) Σ(nh = −Nh/2 .. Nh/2) LM(m + mh, n + nh) · hx(mh, nh)   (1)

LGY(m, n) = Σ(mh = −Mh/2 .. Mh/2) Σ(nh = −Nh/2 .. Nh/2) LM(m + mh, n + nh) · hy(mh, nh)   (2)

with Gauss filter masks, e.g. of 3x3 size. Based on that, the gradient matrix in both directions, necessary to determine the edges, has been determined in accordance with the classic dependence:

LGXY(m, n) = sqrt(LGX(m, n)^2 + LGY(m, n)^2)   (3)

and in particular its normalised form, i.e.:

LG(m, n) = LGXY(m, n) / max_{m,n}(LGXY)   (4)
The image of the direction field Lα has been determined for each pair of pixels LGX(m, n) and LGY(m, n), and in general for the images LGX and LGY, i.e.:

Lα(m, n) = atan(LGY(m, n) / LGX(m, n))   (5)

These images, Lα and LG, are further used in the analysis, where the random selection of starting points is the next step.
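The quantities (1)-(5) can be computed, for instance, as below (Python with NumPy/SciPy; a sketch rather than the authors' implementation: ready-made Gaussian-derivative filters stand in for the explicit masks hx, hy, and arctan2 is used instead of atan to keep the full angle range).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_fields(lm, sigma=1.0):
    """LGX, LGY via Gaussian-derivative filtering, then LG (4) and L_alpha (5)."""
    lgx = gaussian_filter(lm, sigma, order=(0, 1))   # derivative along columns
    lgy = gaussian_filter(lm, sigma, order=(1, 0))   # derivative along rows
    lgxy = np.hypot(lgx, lgy)                        # gradient magnitude (3)
    lg = lgxy / lgxy.max()                           # normalised magnitude (4)
    l_alpha = np.arctan2(lgy, lgx)                   # direction field (5)
    return lg, l_alpha
```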
2 Starting Points Random Selection and Adjustment

Starting points, and - based on them - the next ones, will be used in consecutive stages of algorithm operation to determine parts of the layer contours. The initial positions of the starting points were determined at random. Random values were obtained from the uniform range (0, 1) for each point of the image matrix Lo with the resolution of the image LM, i.e. MxN. For the (random) image Lo created this way, digitisation is carried out with the threshold pr, which is the first and one of the matched (described later) parameters of the algorithm; the obtained binary matrix Lu is described by the relationship:

Lu(m, n) = 1 for LG(m, n) > LM(m, n) · pr, 0 for others   (6)

Starting points o∗i,j (where the index i marks the starting point, while j the subsequent points created on its basis) satisfy the condition Lu(m, n) = 1; that is, the starting points are o∗i,1. This way the selection of the threshold value pr within the range (0, 1) influences the number of starting points, which is the larger, the brighter the grey level (contour) in the LG image. In the next stage the starting points' positions are modified in the set area H of size MHxNH. The modification consists in the correction of the position of the points o∗i,1 with coordinates (m∗i,1, n∗i,1) to new coordinates (mi,1, ni,1), where shifts within the range mi,1 = m∗i,1 ± MH/2 and ni,1 = n∗i,1 ± NH/2 are possible. A change of coordinates occurs within the area of ±MH/2 and ±NH/2 in which the highest value of LG is achieved, i.e.:
LG(mi,1, ni,1) = max over (m∗i,1 ± MH/2, n∗i,1 ± NH/2) of LG(m∗i,1 ± MH/2, n∗i,1 ± NH/2)   (7)
Then the correction of repeating points is carried out - points of the same coordinates are removed.
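A rough sketch of the random selection (6), the local adjustment (7) and the removal of repeated points is given below (Python with NumPy; all names are illustrative). The exact roles of the random image Lo and of LM in (6) are not fully clear from the extracted text, so the binarisation line is only our reading: the brighter a contour pixel in LG, the more likely it is to seed a starting point.

```python
import numpy as np

def select_and_adjust(lg, pr=0.02, mh=5, nh=5, rng=None):
    """Random seeding of starting points on bright contours of LG, local shift of
    each point to the brightest pixel in its MH x NH window, and deduplication."""
    rng = np.random.default_rng() if rng is None else rng
    lo = rng.random(lg.shape)                       # random image Lo, uniform on (0, 1)
    lu = lg > lo * pr                               # our reading of the binarisation (6)
    points = set()
    for m, n in zip(*np.nonzero(lu)):
        m0, m1 = max(0, m - mh // 2), min(lg.shape[0], m + mh // 2 + 1)
        n0, n1 = max(0, n - nh // 2), min(lg.shape[1], n + nh // 2 + 1)
        dm, dn = np.unravel_index(np.argmax(lg[m0:m1, n0:n1]), (m1 - m0, n1 - n0))
        points.add((m0 + dm, n0 + dn))              # repeated points collapse in the set
    return sorted(points)
```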
3 Iterative Determination of Contour Components

To determine layers in an OCT image, contour components have been determined in the sense of contour parts subject to later modification and processing, in the following way. For each randomly selected point o∗i,1 of coordinates
(m∗i,1, n∗i,1), and then modified (in the sense of its position) to oi,1 of coordinates (mi,1, ni,1), an iterative process is carried out, consisting in looking for consecutive points oi,2, oi,3, oi,4, oi,5, etc. and local modification of their position (described in the previous section), starting from oi,1, in accordance with the relationship:

m∗i,j+1 = mi,j + Ai,j · sin(Lα(mi,j, ni,j))
n∗i,j+1 = ni,j + Ai,j · cos(Lα(mi,j, ni,j))   (8)

A pictorial diagram of the iterative process is shown in Fig. 1.

Fig. 1. Pictorial diagram of the iterative process of contour components determination

In the case of the described iterative process of contour components determination it is necessary to introduce a number of limitations (further parameters), comprising:

- jMAX - the maximum iterations number - a limitation aimed at the elimination of algorithm looping if points oi,j of different positions are determined each time and the contour takes the shape of e.g. a spiral.
- Stopping the iterative process if it is detected that mi,j = mi,j+1 and ni,j = ni,j+1. Such a situation happens most often if Ai,j is close to or higher than MH or NH. Like in the case of starting points random selection and correction, also here a situation may occur that after the correction mi,j = mi,j+1 and ni,j = ni,j+1.
- Stopping the iterative process if mi,j > MM or ni,j > NM, that is in the cases when the indicated point oi,j would be outside the image.
- Stopping the iterative process if |Lα(mi,j, ni,j) − Lα(mi,j+1, ni,j+1)| > Δα, where Δα is the next parameter, set for the acceptable contour curvature.

At this stage consecutive contour components for the set parameters are obtained. These parameters comprise:

- hx and hy - mask sizes ((1), (2)), strictly related to the image resolution and the size of the identified areas, taken for MMxNM = 864x1024 to MHxNH = 23x23,
- pr - the threshold responsible for the number of starting points (6), changed practically within the range 0 - 0.1,
- jMAX - the maximum acceptable iterations number, set arbitrarily at 100,
- Δα - angle range, set within the range 10 - 70°,
- MHxNH - the size of the correction area, a square area, changed within the range from MHxNH = 5x5 to MHxNH = 25x25,
- Ai,j - amplitude, constant for individual i, j, set at Ai,j = MH,
- Δα - the acceptable maximum change of angle between consecutive contour points, set within the range 10 - 70°.

For the artificial image presented in Fig. 2 an iterative process of contour determination was carried out taking pr = 0.1, Δα = 45°, MHxNH = 5x5. The obtained results are presented in Fig. 2.
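Relationship (8) together with the stopping criteria listed above can be sketched as follows (Python with NumPy; the parameter names follow the text, everything else, including the local correction step, is an illustrative reading rather than the authors' code).

```python
import numpy as np

def trace_component(start, lg, l_alpha, a=9, j_max=100,
                    delta_alpha=np.deg2rad(45), pu=2):
    """Follow one contour component from a starting point, as in (8).
    lg, l_alpha: gradient magnitude and direction field; a: the amplitude A_ij."""
    M, N = lg.shape
    pts = [start]
    for _ in range(j_max):                              # limit the number of iterations
        m, n = pts[-1]
        alpha = l_alpha[m, n]
        m2 = int(np.round(m + a * np.sin(alpha)))       # candidate next point (8)
        n2 = int(np.round(n + a * np.cos(alpha)))
        if not (0 <= m2 < M and 0 <= n2 < N):           # point would leave the image
            break
        # local correction: brightest pixel of LG within a small window
        m0, m1 = max(0, m2 - pu), min(M, m2 + pu + 1)
        n0, n1 = max(0, n2 - pu), min(N, n2 + pu + 1)
        dm, dn = np.unravel_index(np.argmax(lg[m0:m1, n0:n1]), (m1 - m0, n1 - n0))
        m2, n2 = m0 + dm, n0 + dn
        if (m2, n2) == (m, n):                          # no movement: stop
            break
        if abs(l_alpha[m2, n2] - alpha) > delta_alpha:  # too sharp a turn: stop
            break
        pts.append((m2, n2))
    return pts
```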
Fig. 2. Artificial input image with marked contour components
Fig. 3. Artificial input image with marked overlapping contour components - the number of overlapping points of the same coordinates is shown in pseudocolours
When analysing the results presented in Fig. 3 it should be noticed that the iterative process is stopped only when mi,j = mi,j+1 and ni,j = ni,j+1 (as mentioned before), that is, only if points oi,j and oi,j+1 have the same position. This condition does not apply, however, to points oi,j which have the same coordinates but for different i, that is, points originated at a specific iteration from various starting points. Easing this condition leads to the origination of overlapping contour components (Fig. 3), which will be analysed in the next sections.
4 Contours Determination from Their Components

As presented in Fig. 3 in the previous section, the iterative process carried out may lead to overlapping of points oi,j of the same coordinates (mi,j, ni,j) originating from various starting points. This property is used for the final determination of the layer contours in an OCT image. In the first stage the image LZ from Fig. 3 is digitised, i.e. the image that originated as follows:

LZ,j(m, n) = 1 if m = mi,j ∧ n = ni,j, 0 otherwise   (9)

for j = 1, 2, 3, . . . , and finally LZ(m, n):

LZ(m, n) = Σj LZ,j(m, n)   (10)

LZB(m, n) = LZ(m, n) > pb   (11)

where LZB is a binary image originating from digitising the image LZ with the threshold pb. The selection of the threshold pb is a key element for further analysis and correction of the generated contour. In a general case a situation may occur
where, despite the relatively low value of the threshold pr taken, a selected starting point oi,1 is situated outside the object's edge. Then the next iterations may 'connect' it (in the consecutive processes (9), (10), (11)) with the remaining part. In such a case a process of removing protruding branches should be carried out - like branch cutting in skeletonisation. In this case the situation is a bit easier - there are two possibilities of implementing this process: increasing the threshold value pb, or considering the brightness value LG(mi,j, ni,j).
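Equations (9)-(11) amount to counting, per pixel, how many traced components pass through it and keeping only the pixels crossed often enough. A minimal sketch (Python with NumPy; illustrative, and counting per component rather than per iteration index j):

```python
import numpy as np

def accumulate_components(components, shape, pb=2):
    """components: list of point lists returned by the tracing step.
    LZ counts overlapping contour points per pixel (10); LZB keeps
    the pixels crossed by more than pb components (11)."""
    lz = np.zeros(shape, dtype=int)
    for pts in components:
        for m, n in set(pts):            # each component contributes at most once per pixel
            lz[m, n] += 1
    return lz, lz > pb
```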
5 Setting the Threshold of the Contour Components Sum Image

On the one hand, selecting a high threshold pb when obtaining the image LZB leads to keeping those contour components for which the largest number of oi,j points overlapped for various i. On the other hand, contour discontinuities may occur. Therefore the second mentioned method of obtaining the final form of the contour, which consists of considering the values LG(mi,j, ni,j) for LZ(mi,j, ni,j) = 1 and higher, was selected. Assuming that two non-overlapping points o1,j and o2,j have been randomly selected, such that m1,j ≠ m2,j or n1,j ≠ n2,j, the values LM(m1,j, n1,j) and LM(m2,j, n2,j) were determined for consecutive j. Then a maximum value was determined for each sequence of oi,j points:

Om(i) = max_j(LM(mi,j, ni,j))   (12)
Then all oi,j points were removed which satisfied the condition LM(mi,j, ni,j) < Om(i) · pj, where pj is the threshold (precisely, the percentage of Om(i) below which points are removed). To prevent the introduction of discontinuities, only points at the beginning of the component contour are removed. The value was arbitrarily set to pj = 0.8. Correctly determined contour components, as well as other contour fragments which, because of the form of relationship (11) and the limitation for Om(i), have not been removed, are visible. On the other hand, however, the number and form of the available parameters allow pretty high freedom in selecting them so as to obtain the expected results. In most cases obtaining the intended contour shape is possible for one fixed MHxNH value. However, it may turn out necessary to use a hierarchic approach, in which the MHxNH size will be reduced, thanks to which a higher precision of the proposed method will be obtained and a weight (hierarchy) of the importance of individual contours will be introduced.
6 Properties of the Algorithm Proposed

The algorithm created is presented in the block diagram in Fig. 4. The assessment of the proposed algorithm's properties (Fig. 4) was carried out by evaluating the error δ in contour determination for changing parameters pr, Δα,
MHxNH, pb, pj within the ranges pr ∈ (0, 0.1), Δα, MHxNH ∈ (3, 35), pb, pj. An artificial image of a rectangular object located centrally in the scene (Fig. 4) has been used in the assessment. The error, in turn, was defined as follows:

δ = (1/j) · Σj (|mi,j − mw,j| + |ni,j − nw,j|)   (13)

δmin = min_j (|mi,j − mw,j| + |ni,j − nw,j|)   (14)

δmax = max_j (|mi,j − mw,j| + |ni,j − nw,j|)   (15)

assuming that only one point, i.e. i = 1, was randomly selected. The second part of the assessment concerns the points of discontinuity against the standard contour.

Fig. 4. Block diagram of the proposed contour detection algorithm (and hence of layers in an OCT eye image)

Fig. 6 shows the graph of the changes of the error δ values and its minimum δmin and maximum δmax values versus MMxNM changed in the range from 3 to 35. As may be seen, the values of the error δ fit within the range 0.5 - 0.7, which is a small value as compared with the error originating during the algorithm operation for broad changes of the other parameters. Fig. 7 shows the graph of the changes of the error δ values and its minimum δmin and maximum δmax values vs. pr. As results from (6), the change of the threshold pr value is directly connected with the number of selected points. For pr = 0.02 and higher values the number of randomly selected points is so large that it is possible to assume that, starting from this value, their number does not have a significant influence on the error δ value. Fig. 8 shows the graph of the changes of the error δ values and its minimum δmin and maximum δmax values vs. MHxNH. Both the choice of the points position correction area MHxNH and the amplitude Ai,j, which in practical application is constant for various i and j, are key elements affecting the error and thereby the precision of contour reconstruction. As may be seen from Fig. 7, the value of δ versus MHxNH is relatively large for Ai,j = const = 9 (for variable i and j), for which the computations were carried out. A strict relationship of the error δ values versus MHxNH and Ai,j is visible in Fig. 9, and of the maximum value δmax in Fig. 10. Based on it, the relationship between MH = NH and Ai,j may be determined, i.e. MH = NH ≈ 1.4 · Ai,j (in the graphs in Fig. 9 and Fig. 10, for the minimum error value it may be read e.g. MH = NH = 25 at Ai,j = 35). From Fig. 9 and Fig. 10 it may be noticed that high error values
Fig. 5. Artificial input image used for error assessment
Fig. 6. Graph of error δ values changes and its minimum δmin and maximum δmax value vs. MM xNM
Fig. 7. Graph of error δ values changes and its minimum δmin and maximum δmax value vs. pr
Fig. 8. Graph of error δ values changes and its minimum δmin and maximum δmax value vs. MH xNH
Fig. 9. Graph of error δ values changes versus MH xNH and Ai,j
Fig. 10. Graph of error δmax maximum value changes versus MH xNH and Ai,j
occur for small MH xNH values and high Ai,j . This results from the fact that the consecutive points oi,j+1 are separated from oi,j by Ai,j and their local position correction occurs within a small MH xNH range. At high Ai,j the
rounding originating in the computations of the Lα value - formula (5) - causes large deviations of the oi,j+1 points from the standard contour, which substantially affects the δ and δmax errors.
7 Summary

The method described gives correct results in contour determination (layer separation) both in OCT images and in others for which classical methods of contour determination do not give results or the results do not provide a continuous contour. The algorithm's drawbacks include a high influence of noise on the results obtained. This results from relationship (6), where pixels of pretty high value, resulting from a disturbance, increase the probability of selecting a starting point, and hence a component contour, in this place. The second drawback is the computation time, which is the longer, the larger the number of selected points and/or depending on the reason for which the search for the next points oi,j+1 was stopped (these are the limitations specified in section 4). Fig. 11 below presents the enlarged results obtained for an example OCT image. The algorithm presented may be further modified and parametrised, e.g. through changing Ai,j for various i and j according to the criterion suggested, or by considering weights of individual oi,j points and taking them into account as an iteration stopping condition, etc.

Fig. 11. Example of the final enlarged result obtained for a real OCT image for pr = 0.02, Δα = 45°, MHxNH = 35x35, pb = 2, pj = 0.8, Ai,j = 25
References

1. Koprowski, R., Wrobel, Z.: Analiza warstw na tomograficznym obrazie oka (Layer Analysis in a Tomographic Eye Image), Systemy Wspomagania Decyzji (Decision Support Systems), Zakopane (2007)
2. Koprowski, R., Wrobel, Z.: Rozpoznawanie warstw na tomograficznym obrazie oka w oparciu o detekcje krawedzi Canny (Layers Recognition in Tomographic Eye Image Based on Canny Edges Detection). Submitted for Conference on Information Technology in Biomedicine, Kamien Slaski (2008)
3. Koprowski, R., Wrobel, Z.: Layers Recognition in Tomographic Eye Image Based on Random Contour Analysis - sent to Mirage (2009)
4. Koprowski, R., Wrobel, Z.: Determining the contour of cylindrical biological objects using the directional field. In: Proceedings of the 5th International Conference on Computer Recognition Systems CORES 2007. Advances in Soft Computing (2007)
Recognition of Neoplastic Changes in Digital Images of Exfoliated Nuclei of Urinary Bladder – A New Approach to Classification Method

Annamonika Dulewicz 1, Adam Jóźwik 2, Pawel Jaszczak 1, and Boguslaw D. Piętka 1

1 Institute of Biocybernetics and Biomedical Engineering PAS, 02-109 Warsaw, ul. Ks. Trojdena 4, Poland
[email protected]
2 Technical University of Łódź, Computer Engineering Department, 90-924 Łódź, ul. Stefanowskiego 18/22
Summary. The aim of this study was to examine whether it is possible to recognize neoplastic changes in digital images of exfoliated nuclei of the urinary bladder with the help of pattern recognition methods. Nonsupervised classification based on the k-nearest neighbors rule (k-NN) was applied. The presence of neoplastic urothelial nuclei in organic fluid points to neoplastic changes. A computer-assisted system for identification of neoplastic urothelial nuclei was constructed [2]. The system analyzed Feulgen-stained cell nuclei obtained with the bladder washing technique, and the analysis was carried out by means of a digital image processing system designed by the authors. Features describing the nuclei population were defined and measured. Then a multistage classifier was constructed to identify positive and negative cases [3]. In this study we used the same features and tried to use the k-NN rule to classify the analyzed cases. At the beginning the training set was formed of 55 cases representing 19 healthy persons and 36 cancer patients, among them 17 diagnosed as having cancer of high grade malignancy and 19 as having cancer of low grade malignancy. Standard and parallel k-NN classifiers were analyzed. For both methods feature selection was performed and a total error rate was calculated. Then evaluation of both classifiers was carried out by the leave-one-out method. The evaluation of the examined classifying methods was completed on a set of 76 new cases in the testing set. The results for the standard k-NN classifier were: 64% specificity and 77% sensitivity for all the cancer cases. The results for the parallel k-NN classifier were: 57% specificity and 62.5% sensitivity. Then the final approach was carried out on the joined data, training and testing, which together made 131 cases. This number of cases became numerous enough for analyzing k-NN classifiers with 8 variables. Finally, the results for the standard k-NN classifier were: 74% specificity and 75% sensitivity for all the cancer cases. The results for the parallel k-NN classifier were: 89% specificity and 74% sensitivity. The results showed that the parallel k-NN classifier is sufficiently effective to be used in constructed computer-aided systems dedicated to aiding the detection of urinary bladder cancer.
1 Introduction

Pathology and cytology rely on images examined through a microscope, which requires a lot of specialists in clinics and analytical laboratories. The work of cytopathologists is an uphill, tiresome task and because of that it is sometimes faulty. The quality of the diagnostic process can be enhanced by digital techniques. Moreover, it can be performed on an enormous number of cases in screening examinations, which, besides providing facilities for easier and more objective diagnosis, enables the diagnosis of early neoplastic changes. Early diagnosis of neoplastic changes is of great value because it increases the chance for efficient therapy or prolongation of the patient's life; moreover, it would considerably lower the costs of treatment, which in the case of an advanced stage of invasive cancer is expensive, as it requires surgical treatment (extensive intervention of removal of the attacked organ together with lymph nodes), radiotherapy and chemotherapy. As we already mentioned, cytological visual examination of urine or bladder washings provides many false negative results, especially for cases at an early grade of malignancy, whereas histological examination is a painful invasive test often causing inflammation, dysuria and hematuria. Therefore, working out a computer system for recognizing neoplastic changes would considerably facilitate control examinations in clinics and analytical laboratories. Urinary bladder tumours originate from the lining and glandular epithelium of the urinary bladder wall or from other cells of the bladder wall. Tumour cells exfoliate into urine. We have tried, with success, to detect cancer nuclei from voided urine [3]. We worked out a computer-aided system for detecting urinary bladder cancer on the basis of image analysis of enlarged fragments of microscopic smears of Feulgen-stained urine bladder nucleated cells. It was a non-invasive method of diagnosis of urinary bladder cancer in an early stadium and during its progression or involution in the course of therapy. After that, we found out that examination of specimens obtained by the "bladder washing" technique seems to be quicker and more reliable, as we get specimens of much better quality. The new technique facilitated working out a quick and little-invasive method based on computer analysis of Feulgen-stained specimens. A computer-assisted system for identification of neoplastic urothelial nuclei was constructed [2], and evaluation of the system has shown a 68% correct classification rate in the control group, while an 86% rate was obtained among the cancer patients. The aim of this study was to examine whether it could be possible to recognize neoplastic changes in digital images of exfoliated nuclei of the urinary bladder with the help of the k-nearest neighbors algorithm (k-NN) with better results.
2 Materials and Methods

2.1 Materials
A total of 131 men and women were examined. Of those, 47 had no cancer and 28 of them were designated controls. The rest made up a control testing set.
Fig. 1. Examples of normal nuclei
Fig. 2. Examples of neoplastic nuclei
Among the 84 cancer patients, 45 were diagnosed by clinical and histopathologic criteria as having bladder cancer of high grade and 39 as having bladder cancer of low grade. Of all the cancer patients, 36 were used as the training set for building the classifiers and the rest of them made up the testing sets. The specimens were prepared and diagnosed in the Department of Urology at the University Hospital of Nijmegen and later by the Leiden Pathological Laboratory, Holland. Feulgen-stained specimens were observed under an optical microscope (10x). A CCD camera and frame-grabber were used to supply images to the computer system. The process of image scanning with SSU (Stage Scan Utility) and processing with DIPS (Digital Image Processing System) software worked out by the authors [3] was performed automatically. The DIPS software facilitates correction of the image background and normalization, which are essential conditions for correct comparison of objects derived from different images. Cells for analysis in our study were obtained with the bladder washing technique. Then the material was concentrated and specially prepared to prefix the cells. Next the cells were settled on glass slides, dried, post-fixed and stained (Feulgen staining). Feulgen staining was chosen in order to get in the specimens only stained nuclei, without visible cytoplasm. Neoplastic cells differ from normal cells as they have a different structure of nuclei, an increased ratio of nucleus to cytoplasm, bigger nuclei than in normal cells, irregular shapes of nuclei, nuclei usually containing more chromatin and having enlarged nucleoli, sometimes a bigger number of nucleoli, and more of them exfoliate. Examples of normal and neoplastic nuclei are presented in Fig. 1 and Fig. 2. One of the very important features concerning patients having cancer is a greater exfoliation of urothelial cells than with healthy people. We took this feature under consideration. Besides a different number of nuclei in specimens, we observed relatively big differences in the size of nuclei clusters and single nuclei. Analysis of nuclei size histograms of the control group and the high grade of malignancy group showed the biggest differentiation between those groups for object sizes of 27÷275 pixels, which corresponds to the range of 3.6 μm2 - 108 μm2.
Established cytological observations indicate an increased presence of granulocytes and of bigger nuclei in the course of the neoplastic process. We have investigated histograms of the nuclei size distribution in selected ranges. Four sub-ranges were selected in the histogram range and 8 parameters were defined as ratios of nuclei numbers in these sub-ranges, in order to differentiate between the control group and the groups of malignancy. The list of them is presented below:
• Feature R60, defined as the ratio of the number of nuclei in the range of 27-60 pixels to the number of nuclei in the range of 61-120 pixels,
• Feature R40, defined as the ratio of the number of nuclei in the range of 27-40 pixels to the number of nuclei in the range of 41-80 pixels,
• Feature R121/500, a measure of the proportion of the number of objects in the object-size interval of 121-500 pixels to the total number of objects in a specimen,
• Feature RB10S10, a measure of the ratio of the number of nuclei clusters nk to the global number of single nuclei and granulocytes in a specimen nj,
• Feature RO, defined as the ratio of the number of nuclei in the range of 27-60 pixels to the number of nuclei in the range of 27-40 pixels and in the range of 61-120 pixels,
• Feature X, a measure of the difference between low grade and high grade malignancy, defined as X = (number of nuclei in range a) / (number of nuclei in range b), where the ranges a and b concern nuclei sizes measured in pixels: a: 1-39 pixels, b: 40-80 pixels,
• Feature G, a measure of the proportion of the number of objects in the object-size interval of 27-29 pixels (granulocytes) to the total number of objects in a specimen,
• Feature C, the number of objects in the field of analysis.
Then a multistage classifier was constructed to identify cases. The final evaluation of the system showed a 68% correct classification rate in the control group, while an 86% rate was obtained among the cancer patients.
2.2 Methods
The aim of this study was to examine whether it is possible to recognize neoplastic changes in digital images of exfoliated nuclei of the urinary bladder with the help of the k-NN rule. In this study we used the same features and applied the k-NN rule to classify the analyzed cases. The training set consisted of 55 cases representing 19 healthy persons and 36 cancer patients, 17 of whom were diagnosed as having cancer of high grade malignancy and 19 as having cancer of low grade malignancy.
Standard and parallel k-NN classifiers were analyzed. For both methods feature selection was performed and the total error rate was calculated. Then the evaluation of both classifiers was carried out by the leave-one-out method and tested on the testing set. At the beginning, the relation between the 8 parameters used and the three classes (control, high grade and low grade) was investigated with the help of the Kruskal-Wallis test. The result is presented in Tab. 1.

Table 1. The results of the Kruskal-Wallis test

Feature name        R40     R60     R121/500  RO      RB10S10  X       G       C
Significance level  0.0013  0.0025  0.0036    0.0042  0.0089   0.0021  0.1986  0.0001
Seven of the 8 parameters proved to be statistically significant.

Classification on the Basis of the Standard k-NN Classifier

The first step of constructing the standard k-NN classifier was feature selection out of the 8 features and calculation of the total error. The training set consisted of 131 cases. The results are shown in Tab. 2.

Table 2. Feature selection, total error and the number of nearest neighbors for the standard k-NN classifier

Standard k-NN rule
Considered classes   Error rate   Opt. k-NN   Selected features
1, 2, 3              0.3511       10          3, 6, 7, 8
Total error rate: 0.3511          Joint number of features: 4
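The kind of selection summarized in Tab. 2 - an exhaustive review of feature subsets and values of k with the leave-one-out error as criterion - can be sketched as follows. This is only an illustration in Python with scikit-learn, not the authors' original code; the data arrays, the candidate k values and the use of accuracy-based error are our assumptions.

```python
# Sketch: exhaustive feature-subset and k selection for the standard k-NN rule,
# using the leave-one-out error rate as the selection criterion (illustrative only).
from itertools import combinations
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def select_knn_model(X, y, k_values=range(1, 16)):
    """X: cases x 8 features (R40, R60, R121/500, RO, RB10S10, X, G, C); y: class labels."""
    n_features = X.shape[1]
    best = (None, None, 1.0)                      # (feature subset, k, error rate)
    loo = LeaveOneOut()
    for r in range(1, n_features + 1):            # all 2^8 - 1 non-empty subsets
        for subset in combinations(range(n_features), r):
            for k in k_values:
                clf = KNeighborsClassifier(n_neighbors=k)
                acc = cross_val_score(clf, X[:, list(subset)], y, cv=loo).mean()
                if 1.0 - acc < best[2]:
                    best = (subset, k, 1.0 - acc)
    return best

# Example usage with random data standing in for the 131 cases:
# X = np.random.rand(131, 8); y = np.random.randint(1, 4, size=131)
# subset, k, err = select_knn_model(X, y)
```

The search is exhaustive and therefore slow, but with only 8 candidate features it remains feasible, which matches the statement that all combinations of features and k were reviewed.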
The selected features 3, 6, 7, 8 corresponded to the features R121/500, X, G and C. Then the number of correct classifications was calculated with the leave-one-out method for three classes:
• control – class 1,
• high grade – class 2,
• low grade – class 3.
The results are shown in Tab. 3. In the next step, the probabilities that an object from class i (row) would be assigned to class j (column) were calculated; the results are shown in Tab. 4.
Table 3. Number of objects from class i assigned to class j (leave-one-out)

Standard k-NN rule
True class   Assigned class
             1    2    3
1           35    9    3
2            7   32    6
3           14    7   18
Table 4. Probability that an object from class i would be assigned to class j

Standard k-NN rule
True class   Assigned class
             1        2        3
1            0.7447   0.1915   0.0638
2            0.1556   0.7111   0.1333
3            0.3590   0.1795   0.4615
Table 5. Probability that an object assigned to class i comes in fact from class j

Standard k-NN rule
Assigned class   True class
                 1        2        3
1                0.6250   0.1250   0.2500
2                0.1875   0.6667   0.1458
3                0.1111   0.2222   0.6667
Then the probabilities that an object assigned to class i comes in fact from class j were calculated. The results are shown in Tab. 5. The specificity of the standard classifier was 74.5% and the sensitivity calculated for all the cancer cases was 75.0%.

Classification on the Basis of the Parallel k-NN Classifier

The first step of constructing the parallel k-NN classifier was feature selection out of the same 8 features and calculation of the error rates. The training set consisted of the same 131 cases that were used for the standard classifier. The results are shown in Tab. 6. The selected features 1, 3, 6, 7, 8 corresponded to the following features: for the pair of classes 1 & 2: R40, R121/500, X, G, C; for the pair 1 & 3: R60, X, C; for the pair 2 & 3: R60, RO, G. Then the number of correct classifications was calculated with the leave-one-out method for three classes:
• control – class 1,
• high grade – class 2,
• low grade – class 3.
The results are shown in Tab. 7.

Table 6. Feature selection, error rates and the number of nearest neighbors for the parallel k-NN classifier

Parallel k-NN rule
Pair of classes   Error rate   Opt. k-NN   Selected features
1 & 2             0.1630       5           1, 3, 6, 7, 8
1 & 3             0.1977       6           2, 6, 8
2 & 3             0.2024       3           2, 4, 7
Total error rate: 0.2977       Joint number of features: 7
Table 7. Number of objects from class i assigned to class j (leave-one-out)

Parallel k-NN rule
True class   Assigned class
             1    2    3
1           42    5    0
2            8   32    5
3           14    7   18
In the next step, the probabilities that an object from class i (row) would be assigned to class j (column) were calculated; the results are shown in Tab. 8. Then the probabilities that an object assigned to class i comes in fact from class j were calculated; the results are shown in Tab. 9.

Table 8. Probability that an object from class i would be assigned to class j

Parallel k-NN rule
True class   Assigned class
             1        2        3
1            0.8936   0.1064   0.0000
2            0.1778   0.7111   0.1111
3            0.3590   0.1795   0.4615

Table 9. Probability that an object assigned to class i comes in fact from class j

Parallel k-NN rule
Assigned class   True class
                 1        2        3
1                0.6563   0.1250   0.2188
2                0.1136   0.7273   0.1591
3                0.0000   0.2174   0.7826
The specificity of the parallel classifier was 89.4% and the sensitivity calculated for all the cancer cases was 73.8%.
3 Results and Conclusions
A multi-class problem can be reduced to a set of two-decision tasks. One possible solution is the construction of a parallel net of two-decision classifiers, a separate classifier for each pair of classes, with the final decision formed by voting of these two-decision classifiers. Such an approach, applied to the k-NN rule, was first used in [4] and slightly modified in [5]. However, the modified version is more suitable for larger data sets since it is more flexible.
This parallel network of two-decision classifiers should offer better performance than the standard k-NN classifier. This follows from the geometrical interpretation of both discussed types of classifiers. In the case of the standard classifier, the boundary separating any pair of classes i and j depends also on the samples from the remaining classes. They influence the value of k and the selected features, and may act as noise. The parallel net may reduce this noise effect. By using the error rate estimated by the leave-one-out method as a criterion, we can find an optimum value of k for the k-NN rules and perform the feature selection separately for each of the component classifiers. The error rate found by the leave-one-out method was used as the feature selection criterion and as the criterion for determining the value of k, i.e. all possible combinations of the features and the values of k were reviewed. In the first step, evaluation of the classification quality on the training set of 55 cases by the leave-one-out method gave the following results: for the standard classifier, specificity 89% and sensitivity 69%; for the parallel classifier, specificity 94% and sensitivity 89%. The sensitivity was calculated for two classes: control/malignancy. In the second step, the evaluation of the examined classification methods was completed on a set of 76 new cases forming the testing set. The results for the standard k-NN classifier were 64% specificity and 77% sensitivity, both for high and low grade patients, and for the parallel k-NN classifier 57% specificity and 63% sensitivity. Then the final evaluation was carried out on the joined training and testing data, which together made 131 cases and was sufficiently large compared to the number of features. Finally, the results for the standard k-NN classifier were 75% specificity and 75.0% sensitivity for all the cancer cases. The results for the parallel k-NN classifier were 89% specificity and 74% sensitivity. The results show that the parallel k-NN classifier is good enough to be used in the construction of computer-aided systems dedicated to aiding the detection of urinary bladder cancer.
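The parallel scheme described above - one k-NN classifier per pair of classes, each with its own features and k, combined by voting - can be sketched as follows. The helper names, the per-pair configuration and the simple majority vote are our assumptions for illustration, not the authors' implementation.

```python
# Sketch: parallel net of pairwise k-NN classifiers with voting (illustrative only).
from collections import Counter
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def parallel_knn_predict(X_train, y_train, x, pair_config):
    """pair_config maps a class pair (i, j) to (selected feature indices, k)."""
    votes = []
    for (ci, cj), (features, k) in pair_config.items():
        mask = np.isin(y_train, [ci, cj])          # train only on the two classes of the pair
        clf = KNeighborsClassifier(n_neighbors=k)
        clf.fit(X_train[mask][:, features], y_train[mask])
        votes.append(clf.predict(x[features].reshape(1, -1))[0])
    return Counter(votes).most_common(1)[0][0]     # final decision by voting

# Hypothetical configuration mirroring Table 6 (0-based feature indices):
# pair_config = {(1, 2): ([0, 2, 5, 6, 7], 5), (1, 3): ([1, 5, 7], 6), (2, 3): ([1, 3, 6], 3)}
```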
References
1. Boon, M.E., Drijver, J.S.: Routine cytological staining techniques. Theoretical Background and Practice. Macmillan Education Ltd., London (1986) 2. Dulewicz, A., Piętka, D., Jaszczak, P., Nechay, A., Sawicki, W., Pykalo, R., Koźmińska, E., Borkowski, A.: Computer identification of neoplastic urothelial nuclei from the bladder. Analytical and Quantitative Cytology and Histology 23(5), 321–329 (2001) 3. Piętka, D., Dulewicz, A., Jaszczak, P.: Pathology explorer (PATHEX) - a computer-aided system for urinary bladder cancer detection. In: XIII Scientific Conference, Biocybernetics and Biomedical Engineering, Gdańsk, CD-ROM Proceedings, Session XII-2 (2003)
4. Jóźwik, A., Vernazza, G.: Recognition of leucocytes by a parallel k-NN classifiers, Warsaw. Lecture Notes of ICB Seminar, pp. 138–153 (1988) 5. Jóźwik, A., Serpico, S., Roli, F.: A parallel network of modified 1-NN and k-NN classifiers - application to remote-sensing image classification. Pattern Recognition Letters 19, 57–62 (1998) 6. Kurzyński, M.: Rozpoznawanie obrazów. Oficyna Wydawnicza Politechniki Wroclawskiej, Wroclaw, str. 12–45, 58–102, 143–216 (1997)
Dynamic Contour Detection of Heart Chambers in Ultrasound Images for Cardiac Diagnostics
Pawel Hoser
Institute of Biocybernetics and Biomedical Engineering PAS
[email protected]
Summary. The contour detection of the moving left ventricle is very useful in cardiac diagnosis. The subject of this paper is a contour detection method for moving objects in unclear series of biomedical images. The method is mainly dedicated to detecting heart chamber contours in ultrasound images. In such images the heart chambers are visible much better when they are viewed in motion. That is why a good automatic contour detection requires analysis of the whole series of images. The method is suitable for the analysis of heart ultrasound images. The contour detection method has been programmed and tested on series of ultrasound images, with the example of finding the left ventricle contours. The presented method has later been modified to be even more effective. The latest results seem quite interesting in some cases.
1 Introduction
Computer image processing and analysis are nowadays very important in medical diagnosis. Within the automatic analysis of biomedical images, the key role belongs to contouring and segmentation of different objects. This is necessary to expose the interesting parts of the image, like internal organs, tissues, cells and other important structures. Finding the outlines of the left ventricle in ultrasonic images is a good example. The analysis of the left ventricle contractility is highly important for cardiological diagnosis. Properly acquired contours of the ventricle enable determination of many hemodynamic parameters and even detection of cases of dyskinesis, hypokinesis or akinesis [1, 2, 3, 4]. This is very important since ultrasonic imaging is cheap, non-invasive and widely accessible. On the other hand, such images are hardly readable and very difficult for automatic computer analysis. In the case of clear images it is enough to apply the standard methods of edge and border detection [4]. For more difficult cases there are numerous more sophisticated methods. If the outline is broken and the picture is very unclear, it is worth incorporating so-called active contour methods [5, 6, 7]. Many complex methods based on this idea have been developed [8, 9, 10, 11, 12]. But even those methods become insufficient in some cases.
A situation when some objects are much easier to recognize if they are observed in motion rather than in each static image separately is not uncommon. Such a situation is faced when it is necessary to see something in darkness, in fog, behind bushes, or in many other similar circumstances. There are many sets of images that look completely unclear when viewed statically, while viewing them as a movie enables recognition of the objects moving within them. In the case of such images even the best methods of contouring static images are not able to fulfill this task. In biomedical imaging the intermediate situation, when some objects are partly visible but can be seen much better in motion, is encountered quite often. An example of such moving biomedical images are ultrasound images [4]. Ultrasonic images are mostly viewed dynamically, as particular static images are often unreadable. The objects are much easier to recognize when they are moving, since fragments of information from the consecutive images complement one another. This leads to the conclusion that an effective method of outline detection needs possibly the most complete analysis of the full set of images, so the problem is a space-time issue.
Fig. 1. Ultrasound images are very difficult for automatic computer analysis
Aiming to find the outlines of the ventricles within the ultrasound images, a specific method of moving contour detection has been designed and implemented. The method is based on fitting a certain surface, immersed in the 2D+1 space-time, to the moving contours in the full series of images.
2 A Method of Contour Detection of a Moving Object
The new method is applicable, in particular, to moving images that are very unclear, but in which the sought objects are much better visible in motion than in static views. Therefore, the problem of finding the moving contours has been formulated within the 2D+1 space-time. The images are two-dimensional and the third dimension is time. Assuming that the contour is a closed curve, the movement of the contour within the space-time is a certain surface, which normally resembles a bent tube. The contours of the moving object are found by fitting a certain surface within the 2D+1 space-time to the edges of the object in the series of images.
Fig. 2. A method of contour detection of moving object
The fitting process is a specific dynamic system. The surface is represented within the computer program by an adequate mesh of knots. The dynamics of the surface depends on its geometry and on its connection with the series of tested images. In the beginning, the method was adapted mainly for detection of heart ventricle outlines within series of ultrasound images. The whole process of contour detection is composed of four steps:
1. Establishing the moving center of the heart chamber in the 2D+1 space.
2. Construction of the starting configuration of the active surface.
3. The main process of fitting the surface to the visible edges of the moving chamber.
4. Final corrections of the settled contours in the whole series.
The first step is finding the center of movement of the object. In the case of heart ventricles in ultrasound images the situation is relatively comfortable, as the ventricles are dark inside and their edges are in many places visible as light spots. The center of the ventricle is found with the help of the method of the highest scalar product. In the beginning, all the images are averaged over time, and the common center of the ventricle is found for the whole series of images. So, the first approximation is based on a special image obtained by averaging all images in the whole series. The center is obtained as the maximum of the scalar product, given by the following formula:

ϕ(m, n) = Σ_{k=1}^{K} Σ_{j∈I} Σ_{i∈I} J_k(i, j) f(i − n, j − m)    (1)

where J_k(i, j) is the brightness of the k-th image at the point indicated by the indices i, j, and f is a function of two arguments which represents a general simple model of the chamber in the image. The position in the image, indicated by the numbers n, m, for which the function ϕ is the largest, is considered the common center of the moving ventricle. It is assumed here that the ventricle does not change its coordinates much within the tested series of images. The next step is setting the ventricle center within each image; in this way the trajectory of the ventricle center movement within the 2D+1 space-time is obtained. A similar calculation is applied in this step to each image separately.
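Maximizing formula (1) amounts to correlating the (time-averaged) image with a simple chamber template. The sketch below illustrates this in Python; the dark-disc template standing in for the chamber model f, its radius and the use of plain 2D correlation are our assumptions.

```python
# Sketch: locating the ventricle center by maximizing the scalar product (1)
# between the time-averaged image and a simple chamber template (illustrative only).
import numpy as np
from scipy.signal import correlate2d

def chamber_template(radius=20):
    """Simple model f: dark disc (negative weights) on a neutral background."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return np.where(x**2 + y**2 <= radius**2, -1.0, 0.0)

def find_center(image_series):
    """image_series: array (K, H, W) of grey-level frames J_k."""
    averaged = image_series.mean(axis=0)                 # averaging over time
    phi = correlate2d(averaged, chamber_template(), mode="same")
    m, n = np.unravel_index(np.argmax(phi), phi.shape)   # position maximizing phi
    return m, n
```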
Fig. 3. The moving center of the heart ventricle within the 2D+1 space
After setting the movement of the object's center, the main process of fitting the active surface to the moving outline, which is a discrete dynamic system, starts. This surface is described by a network of knots within the 2D+1 space. The position of each knot w(i,j) is set by three coordinates [x_ij, y_ij, t_j] within the 2D+1 space-time. The whole mesh representing the active surface possesses NM knots, where N is the number of knots in each contour and M is the number of images within the tested series. The dynamics of the surface depends on its geometry and on the connection with the series of tested images. The initial configuration of the active surface is built on the basis of the previously found movement of the center of the object. It is a thin tube which surrounds the curve of the object's center movement. The knots of the starting position of the active surface are given by a simple formula; they are established as points along a circle around the center of the chamber in each image:

w_x^{ij} = s_x^j + r_0 cos(2π(i + λ)/m)
w_y^{ij} = s_y^j + r_0 sin(2π(i + λ)/m)    (2)

where λ is equal to zero for j/2 natural and to 1/2 for j/2 not natural. In this way, the active surface of the dynamic system is created at the beginning of the fitting process. Next, the dynamic process of fitting starts, the tube expands, and the surface movement is described by a set of appropriate equations. When the surface stops, the final state is treated as the contour found.
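The starting configuration (2) can be generated directly from the per-frame centers. The following sketch shows one way to build the initial tube of knots; the number of knots per contour and the initial radius are placeholder values, not parameters from the paper.

```python
# Sketch: building the initial tube of knots around the trajectory of chamber
# centers, following formula (2) (illustrative only).
import numpy as np

def initial_surface(centers, n_knots=32, r0=5.0):
    """centers: list of (sx_j, sy_j) per frame; returns array (M, n_knots, 2) of knots."""
    mesh = np.zeros((len(centers), n_knots, 2))
    for j, (sx, sy) in enumerate(centers):
        lam = 0.0 if j % 2 == 0 else 0.5          # lambda alternates between frames
        angles = 2.0 * np.pi * (np.arange(n_knots) + lam) / n_knots
        mesh[j, :, 0] = sx + r0 * np.cos(angles)  # w_x coordinates
        mesh[j, :, 1] = sy + r0 * np.sin(angles)  # w_y coordinates
    return mesh
```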
Fig. 4. Stages of the process of fitting the active surface
The dynamics of the mesh should be understood as moving the positions of the knots in the proper direction. Normally, at the beginning of the fitting process all the knots are moved equally and the tube simply expands, but later only a part of the knots is moving, and this determines how the process goes. The positions of the knots are changed one by one in a certain order. The direction of the position change for a knot is described by the vector perpendicular to the surface at the knot point, calculated on the basis of the neighboring knots. Whether the position of the knot is to be changed or not is determined by specific conditions. At first the new position of the knot is computed and, based on it, a whole set of values is computed, depending also on the neighboring knots of the mesh and on the contents of the images. If any of these values exceeds the preset range, the next step for this knot is not executed and the knot is temporarily stopped:

G(i, j) < Gmax     J(i, j) < Jmax      rL(i, j) < hmax        rP(i, j) < hmax
d1(i, j) < dmax    d2(i, j) < dmax     d3(i, j) < dmax        d4(i, j) < dmax
D4(i, j) < Dmax    K*(i, j) < Kmax     K*(i − 1, j) > Kmin    K*(i + 1, j) > Kmin    (3)

where the quantities rL(i,j), rP(i,j) represent the distance from the new position of the knot w(i,j) to the neighboring knots within the same outline. The quantities d1(i,j), d2(i,j), d3(i,j), d4(i,j) represent the distances of the knot w(i,j) from the four nearest knots of the neighboring contours. The quantity K*(i,j) has been named the contour curvature for the knot with indices i, j, and the quantity D4(i,j) the time deviation of the mesh knot. J(i,j) denotes the brightness of the smoothed image with index j at the point indicated by the knot w(i,j). The quantity G(i,j) is the value of the directional derivative of the brightness function of the j-th image at the point determined by the position of the knot w(i,j), where the direction is determined by the vector normal to the surface. The most important and most interesting of these quantities are the outline curvature, the time deviation and the quantities J(i,j) and G(i,j), which are responsible for the connection of the dynamics of the active surface with the series of images. The mentioned quantities play the main role in stopping and releasing the knots during the process of fitting the active surface. The time deviation D4(i,j) is defined by the equations:

D4(i, j) = w_{i,j} − 1/4 (w_{i,j−1} + w_{i+1,j−1} + w_{i,j+1} + w_{i+1,j+1})   for j/2 ∉ N
D4(i, j) = w_{i,j} − 1/4 (w_{i−1,j−1} + w_{i,j−1} + w_{i−1,j+1} + w_{i,j+1})   for j/2 ∈ N    (4)

This expression may be treated as a convenient numerical counterpart of curvature for smooth curves of class C2. When checking whether the negative curvature is not too small, the value K* is computed for both neighboring knots in the same contour, not for the central knot. This arises from the fact that every move of a knot increases the curvature K* at its point and decreases the curvature K* of the neighboring knots, so only testing the condition against a too negative curvature makes sense. When all the conditions are fulfilled, the program executes the move of the knot position; otherwise the knot is temporarily stopped.
Fig. 5. The contour curvature and time deviation of the mesh knot. The most important parameters are the curvature indicator and time deviation value, defined above.
Checking the above conditions for the position change of every knot in the mesh enables control of the process of fitting the active surface. The parameters Kmin, Kmax, Dmax, hmax, rmax are established experimentally on the basis of typical parameters of real chamber contours in ultrasound images. Calculation of these parameters based on user corrections of contours is planned in the near future. They play an important role in this dynamic process as well. It may be said that the dynamics of the mesh representing the active surface consists mainly in properly stopping and releasing the knots. If this algorithm works effectively on a proper series of images, the mesh representing the active surface broadens until it stays at the outlines of the moving object within this series of images. All the knots of the mesh are then stopped, either as the result of meeting visible fragments of the contour or due to the surface geometry in the fragments where the contour is not visible clearly enough. When all the knots are stopped, the fitting process of the active surface is finished. The stopped surface should be treated as the problem solution, that is, as the found contour of the moving object within the 2D+1 space-time. Thus, the sections of the final state of the active surface, made with the planes containing the images of the tested series, are the found contours of the object in the particular images.
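The stop-and-release dynamics described above can be sketched as a simple sweep over the mesh. This is only a structural illustration; the constraint check and the normal-vector computation are abstracted into callbacks and are not the original implementation.

```python
# Sketch of the knot-update loop: each knot moves along its local surface normal
# only while all geometric and image conditions (3) hold for its new position;
# otherwise the knot stays stopped for this sweep (conditions are re-checked later).
import numpy as np

def fit_surface(mesh, constraints_ok, normal, step=1.0, max_iter=500):
    """mesh: (M, N, 2) knot positions; constraints_ok(mesh, j, i, new_pos) -> bool;
    normal(mesh, j, i) -> unit vector perpendicular to the surface at knot (i, j)."""
    for _ in range(max_iter):
        moved = False
        for j in range(mesh.shape[0]):              # frame index
            for i in range(mesh.shape[1]):          # knot index within the contour
                new_pos = mesh[j, i] + step * normal(mesh, j, i)
                if constraints_ok(mesh, j, i, new_pos):   # conditions (3)
                    mesh[j, i] = new_pos
                    moved = True
        if not moved:                               # all knots stopped: contour found
            break
    return mesh
```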
3 The Results of the Method Activity with Heart Ultrasonic Image Series
The method has been implemented in the form of an appropriate computer program and tested on tens of series of ultrasonic heart images. In the beginning, the method was designed mainly to analyze series of ultrasonic heart images, and it was applied to contouring the left ventricle. The results turned out better than expected. The center of the left ventricle was nearly always found properly. In most images (about 88%) the contours of the left ventricle were described properly. In each series of images at least the bigger part of the images was elaborated properly by the program, and there were image series in which all the contours were indicated properly. This examination shows that it was worth the effort to add the third dimension, connected with time. It has been shown that analyzing the ultrasonic images one by one provides much worse results.
Fig. 6. The contour detection of the left ventricle on the ultrasonic images
It even happened that the contour of the ventricle was indicated properly in a completely unreadable image. This arises from the fact that the program uses fragments of information from the neighboring images, since the contouring is a space-time process. Anyhow, the method cannot be said to be unfailing. There are images with the contour of the left ventricle indicated improperly, around 12% of all images. In most cases when the contour was indicated improperly by the program, the images were extremely difficult for automatic analysis. In such cases, in the diastole phase of the ventricle, the contour was strongly fragmented and there were large areas where it was totally invisible; in the contraction phase, the whole area around the ventricle was nearly uniformly smeared. The method is still being developed and many improvements are planned, which may make the results better. The algorithm is simple and works really fast, so the whole program may work fast. The most time consuming part is the preliminary image processing. Additionally, the algorithm may easily be programmed in a parallel way, which should much improve its speed with the multicore processors applied in the near future. The program works fully automatically, without the user's supervision, and the position of the ultrasonic probe is arbitrary. No information about the global shape and dynamics of the ventricle is needed, so implementation of such additional knowledge may improve the effects of the method. In the situation where the contour is not always indicated properly, a program modification may be needed so that the user may enter the final corrections.
4 Conclusion
A special method of contour detection of unclear moving objects has been created and programmed. The method is based on dynamic fitting of an active surface within the 2D+1 space-time. It is suitable, in particular, for cardio-ventriculographic analysis within series of ultrasonic images. In most cases the program indicates the left ventricle contours properly, but there are still some cases in which the algorithm fails. This makes it necessary to include the possibility of entering changes by the user. It is also planned to incorporate a number of improvements, modifications and additional programs, aiming to strongly improve the effectiveness and generality of the method.
In the nearest future, more advanced tools from differential geometry [5], concerning the curvature of curves and surfaces, should be introduced into the process of dynamic fitting of the active surface. We hope the method may also be incorporated, after certain necessary modifications, into many other biomedical imaging techniques. An attempt to create a method for reconstruction of the moving surfaces of the heart ventricles within the 3D+1 space-time, working in a similar way on the basis of series of three-dimensional images, is considered. The program would be capable of cooperating with much bigger expert systems, mainly in the area of contour detection. The new method is important not only within biomedical solutions, but it can also find more generic applications. The contouring of moving objects is a significant part of automatic image processing and analysis. It is also planned to design and implement a method of simultaneous detection of all the contours of the cardiac chambers visible in the tested frames. The objects to be tested are both chambers, both auricles, the cross-section of the aorta and the thickness of the inter-chamber walls. Such an approach may highly improve the program performance, as the information about the cardiac chambers is correlated, that is, basing on the view of one chamber it is possible to say more about the shape and location of the other one, visible in the same frame. Such a solution would have a much higher diagnostic value.
References 1. Kulikowski, J.L., Przytulska, M., Wierzbicka, D.: Left Heart Ventricle Contractility Assessment Based on Kinetic Model. In: ESEM 1999, Barcelona, pp. 447–448 (1999) 2. Przytulska, M., Kulikowski, J.L.: Left Cardiac Ventricle’s Contractility Based on Spectral Analysis of Ultrasound Imaging. Biocybernetics and Biomedical Engineering 27(4), 17–28 (2007) 3. Hoser, P.: A Mathematical Model of the Left Ventricle Surface and a Program for Visualization and Analysis of Cardiac Ventricle Functioning. Task Quarterly 8(2), 249–257 (2004) 4. Feigenbaum, H.: Echocardiography, 5th edn. Lea & Febiger, Philadelphia (1994) 5. Goetz, A.: Introduction to differential geometry of curve and surfaces. PrenticeHall, Englewood Cliffs (1970) 6. Mlsna, P.A., Rodriguez, J.J.: Gradient and Laplacian-Type Edge Detection. In: Bovik, A. (ed.) Handbook of Image and Video Processing, San Diego. Academic Press Series in Communications, pp. 415–432 (2000) 7. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active contour models. International Journal of Computer Vision 1, 321–331 (1988) 8. Caselles, V., Kimmel, R., Sapiro, G.: Geodesic active contours. International Journal of Computer Vision 22, 61–79 (1997) 9. Cohen, L.D.: Active contour models and balloons. Computer Graphics and Image Processing: Image Understanding 53(2), 211–218 (1991)
10. Cohen, L.D., Kimmel, R.: Global minimum for active contour models: A minimal path approach. International Journal of Computer Vision 24, 57–78 (1997) 11. Chan, T.F., Vese, L.A.: An active contour model without edges. IEEE Transactions on Image Processing 10, 266–277 (2001) 12. Fejes, S., Rosenfeld, A.: Discrete active models and applications. Pattern Recognition 30, 817–835 (1997) 13. Ji, L.L., Yan, H.: Attractable snakes based on the greedy algorithm for contour extraction. Pattern Recognition 35, 791–806 (2002) 14. Mclnerney, T., Trerzopoulos, D.: Topologically adaptable snakes. In: Proc. IEEE Conf. Computer Vision, ICCV 1995, pp. 840–845 (1995) 15. Niessen, W.J., Romeny, B.M.T., Viergever, M.A.: Geodesic deformable models for medical image analysis. IEEE Transactions on Medical Imaging 17, 634–641 (1998) 16. Xu, C.Y., Prince, J.L.: Snakes, shapes, and gradient vector flow. IEEE Transactions on Image Processing 7, 359–369 (1998) 17. Blake, A., Curwen, R., Zisserman, A.: A framework for spatio-temporal control in the tracking of visual contour. Int. J. Comput. Vis. 11(2), 127–145 (1993)
Capillary Blood Vessel Tracking Using Polar Coordinates Based Model Identification
Mariusz Paradowski 1, Halina Kwasnicka 1, and Krzysztof Borysewicz 2
1 Institute of Informatics, Wroclaw University of Technology
2 Department of Rheumatology and Internal Diseases, Wroclaw Medical University
Summary. Capillaroscopy is one of the best medical diagnostic tools for early detection of scleroderma spectrum disorders. The diagnostic process is based on capillary (small blood vessel) study using a microscope. A key step in capillaroscopy diagnosis is the extraction of capillaries. The paper presents a novel semi-automatic method of capillary vessel tracking, formulated as a non-directional graph creation method. Selection of neighboring vertex locations is its key component. It is performed by model identification. Four capillary model classes are proposed, all using data represented in polar coordinates.
1 Introduction
Capillaroscopy is a valuable study method for early microcirculation morphofunctional abnormalities. It is indicated in all patients where microcirculation involvement is expected, e.g. in rheumatology, dermatology, angiology, phlebology, vascular surgery and plastic surgery. Despite the increasing interest in capillary microscopy there is still a surprising discrepancy between its potential applications and its still limited use in routine medical practice. In some medical disorders, like connective tissue diseases, local vascular changes can be studied by capillary microscopy of the nailfold. Vascular abnormalities appear earlier in the course of the disease than at other sites of the finger skin. The morphology of skin capillaries can be studied directly with an ordinary light microscope [1]. Characteristic structural abnormalities are found to a much greater extent in connective tissue diseases such as systemic sclerosis, overlap syndrome, mixed connective tissue disease and dermatomyositis [3, 2]. Computer-aided capillaroscopy diagnosis is a new research and application idea. One of the key components of such an approach is automated nail-fold capillary extraction. The input of the process is a capillary image; the output is a set of capillary skeletons. There are many approaches to blood vessel extraction, including filter based [4], vessel tracking [5] and model based [6] methods. However, direct application of those methods does not give satisfactory results due to specific features of capillaries and capillary images. Capillary segmentation research has been performed [7], however, on other kinds of capillaries, and application of the presented solution to nail-fold capillaries is not possible. Our research is motivated by the large difficulties in applying various pattern recognition algorithms to nail-fold capillary images acquired using a microscope.
We have studied various kinds of filtering approaches, including top-hat filters [8] enhanced with a multiscale approach [9]; the results were good [10] for very high quality images acquired using specialized, very expensive medical hardware. Such hardware is not available for daily use, and therefore the presented approach is designed to support semi-automated capillary extraction using a stereomicroscope with an integrated photo camera. We propose a blood vessel tracking method where the first point (seed) is located manually. The method seeks neighboring vessel points until the whole vessel is covered. The key problem in such a method is proper selection of the neighboring points. We formulate it as a model identification problem and propose a set of capillary model classes defined in polar coordinates. Proper usage of polar coordinates gives a range of advantages, including rotation invariance. Polar coordinates are used in a range of vessel detection methods, among others for retrieval of initial retinal vessel points [11] and for determining whether a point is a vessel center [12]. The proposed method can be classified [13] as both a model-based and a tracking-based approach.
2 Blood Vessel Tracking
The addressed problem is a blood vessel extraction problem. Blood vessel tracking is formulated as a graph construction problem, and the generation of single graph vertexes and edges is formulated as a model identification problem. We assume that a set of seed points S is given for a considered image. Each seed point is defined by its (x, y) coordinates and has to be placed inside the capillary area. One seed point per capillary is enough, but it is possible to give more than one. Seed points are given manually and are not discussed in the paper. The generated vessel is represented as a non-directional graph G = (V, E). Points inside the tracked vessel are represented as graph vertexes v ∈ V; edges e ∈ E connect the vertexes. Due to possible branches and joins, we construct the graph using a breadth-first search algorithm. The initial graph G0 contains all seed points and no edges, G0 = (S, ∅). All seed points are inserted into a processing FIFO queue Q. For each point p ∈ Q model identification is performed. Details of the model identification are presented in the next section. If a proper model M is found, a set of neighbor vertexes N is generated. For each found neighbor vertex n ∈ N, the Euclidean distance to all vertexes in graph G is calculated. If the distance is greater than the acceptance distance εa, then a new vertex and a new edge are created, and the newly created vertex n is added to the queue Q. If the distance is smaller than the linking distance εl (εl < εa), then a new edge is created between point p and the vertex nearest to n. A new edge is created only if it does not introduce a cycle of length 3 in G. The last step is removal of short, terminated branches of length 1. As a result the method returns the graph
G representing the blood vessel. The presented method is formalized using Algorithm 1.
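A compact sketch of the tracking loop described above is given below. The model-identification step is abstracted into a callback, and the helper names, data layout and the simple nearest-vertex search are our assumptions for illustration, not the authors' Algorithm 1 verbatim; the cycle-of-3 check and branch pruning are omitted for brevity.

```python
# Sketch: breadth-first growth of the vessel graph from the seed points;
# identify_neighbors() stands for the model-identification step.
from collections import deque
import math

def track_vessel(seeds, identify_neighbors, eps_a, eps_l):
    vertices = list(seeds)                 # graph G = (V, E)
    edges = []
    queue = deque(range(len(vertices)))    # FIFO queue of vertex indices
    while queue:
        p = queue.popleft()
        for n in identify_neighbors(vertices[p]):        # neighbors proposed by the model
            dists = [math.dist(n, v) for v in vertices]
            j = min(range(len(vertices)), key=dists.__getitem__)
            if dists[j] > eps_a:                         # far from existing vertices: grow
                vertices.append(n)
                edges.append((p, len(vertices) - 1))
                queue.append(len(vertices) - 1)
            elif dists[j] < eps_l and j != p:            # close to an existing vertex: link
                edges.append((p, j))
    return vertices, edges
```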
2.1 Neighbor Selection as a Model Identification
Neighbor selection is one of the most important parts of Algorithm 1. It is responsible for the correct traversal along the analyzed blood vessel. If neighbor selection fails, the whole vessel tracking method may return incorrect blood vessel skeletons. The neighbor selection problem may be formulated as a model identification problem: given a set of model classes, the method should find the best model. The data consists of the visual neighborhood of the processed point p = (xp, yp). The visual neighborhood is defined as a set of pixels for which the Euclidean distance to p is in a given range. To represent the neighborhood as a matrix, polar coordinates are employed. Polar coordinates allow easier processing of the pixel neighborhood and offer rotation invariance. We look for the angle at which the 'probability' of blood vessel presence is the smallest; this angle is taken as zero (α = 0). To find this angle, the pixel values over all possible radii are averaged for every angle, and the angle for which this value is maximal is selected. Another issue is the selection of the minimum radius r = r0. According to the model class design, the minimum radius should be equal to the blood vessel thickness. It has to be determined before the identification data is generated, which is performed using a simple, median-based heuristic. In the Euclidean space the data is seen as a circle with its center removed.
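The polar mapping described above can be sketched directly. The nearest-pixel sampling and the 1-degree angular step are our assumptions; the choice of the start angle follows the text (the direction of maximal average brightness, i.e. least likely to contain vessel).

```python
# Sketch: mapping the visual neighborhood of point p into polar coordinates,
# producing the 360 x d matrix R, with the start angle set where the mean
# brightness over the radii is maximal.
import numpy as np

def polar_neighborhood(image, p, r0, d):
    xp, yp = p
    angles = np.deg2rad(np.arange(360))
    radii = np.arange(r0, r0 + d)
    R = np.zeros((360, d))
    for a, ang in enumerate(angles):
        xs = np.clip((xp + radii * np.cos(ang)).astype(int), 0, image.shape[1] - 1)
        ys = np.clip((yp + radii * np.sin(ang)).astype(int), 0, image.shape[0] - 1)
        R[a] = image[ys, xs]                       # sample pixels along this radius
    shift = int(np.argmax(R.mean(axis=1)))         # angle with maximal mean brightness
    return np.roll(R, -shift, axis=0)              # rotate so that this angle is alpha = 0
```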
2.2 Evaluation of a Model and Its Parameters
A blood vessel neighborhood is mapped into polar coordinates with the given start angle. The space is represented in the form of a matrix R with dimensions 360 × d, where d is the presumed neighborhood size. To detect which neighboring pixels belong to a blood vessel, a threshold τ is introduced. All
neighboring points (αn, rn) with values higher than the threshold τ are detected as non-blood vessel (R(αn, rn) > τ). Other points are classified as blood vessel (R(αn, rn) ≤ τ). The threshold value is selected dynamically and is equal to the median value of all points inside the neighborhood. As a result, a new discretized matrix D is created; it is the input data to the model identification procedure. Usage of the discretized matrix D allows reducing the complexity of model evaluation from O(αd) to O(d) or even O(1). Each defined model Mi is evaluated; its quality is the difference s(Mi, D) between the model Mi and the data D (eq. 1). The model with the lowest difference is the best one. These differences (for every radius r) are remapped back into the Euclidean coordinate space and normalized:

s(M, D) = (1 / (360 Σ_{r=r0}^{d+r0} r)) Σ_{α=0}^{359} Σ_{r=r0}^{d+r0} r (M(α, r) − D(α, r))².    (1)
If a vessel is very noisy or barely visible, s(M, D) can be high. This means that none of the considered models approximates the data correctly and all of them should be rejected. If s(M, D) is larger than a threshold θ (given as a parameter), no model is returned and no neighboring vertexes are generated.
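The median-based discretization and the radius-weighted difference (1) translate into a few lines of code. The sketch below follows our reconstruction of formula (1); the model matrix M is any 0/1 mask of the same shape as D, e.g. produced by one of the model classes defined in the next subsection.

```python
# Sketch: thresholding the polar matrix R at its median to obtain the binary data
# matrix D, and computing the radius-weighted difference s(M, D) of formula (1).
import numpy as np

def discretize(R):
    tau = np.median(R)                         # dynamic threshold
    return (R <= tau).astype(float)            # 1 = blood vessel, 0 = background

def model_difference(M, D, r0):
    d = D.shape[1]
    r = np.arange(r0, r0 + d, dtype=float)     # radius associated with each column
    weighted = (r * (M - D) ** 2).sum()        # sum over alpha and r of r*(M - D)^2
    return weighted / (360.0 * r.sum())

# A model is rejected when model_difference(...) exceeds the threshold theta.
```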
2.3 Defined Model Classes
We have defined four model classes representing capillary shapes. Generation of the neighboring points is model-class specific; however, all generated neighbors have rδ = r0 + rΔ (rΔ is a method parameter). If an incorrect model class is chosen, too many or too few neighboring points will be generated. In all presented models a single curve equation is used for all calculations of the blood vessel boundary. Having two points (α1, r0) and (α2, d + r0) in polar coordinates, intermediate points are calculated as follows: f(α1, α2, r) = α1 (1 − sin(0.5π(r − r0)/d)) + α2 sin(0.5π(r − r0)/d).

One Direction Curve Model Class

The one direction curve model is the simplest class. It is best fitted to describe capillary termination points. It has two parameters, namely the blood vessel start angle α1 and the blood vessel end angle α2, where 0 ≤ α1 < α2 < 2π. The model Mo(α, r) is defined as follows and presented in Fig. 1:

Mo(α, r) = 1 if α ≥ f(0, α1, r) ∧ α ≤ f(α2, 2π, r); 0 otherwise.    (2)

Because it has only two parameters, model identification is very fast, with computational complexity equal to O(α²d). According to the model class assumptions, it generates only one neighboring vertex, with coordinates Vα1 = 0.5(f(0, α1, rδ) + f(α2, 2π, rδ)).
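The boundary interpolation f and the one-direction-curve mask (2) can be written directly against the 360 × d polar representation. The sketch below works in radians with a 1-degree angular step; these discretization choices are our assumptions.

```python
# Sketch: boundary interpolation f(alpha1, alpha2, r) and the one direction
# curve model mask Mo(alpha, r) from formula (2).
import numpy as np

def f(a1, a2, r, r0, d):
    t = np.sin(0.5 * np.pi * (r - r0) / d)
    return a1 * (1.0 - t) + a2 * t

def one_direction_mask(a1, a2, r0, d):
    alphas = np.deg2rad(np.arange(360))[:, None]            # shape (360, 1)
    radii = np.arange(r0, r0 + d, dtype=float)[None, :]     # shape (1, d)
    lower = f(0.0, a1, radii, r0, d)                        # lower boundary per radius
    upper = f(a2, 2.0 * np.pi, radii, r0, d)                # upper boundary per radius
    return ((alphas >= lower) & (alphas <= upper)).astype(float)
```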
Fig. 1. One direction curve and Straight half-lines model classes
Straight Half-Lines Model Class

The straight half-lines model class is well suited for modeling straight vessels or sharp curves. The class has the following four parameters: α1 represents the angle of the first blood vessel half-line direction, α2 represents the angle of the second blood vessel half-line direction, δ1 represents the thickness of the polar blood vessel representation for r = r0, and δ2 represents the thickness of the polar blood vessel representation for r = d + r0. If α2 − α1 ≈ π then the vessel is a straight line, otherwise it has a sharp turn. The class is defined as follows (see Fig. 1):

Ms(α, r) = 1 if α ≥ f(α1 − δ1, α1 − δ2, r) ∧ α ≤ f(α1 + δ1, α1 + δ2, r);
           1 if α ≥ f(α2 − δ1, α2 − δ2, r) ∧ α ≤ f(α2 + δ1, α2 + δ2, r);
           0 otherwise,    (3)

where α1 < α2, δ1 > δ2, α1 − δ1 ≥ 0, α2 + δ1 < 2π. According to the assumptions behind the model class, it generates two neighboring vertexes, with coordinates Vα1 = f(α1, rδ) and Vα2 = f(α2, rδ). It has four parameters and is slower to identify than the first proposed model, with computational complexity equal to O(α⁴d).

Two Directions Curve Model Class

The third presented class is called the two directions curve. It is best fitted to approximate curved blood vessels: capillary loops and smooth, longer capillary curves are correctly modeled. The class has five parameters: α1 – the angle at which the vessel half-curves are separated, α2 and α3 – the parameters of the first half-curve, α4 and α5 – the parameters of the second half-curve. The model is defined as follows and presented in Fig. 2:

Mt(α, r) = 1 if α ≥ f(0, α2, r) ∧ α ≤ f(α3, α1, r);
           1 if α ≥ f(α1, α4, r) ∧ α ≤ f(α5, 2π, r);
           0 otherwise,    (4)

where 0 ≤ α1 < 2π and 0 ≤ α2 < α3 < α4 < α5 < 2π. It has five parameters, but the identification process is fast because of the O(1) complexity of model verification. The total computational complexity is equal
Fig. 2. Two directions curve and Three directional branch model classes
to O(α⁵). According to the model class assumptions, it generates two neighboring vertexes, with coordinates Vα1 = 0.5(f(0, α2, rδ) + f(α3, α1, rδ)) and Vα2 = 0.5(f(α1, α4, rδ) + f(α5, 2π, rδ)).

Three Directional Branch Model Class

The last model class is the three directional branch. Its main task is to approximate vessel branches. The class has a total of six parameters, all representing angles of the top curve points (r = d + r0): α1 and α2 are responsible for the first half-line, α3 and α4 for the second one, and α5 and α6 for the last one. The model is defined as follows:

Mb(α, r) = 1 if α ≥ f(0, α1, r) ∧ α ≤ f(α2, π, r);
           1 if α ≥ f(0.5π, α3, r) ∧ α ≤ f(α4, 1.5π, r);
           1 if α ≥ f(π, α5, r) ∧ α ≤ f(α6, 2π, r);
           0 otherwise,    (5)

where 0 ≤ α1 < α2 < α3 < α4 < α5 < α6 < 2π. The model is presented in Fig. 2. It has computational complexity equal to O(α⁶d) and is very hard to identify. According to the assumptions behind the model class, it generates three neighboring vertexes, with coordinates Vα1 = 0.5(f(0, α1, rδ) + f(α2, π, rδ)), Vα2 = 0.5(f(0.5π, α3, rδ) + f(α4, 1.5π, rδ)) and Vα3 = 0.5(f(π, α5, rδ) + f(α6, 2π, rδ)).
3 Method Verification
We have performed a number of experiments to check how well the proposed method estimates capillaries. We have used a set of capillary images acquired under various lighting conditions, microscope zoom settings, and with various artifacts. Each image comes from a different patient. Manually segmented capillaries are the ground truth for the considered images. Seed points are defined on the segmented capillaries. We would like to check the method's behavior in the case when more than a single seed point is selected on a single capillary; therefore, some capillaries have more than one seed point. We have decided to present the results in the form of tracking successes and failures. A numerical estimation of the results quality is very difficult and, in our opinion, it does not reveal the strengths and weaknesses of the proposed method.
Fig. 3. Correct identification of three model classes and successful segmentation
Fig. 4. Incorrect model approximations and resulting segmentation
At the beginning we present examples of correctly tracked capillaries together with their model estimates; next, a series of failures, also together with model estimates. All experiments were performed under the same parameter setup: d = 30, rΔ = 15, εa = 0.75 rδ, εl = 0.5 rδ and θ = 0.35. Fig. 3 presents various exemplary data, identified models and segmentation results. In all presented cases, the model classes are correctly selected and their parameters are well estimated. Correct identification results in correct neighborhood selection. Additionally, successful model identification very often leads to successful segmentation. This means that both the input data of the identification process and the models are correctly designed. Apart from the presented successes, there are also blood vessels which are tracked incorrectly, see Fig. 4. In all these cases, an incorrect model is selected and, as a result, neighbors are generated incorrectly. The method tends to track vessels incorrectly if two parallel blood vessels are very close to each other.
4 Summary
Extraction of blood vessels from capillary images is a new research topic. It allows introducing automation into a manually performed medical procedure. We present a new approach to capillary tracking. The method is semi-automatic and requires manual seed generation. It employs graph construction and analysis. Selection of the neighboring points is performed by model identification. Four capillary model classes are defined; they represent possible scenarios in capillary images. We use polar coordinates in the model identification task. Neighboring points are generated according to the selected model. The proposed approach is verified on various kinds of real capillary
images. The detected capillary skeletons are compared with the given ground truth. Successfully and unsuccessfully tracked capillaries are presented. Further research should be focused on automation of seed point generation; initial results show that a series of adaptive image filters can be used. The next important issue is verification of automatic calculation of the method parameters. A promising direction seems to be using the center of gravity of the modeled capillary to calculate the distance (rΔ) between newly generated vertexes and the vertex from which they are created. Acknowledgements. This work is financed from the Polish Ministry of Science and Higher Education resources in the years 2007-2009 as research project N518 020 32/1454.
References 1. Mahler, F., Saner, H., Bose, C.: Local cold exposure test for capillaroscopic examination of patients with Raynaud’s syndrome. Microvascular Research 33, 422–427 (1987) 2. Maricq, M.R.: Raynaud’s phenomen and microvascular abnormalities in scleroderma (systemic sclerisis). Systemic sclerosis, 151–166 (1998) 3. Houtman, P.M., Kallenberg, C.G.M., Fidler, V., et al.: Diagnostic significance of nailfold capillary patterns in patients with Raynaud’s phenomen. Journal of Rheumatology 13, 556–563 (1986) 4. Hoover, A., Kouznetsova, V., Goldbaum, M.: Locating Blood Vessels in Retinal Images by Piece-wise Threshold Probing of a Matched Filter Response. IEEE Transactions on Medical Imaging (2000) 5. Jiang, Y., Bainbridge-Smith, A., Morris, A.B.: Blood Vessel Tracking in Retinal Images. In: Proc. of IVC New Zealand 2007, pp. 126–131 (2007) 6. Boldak, C., Rolland, Y., Toumoulin, C.: An Improved Model-Based Vessel Tracking Algorithm with Application to Computed Tomography Angiography. Biocybernetics And Biomedical Engineering 23(1), 41–63 (2003) 7. Sainthillier, J.-M., Gharbi, T., Muret, P., Humbert, P.: Skin capillary network recognition and analysis by means of neural algorithms. Skin Research and Technology 11, 9–16 (2005) 8. Condurache, A.P., Aach, T.: Vessel Segmentation in Angiograms using Hysteresis Thresholding. In: Proc. of 9th IAPR, pp. 269–272 (2005) 9. Frangi, A.F., Niessen, W.J., Vincken, K.L., Viergever, M.A.: Multiscale vessel enhancement filtering (1998) 10. Kwasnicka, H., Paradowski, M., Borysewicz, K.: Capillaroscopy Image Analysis as an Automatic Image Annotation Problem. In: Proc. of CISIM 2007, pp. 266– 271 (2007) 11. Farzin, H., Abrishami-Moghaddam, H., Moin, M.-S.: A Novel Retinal Identification System. EURASIP Journal on Advances in Signal Processing (2008) 12. Cornea, N.D.: Curve-Skeletons: Properties, Computation and Applications. PhD Thesis, The State University of New Jersey (2007) 13. Kirbas, C., Quek, F.: A Review of Vessel Extraction Techniques and Algorithms. ACM Comput. Surv. 36(2), 81–121 (2004)
Part V
Miscellaneous Applications
Application of Rough Sets in Combined Handwritten Words Classifier
Jerzy Sas 1 and Andrzej Zolnierek 2
1 Wroclaw University of Technology, Institute of Applied Informatics, Wyb. Wyspianskiego 27, 50-370 Wroclaw, Poland [email protected]
2 Wroclaw University of Technology, Faculty of Electronics, Chair of Systems and Computer Networks, Wyb. Wyspianskiego 27, 50-370 Wroclaw, Poland [email protected]
Summary. In the paper a multilevel probabilistic approach to hand printed form recognition is described. The form recognition is decomposed into two levels: character recognition and word recognition. On the letter level a rough sets approach is applied. After this level of classification, for every position in the word we obtain either a certain decision, or a subset of possible decisions, or a subset of impossible decisions about the recognized letter. Then, on the word level, probabilistic lexicons are available. The decision on the word level is made using the probabilistic properties of the character classifier and the contents of the probabilistic lexicon. A novel approach to combining these two sources of information about class (word) probabilities is proposed, based on lexicons and on the accuracy assessment of the local character classifiers. Some experimental results and examples of practical applications of the recognition method are also briefly described.
1 Introduction
Handwritten text recognition is still one of the most practical applications of pattern recognition theory, for example in mail sorting, banking operations, education, polling and medical information systems, to name only a few. Plenty of different models and approaches have been used in letter classifier construction, but there are still no sufficiently reliable methods and techniques assuring an acceptable error rate of handwritten word recognition. In order to improve the overall recognition quality, compound recognition methods are applied. One of the most widely used categories of compound methods consists in combining classifiers based on different recognition algorithms and different feature sets [3]. Another possibility consists in using context information and the hidden Markov model [5], [7]. Yet another approach divides the recognition process into levels in such a way that the results of classification on the lower level are used as features on the upper level [9]. A two-level approach is typical in handwriting recognition, in which separate characters are recognized on the lower level and next, on the upper level, the words are recognized, usually with the use of lexicons. Some interesting and promising concepts can be found in [8].
In this approach the most important thing is to recognize at least one or two (or more) letters of the word with possibly high accuracy, because they serve as the basis for word recognition. Let us notice that this kind of reasoning resembles the human method of word classification when we have problems with reading, for example, medical text hand printed by physicians: we also try to find any letter of which we are sure, and then on this basis we try to match a word. Because the certainty that a particular letter is recognized correctly plays the crucial role in word recognition, we used rough sets theory [1], [6] in the algorithm of letter recognition. In this theory we deal with decision rules which are either certain or possible [2], [10]. In our approach, recognizing independently letter by letter in the data field using the rough sets approach, we try to recognize the word included in it. Moreover, we assume that for every data field we have a corresponding set of words (data field lexicon) which can appear in it. The organization of the paper is as follows: after the introduction, the problem statement is presented in the second section. Next, the description of the rough sets algorithm for letter recognition is presented. Subsequently, in Section 4, a new algorithm of word recognition using classifier interleaving is described in detail. In Section 5 the results of the empirical investigation are presented, and in the end we conclude the paper.
2 Problem Statement
Let us consider a paper form F designed to be filled with hand printed characters. The form consists of data fields. Each data field contains a sequence of characters of limited length coming from the alphabet A. Data fields do not have to be filled completely - only the leading part of each field must be filled with characters. We assume that the actual length of the filled part of a data field can be faultlessly determined. The set A can be different for each field; typically we deal with fields that can contain only digits, letters or both of them. For each data field there exists a probabilistic lexicon L. The lexicon contains the words that can appear in the data field and their probabilities:

L = {(W1, p1), (W2, p2), ..., (WN, pN)},    (1)
where Wj is a word consisting of characters from A, pj is its probability and N is the number of words in the lexicon. Our aim is to recognize the successive words written in the successive data fields of the paper form F on the basis of its scanned image. For every data field the recognition process can be divided into two levels, naturally corresponding to the two-level form structure:
• the character (alphabetical) level - where separate characters are recognized,
• the word level - where the contents of the data field is recognized, based on the alphabetical-level classification results, their probabilistic properties and the probabilistic lexicon (1).
We assume next that on the alphabetical level a classifier Φ is given which recognizes a character c ∈ A on the basis of its image x, i.e. Φ(x) = c, and furthermore that the characters in a sequence of data fields are recognized independently. The probabilistic property (quality) of Φ is described by the conditional probabilities of appearance of character w ∈ A:

p_{wc} = P(w \mid \Phi(x) = c),   (2)
which for all w, c ∈ A form the confusion matrix P_Φ of the rule Φ. In practice, the probabilities (2) can be obtained by manually verifying the results of form processing. Any classifier can be applied on the character level, but in the experiments reported further in this paper we used a multilayer perceptron (MLP) and a classifier based on rough sets theory. Moreover, in both classifiers we used a vector of directional features [4]. Let the length |W| of the currently recognized word W ∈ L be equal to n. This fact defines the probabilistic sublexicon L_n:

L_n = \{(W_k, q_k)\}_{k=1}^{N_n}, \quad W_k \in L, \; |W_k| = n,   (3)
i.e. the subset of L with modified probabilities of the words:

q_k = P(W_k \mid |W_k| = n) = \frac{p_k}{\sum_{j:|W_j|=n} p_j}.   (4)
The sublexicons (3) can be considered as a soft classifier Ψ_L which maps the feature space {|W_k| : W_k ∈ L} into the product [0,1]^{N_n} or, equivalently, for each word length n produces the vector of decision supports

s = (s_1, s_2, \ldots, s_{N_n}),   (5)
where for Ψ_L the support s_k of decision W_k is equal to q_k. Let us suppose next that the classifier Φ, applied n times on the character level, has recognized the sequence of characters (word) C = (c_1, c_2, ..., c_n) on the basis of the character images X = (x_1, x_2, ..., x_n), namely:

\Phi(X) = C.   (6)
Such an activity of the classifier Φ will be further treated as an action of a soft classifier Ψ_C, which - as previously - produces the vector of decision supports (5) for words W_k = (w_1^{(k)}, w_2^{(k)}, ..., w_n^{(k)}) ∈ L_n, where now

s_k(\Psi_C) = P(W_k \mid \Phi(X) = C) = \prod_{j=1}^{n} P(w_j^{(k)} \mid \Phi(x_j) = c_j) = \prod_{j=1}^{n} p_{w_j^{(k)}, c_j}.   (7)
Now our purpose is to build a soft classifier Ψ_W for word recognition as a fusion of the lexicon-based classifier Ψ_L and the character-based classifier Ψ_C. In the following sections a number of possible combination methods are discussed.
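To make the two component classifiers concrete, the following minimal Python sketch (not code from the original paper; the toy lexicon, confusion matrix values and recognized word are illustrative assumptions) shows how the sublexicon supports (4) and the character-based word supports (7) could be computed.

def sublexicon_supports(lexicon, n):
    """Eq. (4): renormalize lexicon probabilities over words of length n."""
    subset = {w: p for w, p in lexicon.items() if len(w) == n}
    total = sum(subset.values())
    return {w: p / total for w, p in subset.items()} if total > 0 else {}

def character_supports(confusion, recognized, sublex):
    """Eq. (7): support of word W_k given the recognized character sequence C."""
    supports = {}
    for word in sublex:
        s = 1.0
        for w_char, c_char in zip(word, recognized):
            # confusion[(w, c)] plays the role of P(w | classifier output = c)
            s *= confusion.get((w_char, c_char), 0.0)
        supports[word] = s
    return supports

# Illustrative usage with an assumed toy lexicon and confusion matrix.
lexicon = {"kot": 0.5, "kos": 0.3, "dom": 0.2}
confusion = {("k", "k"): 0.9, ("d", "k"): 0.1, ("o", "o"): 1.0,
             ("t", "t"): 0.8, ("s", "t"): 0.2}
q = sublexicon_supports(lexicon, 3)          # lexicon-based classifier Psi_L
s = character_supports(confusion, "kot", q)  # character-based classifier Psi_C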
3 Application of Rough Sets in Letter Recognition

In order to classify every letter (pattern) we need some more general information to take a valid decision, namely a priori knowledge concerning the general associations that hold between the letter on the one hand and the vector of features which can be extracted from its scanned image on the other. From now on we assume that this knowledge has the form of a so-called learning set of length k:

S_k = ((x_1, j_1), (x_2, j_2), \ldots, (x_k, j_k)).   (8)
In consequence, the algorithm of letter recognition Φ is the same for every letter (v = 1, ..., n) in the word and is of the following form:

c_v = \Phi(S, x_v).   (9)
Applying the rough sets theory ([6], [10]) to the construction of algorithm (9), we consider the training set (8) as a decision system Ds = (Un, Co, Da). In this system Da is the decision attribute, while Un and Co are finite sets called the universe and the set of condition attributes, respectively. For every attribute a ∈ Co we determine its set of possible values V_a, called the domain of a. Such an information system can also be represented as a table in which every row represents a single pair (x_v, j_v), i.e. the set of features is the set of conditional attributes. Taking into account the set of condition attributes Co, let us denote by X_c the subset of Un for which the decision attribute is equal to c, c ∈ A. Then, for every c we can define the Co-lower approximation Co_*(X_c) and the Co-upper approximation Co^*(X_c) of the set X_c ([6]). According to the rough sets theory, the lower approximation of the set X_c is the set of objects x ∈ Un for which, knowing the values of the condition attributes Co, we can say for sure that they belong to the set X_c. The upper approximation of the set X_c is the set of objects x ∈ Un for which, knowing the values of the condition attributes Co, we cannot say for sure that they do not belong to the set X_c. Consequently, we can define the Co-boundary region of X_c as follows:

CoN_B(X_c) = Co^*(X_c) - Co_*(X_c).   (10)
If for any c the boundary region of X_c is the empty set, i.e. CoN_B(X_c) = ∅, then X_c is crisp, while in the opposite case, i.e. CoN_B(X_c) ≠ ∅, we deal with a rough set. For every decision system we can formulate its equivalent description in the form of a set of decision formulas For(Co). Each row of the decision table is represented by a single if-then formula, where on the left side of the implication we have the logical product (and) of expressions stating that every condition attribute is equal to its value, and on the right side we have the expression that the decision attribute is equal to some letter c. Generally, these
formulas can be used for constructing a pattern recognition algorithm [10], but in this paper we propose a rough-sets-based algorithm for letter recognition support. It means that for every recognized letter we calculate the output of the algorithm in the form of a strength factor for every possible decision. The construction of classifier (9) from the training set (8) can then be presented as the following steps:
1. If the attributes are real numbers then a discretization preprocessing is needed first. After this step, the value of each attribute is represented by the number of the interval in which it is included. Of course, for different attributes we can choose different numbers of intervals in order to obtain their proper covering; let us denote by ν_{p_l} the p_l-th value or interval of the l-th attribute (l = 1, ..., d). In this case each attribute is equivalent to the corresponding feature.
2. The next step consists in finding the set For(Co) of all decision formulas from (8), which have the following form:

IF (x^{(1)} = ν_{p_1}) and (x^{(2)} = ν_{p_2}) and ... and (x^{(d)} = ν_{p_d}) THEN Φ(S, x) = c.   (11)

Of course, it can happen that from the training set (8) we obtain more than one rule for a particular letter. Moreover, rules with the same predicate can indicate different letters.
3. For every formula (11) we determine its strength factor. In this paper this factor is equal to the inverse of the number of different letters indicated by the same predicate during the learning procedure.
4. For the set of formulas For(Co), for every c ∈ A we calculate the Co-lower approximation Co_*(X_c) and the boundary region CoN_B(X_c).
5. In order to classify the v-th pattern x_v (after discretizing its attributes if necessary), we look for matching rules in the set For(Co), i.e. we take into account the rules whose left-hand condition is fulfilled by the attributes of the recognized letter image.
6. If there is only one matching rule, then we classify this letter image as the letter indicated by its decision attribute c, because such a rule certainly belongs to the lower approximation of all rules indicating c, i.e. this rule is certain and its strength factor is equal to 1. If there is more than one matching rule in the set For(Co), the recognized pattern should be classified by the rules from the boundary regions CoN_B(X_c), c ∈ A; in this case we take into account all possible decisions with the strength factor equal to the inverse of the number of possible decisions, i.e. we take into account the rules which are possible.
7. If for any c ∈ A we can find in the set For(Co) neither a certain nor a possible rule, it means that for the recognized pattern we can exclude such a decision.
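The following Python sketch is an illustrative reconstruction of this procedure (not code from the paper; the discretized training data and names are assumed). It extracts rule predicates, computes their strength factors and returns per-letter supports.

def extract_rules(training_set):
    """Group training pairs by their (discretized) attribute vector.

    Each distinct attribute vector plays the role of a rule predicate;
    the set of letters it points to determines whether the rule is
    certain (one letter) or only possible (several letters)."""
    rules = {}
    for attributes, letter in training_set:
        rules.setdefault(tuple(attributes), set()).add(letter)
    # Strength factor = inverse of the number of letters sharing the predicate.
    strength = {pred: 1.0 / len(letters) for pred, letters in rules.items()}
    return rules, strength

def letter_supports(rules, strength, attributes, alphabet):
    """Return a support value for every letter of the alphabet."""
    pred = tuple(attributes)
    supports = {c: 0.0 for c in alphabet}
    if pred in rules:
        for letter in rules[pred]:
            supports[letter] = strength[pred]
    return supports

# Toy example with already discretized attribute vectors (assumed data).
train = [((1, 2), "a"), ((1, 2), "b"), ((0, 3), "c")]
rules, strength = extract_rules(train)
print(letter_supports(rules, strength, (1, 2), alphabet="abc"))
# {'a': 0.5, 'b': 0.5, 'c': 0.0} - the rules for (1, 2) are only "possible"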
4 Application of Rough Sets in Classifier Interleaving on the Word Level

The general concept applied here is to mimic the way a human typically proceeds when reading barely legible handwriting. One starts by selecting those fragments of the word where the characters can be recognized relatively reliably, and these characters are then assumed to be fixed. The remaining fragments of the word are recognized by matching against those words in the vocabulary which have the fixed letters at the corresponding positions. In our approach, rough sets are used to assess the certainty of the letter recognition. The word recognition algorithm with classifier interleaving was adapted so as to apply rough sets in the assessment of character recognition reliability. The concept of classifier interleaving is described in detail in [7]; below it is briefly outlined. Let N = {1, 2, ..., n} be the set of numbers of character positions in a word W ∈ L_n and let I denote a subset of N. I is the subset of positions in the word for which the recognition based on the features extracted from a character image is assumed to be reliable. In classifier interleaving the algorithm Φ is applied for recognition of the characters on positions I and next - using these classification results - the lexicon L_n (or the algorithm Ψ_L) is applied for recognition of the whole word W. Let C^I = {c_i, i ∈ I} be the set of characters on positions I which have been recognized by the classifier Φ, i.e. Φ(X^I) = C^I. Hence for any set of characters W^I = {w_i, i ∈ I, w_i ∈ A} we have:

P(W^I \mid \Phi(X^I) = C^I) = \prod_{i \in I} P(w_i \mid \Phi(x_i) = c_i) = \prod_{i \in I} p_{w_i c_i}.   (12)
The above formula determines the conditional probability that at the positions I of the word to be recognized there are characters W^I, provided that the rule Φ recognized the characters C^I. Since the whole word W_k ∈ L_n consists of the characters W_k^I and W_k^Ī, i.e. the characters on positions I and Ī respectively, we have the following support vector (5) of the combined rule Ψ_W:

s_k(\Psi_W) = P(W_k \mid C^I) = P(W_k^I \cap W_k^{\bar{I}} \mid C^I) = \frac{P(W_k^{\bar{I}} \mid W_k^I \cap C^I)\, P(W_k^I \cap C^I)}{P(C^I)},   (13)

and after simple transformations we get:

s_k(\Psi_W) = P(W_k \mid C^I) = P(W_k^I \mid C^I)\, P(W_k \mid W_k^I).   (14)
The first factor of (14) is given by (12), whereas the second one can be calculated as follows:

P(W_k \mid W_k^I) = \frac{q_k}{\sum_{i:\, W_i \text{ contains } W^I} q_i}.   (15)
Selection of the subset I of N is crucial for the final word recognition accuracy. Intuitively, the subset I should contain those positions for which the character recognition algorithm gives the most reliable results. In [7] the self-assessment of the character classifier was used in the selection of the subset I. The self-assessment was based on the entropy of the resultant support factors calculated by the final word classifier. Here we extend this idea by additionally utilizing the strength factor calculated as shown in the previous section. Let us assume that the image x is equivalent to its attribute vector in the attribute space Co. The domain of attributes is divided into disjoint areas V = {v_1, v_2, ..., v_K}. The partitioning of Un into V is defined by the predicates of the decision rules (11). If an area v_k contains elements of the learning set representing only a single class, then the strength factor Sf(v_k) for this area is equal to 1.0. In the other extreme case, if all classes are represented in an area, then its strength factor is the lowest and equal to 1/N_A, where N_A is the number of characters in the alphabet. In the first case, if additionally the class recognized by the classifier Φ(x_i) is the same as the class represented solely in the area v_k, then the recognition of x_i is assumed to be completely certain. Therefore, if there are positions in the sequence (x_1, x_2, ..., x_n) such that x_i ∈ v_k ∧ Sf(v_k) = 1.0 ∧ Φ(x_i) ∈ C(v_k), then i is assigned to I. The eventual allocation of the remaining positions in the word to the set I is done so as to maximize the reliability assessment of the whole word recognizer. The quality of the whole word recognizer can be assessed by the entropy of the support factors calculated by the word recognizer for all words in the dictionary, as described in [8]. Now the support factors s_k(Ψ_W) are calculated according to a modified formula (14). The modification consists in taking into account the strength factor while calculating the support for the subsequence of characters corresponding to the set I. The term P(W_k | C^I) in (14) is now replaced by the subword certainty assessment P_{Dn} calculated as:

P_{Dn}(W^I \mid \Phi(X^I) = C^I) = \prod_{i \in I} \frac{P(w_i \mid \Phi(x_i) = c_i)^{Sf(v(x_i))}}{\sum_{w \in A} P(w \mid \Phi(x_i) = c_i)^{Sf(v(x_i))}},   (16)
where v(x_i) denotes the element of V containing x_i. Finding the subset which minimizes the entropy of the support values for the word classifier is a combinatorial problem. The number of subsets to be tested is 2^{(n - n_c)}, where n_c is the number of certain positions in the word. Finding the best subset I by exhaustive search is inefficient. Therefore a simplified method that finds a suboptimal set is suggested. The algorithm starts with I containing all positions i such that Sf(v(x_i)) = 1.0. All remaining positions are sorted by their strength factors in descending order, giving the sequence (x_{i_1}, x_{i_2}, ...). Positions are then temporarily appended to I and the word classifier quality assessment Q(w, (x_1, x_2, ..., x_n), I) is calculated. If appending the position i_j to I improves the total reliability, the position remains in I; otherwise it is rejected and the next position x_{i_{j+1}} is tried. The algorithm can be summarized by the following pseudocode:
Input: the sequence of character images (x_1, ..., x_n);
       the set of areas V = {v_1, v_2, ..., v_K} with their strength factors Sf(v_i), i = 1, ..., K
Output: the set of reliably recognized positions I

I = {i ∈ {1, ..., n} : Sf(v(x_i)) = 1.0}
J = {1, ..., n} − I
k = card(J)
create the sequence (x_{i_1}, x_{i_2}, ...) by sorting the set J in descending order of Sf(v(x_i))
Q_best = Q(w, (x_1, x_2, ..., x_n), I)
for j = 1 to k do
    Q_j = Q(w, (x_1, x_2, ..., x_n), I ∪ {i_j})
    if Q_j > Q_best then
        I = I ∪ {i_j}
        Q_best = Q_j
end for
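A runnable Python sketch of this greedy selection is given below. It is an editor's illustration only: the callable `quality` stands in for the entropy-based assessment Q of the paper and must be supplied by the caller; the names are assumptions, not the authors' code.

def select_reliable_positions(images, strength, quality):
    """Greedy choice of the reliable position set I.

    images   -- sequence of character images (x_1, ..., x_n)
    strength -- strength[i] = Sf(v(x_i)) for position i
    quality  -- callable(I) -> reliability assessment of the word
                classifier when the positions in I are treated as fixed
    """
    n = len(images)
    # Start with all positions whose rough-set area is pure (Sf = 1.0).
    selected = {i for i in range(n) if strength[i] == 1.0}
    remaining = sorted(set(range(n)) - selected,
                       key=lambda i: strength[i], reverse=True)
    best_q = quality(selected)
    for i in remaining:
        q = quality(selected | {i})
        if q > best_q:          # keep the position only if it helps
            selected.add(i)
            best_q = q
    return selected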
5 Experimental Results

In order to compare the described method with other similar approaches, an experiment has been carried out. The experiment consisted in recognizing Polish surnames written in block letters. The lexicon consisted of 9944 surnames. The word appearance probabilities were estimated using statistical patient data gathered from a number of medical information systems. An MLP was used as the character classifier. Its number of outputs was equal to the number of characters in the alphabet. The MLP was trained in a 1-of-L manner, i.e. when presenting a character a_l during training the expected output values are 1 on the l-th output and 0 on all remaining outputs. The training set consisted of manually selected and appropriately processed samples of characters representing typical writing styles of letters. A set of about 18000 character images coming from various writers was used to estimate the MLP confusion matrix. Directional features ([4]) were applied. In order to obtain a practically unlimited number of testing samples, the images of handwriting to be recognized were artificially created by assembling images of characters corresponding to the letters on the consecutive positions of the word. The pool of sample characters used for creating the testing set consisted of 2000 images of handwritten characters, other than the samples used to train the MLP and to estimate its confusion matrix. Other features were used for the recognition reliability assessment and for constructing the rough set areas. The universe used in the reliability assessment with rough sets was constructed using human-perceivable geometric features of the character image:
• character image aspect ratio,
• average count of stroke intersections with horizontal scan lines,
• average count of stroke intersections with vertical scan lines,
• equalization factor of the vertical and horizontal image projection histograms,
• image center of gravity calculated with the first-order Cartesian moments m_{01}/m_{00} and m_{10}/m_{00},
• centralized moments μ_{02} and μ_{20},
• roundness factor.

Table 1. Results of empirical tests - names recognition

Criterion   CB       CL       SA       SR
1 of 1      52.1%    82.5%    91.2%    92.1%
1 of 3      58.3%    87.2%    93.5%    94.1%
1 of 5      61.5%    88.5%    95.0%    95.0%
The observed value ranges of the features were normalized and divided into 2-4 equal-length intervals, depending on the feature. The following word recognition methods were compared:
• CB - recognition based only on maximizing the word support factors calculated as a product of supports for isolated letters; the lexicon is not utilized,
• CL - recognition based only on maximizing the word support factors calculated as a product of supports for isolated letters, with the recognizable words restricted to the lexicon,
• SA - recognition by the combined classifier with reliability assessment based on the support factor entropy - the method used in [8],
• SR - recognition with the method described in this article.
Typically, the word recognition results are used in further stages of complete document or sentence recognition. For this reason, the word recognition performance is evaluated by considering as a success the situation where the actual word is among the K words with the highest support factors calculated by the recognizer. The tests were done for K = 1, 3, 5. The obtained results are presented in Table 1.
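As an illustration of this evaluation protocol (an editor's sketch, not the authors' code; the support dictionaries are assumed to be given), the 1-of-K success criterion and the resulting recall can be computed as follows.

def top_k_success(supports, actual_word, k):
    """True if the actual word is among the k words with highest support."""
    ranked = sorted(supports, key=supports.get, reverse=True)
    return actual_word in ranked[:k]

def top_k_recall(test_cases, k):
    """Fraction of test words recognized under the 1-of-k criterion."""
    hits = sum(top_k_success(supports, word, k) for supports, word in test_cases)
    return hits / len(test_cases)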
6 Conclusions

The idea presented in this article concerns a specific case of classifier fusion, where a classifier based on rough sets supports an MLP classifier. Its role is to improve the assessment of the reliability of isolated character classification. The classifier interleaving technique was adapted to take into account the reliability assessment provided by the rough set regions and their strength
factors. Experimental tests of the proposed method showed an observable improvement in recognition accuracy. For the case where the recognition is considered successful only when the actual word is the one with the highest support factor, the accuracy increases from 91.2% to 92.1%, which corresponds to a 10.2% relative error rate reduction. The proposed algorithm can be further improved. In the described experiments, the character recognition support vector is created using the classifier confusion matrix. For the MLP classifier used here at the character level, better results could possibly be obtained by using the MLP output vector for support factor evaluation. Further improvements could probably be obtained by combining classifiers using directional features and human-perceivable features at the character level. The method can also be adapted for continuous handwritten script recognition.
References 1. Fibak, J., Pawlak, Z., Slowinski, K., Slowinski, R.: Rough Set Based Decision Algorithm for Treatment of Duodenal Ulcer by HSV. Bull. of the Polish Acad. Sci., Bio Sci. 34, 227–246 (1986) 2. Grzymala-Busse, J.: A System for Learning from Examples Based on Rough Sets. In: Slowinski, R. (ed.) Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory, pp. 3–18. Kluwer Academic Publishers, Dordrecht (1992) 3. Kuncheva, L.: Combining Classifiers: Soft Computing Solutions. In: Pal, S., Pal, A. (eds.) Pattern Recognition: from Classical to Modern Approaches, pp. 427–451. World Scientific, Singapore (2001) 4. Liu, C., Nakashima, K., Sako, H.: Handwritten Digit Recognition: Benchmarking of State-of-the-Art Techniques. Pattern Recognition 36, 2271–2285 (2003) 5. Marti, U.V., Bunke, H.: Using a Statistical Language Model to Improve the Performance of an HMM-Based Cursive Handwritting Recognition System. Int. Journ. of Pattern Recognition and Artificial Intelligence 15, 65–90 (2001) 6. Pawlak, Z.: Rough Sets, Decision Algorithms and Bayes Theorem. European Journal of Operational Research 136, 181–189 (2002) 7. Sas, J., Kurzynski, M.: Multilevel Recognition of Structured Handwritten Documents - Probabilistic Approach. In: Proc. 4th Int. Conf. on Computer Recognition Systems, pp. 723–730. Springer, Heidelberg (2005) 8. Sas, J., Kurzynski, M.: Combining Character Level Classifier and Probabilistic Lexicons in Handwritten Word Recognition - Comparative Analysis of Methods. In: Gagalowicz, A., Philips, W. (eds.) CAIP 2005. LNCS, vol. 3691, pp. 330–337. Springer, Heidelberg (2005) 9. Sas, J., Zolnierek, A.: Comparison of feature reduction methods in the text recognition task. In: Rutkowski, L., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2008. LNCS(LNAI), vol. 5097, pp. 729–738. Springer, Heidelberg (2008) 10. Zolnierek, A.: Application of rough sets theory to the sequential diagnosis. In: Maglaveras, N., Chouvarda, I., Koutkias, V., Brause, R. (eds.) ISBMDA 2006. LNCS (LNBI), vol. 4345, pp. 413–422. Springer, Heidelberg (2006)
An Adaptive Spell Checker Based on PS3M: Improving the Clusters of Replacement Words

Renato Cordeiro de Amorim
Birkbeck, University of London, School of Computer Science and Information Systems
Malet Street, London WC1E 7HX
[email protected]
Summary. In this paper the author presents a new similarity measure for strings of characters based on S3M, which is expanded to take into account not only the character set and sequence but also character positions. After demonstrating the superiority of this new measure and discussing the need for a self-adaptive spell checker, the work is further developed into an adaptive spell checker that produces a cluster with a defined number of words for each presented misspelled word. The accuracy of this solution is measured by comparing its results against those of the most widely used spell checker.
1 Introduction

Even nowadays misspelling is still a fairly common problem, and wide research has addressed this field, aiming not only to locate misspelled words but also to provide a list of replacements that is as small as possible while containing the targeted word. It has been found by [2] that around 0.2 to 3 percent of the words in English documents are spelling errors. As one can expect, misspelling is not a problem for English speakers only, and the average percentage of errors tends to be higher when the language in question is not the person's mother tongue; evidence can be found in [3] [4], where 2.4 percent spelling errors were found in texts written by second-language users of Swedish. In general, when there is no word context and the subject is just presented with a number of errors and must respond with his best guess for correcting each word, humans were found to have an average of 74% accuracy [2], which indicates that humans tend to need help when correcting words. As discussed in [3], misspelling can happen for three reasons: because the user does not know the spelling, because of a typing error, or because the user is not completely sure about the spelling. An important thing to note is that although spelling checkers can identify misspellings at a 100% accuracy rate, depending on the quality of their dictionaries, they do not always provide the correct target word in their list of replacements [5], which means that they are able to find that there is something wrong with the
spelling of a word, but they are not able to give proper advice to the user on how this word should be spelled. Pointing to the right word is highly important, especially when the subject has a learning disability such as dyslexia. Research carried out by [7], examining the use of spell checkers by 27 students with learning disabilities, shows that the subjects' performance in correcting words was directly related not only to the words in the list of replacements provided by the spell checker but also to the order of those words. When the target word was the first in the list, these subjects were able to correct 83.5% of their errors, dropping to 71.6% if the target word was not the first and dropping further to 24.7% when no choice was provided. Another interesting fact shown in the same research is that students inaccurately corrected errors over 50% of the time when the spell checker did not provide the target word in the replacement list; this further develops the findings of [6] that poor spellers have difficulty identifying misspelled words. The above facts give more strength to the suggestion of [1][2] that ideal isolated word-error correctors should exceed 90% accuracy when multiple matches may be returned. Surely one of the most used spell checkers in the world is the one embedded in Microsoft Word. This spell checker has already been used for comparison with other algorithms in research: as demonstrated by [8] for MS-Word 97, it was observed that from time to time the list of offered words seemed counterintuitive, and also that non-word errors (when the incorrect word is not found in a dictionary) had a rate of between 0.2 and 6%. In both MS-Word 2003 and MS-Word 2007 the same problems can still be found; for instance the word Uinervtisy (University) is marked as incorrect but no suggestion is given. Also, as a final point in this introduction, the author agrees with the discussion started by [10][11], whose main argument is that there is a need for adaptive interfaces that can anticipate and adapt to the specific spelling mistakes of any user: regardless of how often corrections are made, most spell checkers perform their task independently of the types of mistakes most commonly made by a particular user, since commercial spell checkers are designed for the needs of the general public. Adaptability for this type of application is of extreme importance if really high rates are to be achieved by a spell checker, as, for example, different learning disabilities may lead users to make different characteristic mistakes. The method presented here aims to have a high percentage of correct spellings retrieved (high recall), but unfortunately the price paid to achieve this aim is having in the list of possible target words a high percentage of words other than the correct spelling (low precision), which is common in most spell checkers.
2 The Used Data

In order to test the method presented here, the following two datasets, which are full of spelling mistakes, have been chosen. The first is:

fi yuo cna raed tihs yuo hvae a sgtrane mnid too. Cna yuo raed tihs? Olny 55 plepoe out of 100 cuold. I could not blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg. The phaonmneal pweor of the hmuan mnid, aoccdrnig to a rscheearch at Cambridge Uinervtisy, it deos not mtaetr in waht oerdr the ltteres in a wrod are, the olny iproamtnt tihng is taht the frsit and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it whotuit a pboerlm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Azanmig? I awlyas tghuhot slpeling was ipmorantt! If you can raed tihs forwrad it

The other dataset used can be found at [16] and contains 547 misspellings; this text was incorporated here because it has been used for comparisons between different spell checkers. The first dataset, here called text T, and some of its variants can easily be found on a number of different websites; it has exactly 71 misspelled words, including the same word misspelled twice in different ways. It is important to note that the claims made in it are unverified. The second dataset will be referenced as T2. As pointed out by [8][9], another important point is that the recall of a spell checker does not increase in direct proportion to the size of its dictionary; in other words, bigger dictionaries do not mean better recognition, since a dictionary that is too large can include rarely used words that coincide with misspellings of common words. Taking this into account, a dictionary with 57,103 words was used in the evaluation - a quantity which the author believes to be neither too high nor too low.
3 The New Similarity Measure

Although it may sound odd to people unfamiliar with the field, similarity between words can be measured; for instance, one would normally consider the word university much closer to universe than to pineapple. Research on measuring the similarity between sequences is quite wide and includes the most diverse algorithms, from those calculating the number of modifications necessary to transform one word into another to those calculating similarity based on the quantity of similar items in the sequences. In order to be considered a similarity metric, a function S has to satisfy the following properties [12]:
- Non-Negativity: S(x, y) ≥ 0
- Symmetry: S(x, y) = S(y, x)
- Normalization: S(x, y) ≤ 1
The Sequence and Set Similarity Measure (S3M) is a metric function proposed by [13] and extended in [12] that has been used in problems related to security and to the personalization of webspace - in other words, the idea that a system should decide what information should be presented to a visitor and in what fashion, generating customer-centric websites, which is in fact not too far from this paper's objective of generating a user-centric spell checker. S3M is defined as a weighted linear combination of the length of the longest common sequence (LLCS) and the Jaccard similarity measure, and its formula is as follows, where |A| represents the length of the sequence A:

S^3M(A, B) = p \cdot \frac{LLCS(A, B)}{\max(|A|, |B|)} + q \cdot \frac{|A \cap B|}{|A \cup B|}.   (1)
It is important to note that the parameters p and q of this formula should each be equal to or higher than zero and their sum should be equal to one; this way the result of the formula is normalized. Here the author suggests that, in order to increase its accuracy in the calculation of similarities between words, S3M should be extended with a weighted position similarity related to the number of items in sequence A that are equal in value and position to those in sequence B, yielding the following new formula:

S^3M(A, B) = p \cdot \frac{LLCS(A, B)}{\max(|A|, |B|)} + q \cdot \frac{|A \cap B|}{|A \cup B|} + r \cdot \frac{\sum_{i=1}^{\min(|A|,|B|)} [a_i = b_i]}{\max(|A|, |B|)},   (2)
where p, q and r are ≥ 0 and p + q + r = 1, the measure then becoming the Position, Sequence and Set Similarity Measure (PS3M). The interesting point here is that it is not claimed that the new extension (or in fact any of its three components) will always be needed; PS3M can easily adapt to an environment where it is or is not needed simply by adapting the weight r or, in the case of a different component, p or q.
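A minimal Python implementation of PS3M, written by the editor as an illustration (the function and variable names are not from the paper), could look as follows; LLCS is computed with standard dynamic programming, the set term treats each word as a set of characters, and the default weights are set to one of the configurations reported in Section 4 (0.2, 0.6, 0.2).

def llcs(a, b):
    """Length of the longest common subsequence of strings a and b."""
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def ps3m(a, b, p=0.2, q=0.6, r=0.2):
    """Position, Sequence and Set Similarity Measure, Eq. (2)."""
    assert abs(p + q + r - 1.0) < 1e-9 and min(p, q, r) >= 0
    longest = max(len(a), len(b)) or 1
    seq_term = llcs(a, b) / longest
    set_a, set_b = set(a), set(b)
    set_term = len(set_a & set_b) / len(set_a | set_b) if set_a | set_b else 0.0
    pos_term = sum(x == y for x, y in zip(a, b)) / longest
    return p * seq_term + q * set_term + r * pos_term

# Example: the misspelling "uinervtisy" compared with two candidates.
print(ps3m("uinervtisy", "university"))
print(ps3m("uinervtisy", "universe"))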
4 Experiments Applying PS3M to T

When using a PS3M-based spell checker one can define how many words should be listed in the list of replacements and rank them according to their similarity to the misspelled word; this threshold θ of words can be set to any integer ≥ 1 and ≤ the size of the dictionary. Applying the MS-Word 2003 and 2007 spell checkers to T yielded a recall of 84.5%; both were unable to provide the target word in the list of replacements for the following: Aulaclty, uesdnatnrd, rdanieg, aoccdrnig, rscheearch, Uinervtisy, iproamtnt, whotuit, pboerlm, bcuseae, azanmigt.
The PS3M-based spell checker achieved a much higher rate, having a recall of 98.5% when θ = 3 and one of the following configurations, which were found using steps of 0.1: p, q, r of 0.1, 0.8, 0.1 and of 0.2, 0.6, 0.2. As r is not really high in the above, it becomes important to evaluate the impact of the extension presented in this paper: without it, and using a θ of 4 (which should increase the recall), the best result would be 83.1% (p = 0 and q = 1), which is slightly worse than the recall provided by MS-Word. Other interesting results with θ = 4 were that the worst result found, 0.02%, was obtained when q and r were both zero (and p was 1), and that if p and q were zero (and r equal to 1) the recall would be 0.09%, which suggests that, generally speaking, for the purpose of finding the target word from a misspelled word, the r part of the formula extended here may have a higher importance than p. Experimental results with T2 are discussed following the introduction of a learning process.
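The grid search over p, q and r in steps of 0.1 described above could be reproduced with a short script such as the following (an editor's illustration; `recall_at` and the datasets are assumed to be available, e.g. built on the `ps3m` function sketched earlier).

def best_weights(misspellings, dictionary, theta, recall_at, step=0.1):
    """Exhaustive search over p, q, r >= 0 with p + q + r = 1."""
    steps = int(round(1 / step))
    best = (0.0, None)
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            p, q = i * step, j * step
            r = 1.0 - p - q
            recall = recall_at(misspellings, dictionary, theta, p, q, r)
            if recall > best[0]:
                best = (recall, (p, q, r))
    return best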
5 The Learning Process

As discussed in the introduction, there is a need for self-adaptive spell checkers, in other words spell checkers that learn from the feedback provided by the user; this feedback takes the form of the word chosen by the user from the provided list of replacements for the misspelled word. If the first word in the list of replacements is the target word, it means the parameters p, q and r have good values; if that is not the case, then they have to be updated. A learning rate (α) is therefore introduced as the coefficient by which the parameters are adjusted. An important thing to note is that this adjustment should not change the basic rule p + q + r = 1, so while the learning rate increases one or two of the parameters it should decrease the other(s). For learning, the PS3M formula (2) was decomposed into (3), (4) and (5), which are used to measure the similarity between the misspelled word and the target word (l, m, n) and between the misspelled word and the first word in the returned replacement list (l', m', n'):

l = \frac{LLCS(A, B)}{\max(|A|, |B|)},   (3)

m = \frac{|A \cap B|}{|A \cup B|},   (4)

n = \frac{\sum_{i=1}^{\min(|A|,|B|)} [a_i = b_i]}{\max(|A|, |B|)}.   (5)
Afterwards the differences between the prime and non-prime values have to be calculated, creating Δl, Δm and Δn, where for instance Δl = l - l'.
Positive deltas (whose sum is here represented by δ+) denote the part(s) of the formula in which the parameter(s) p, q or r should be increased in order to get a better recall, and negative ones (whose absolute sum is here represented by δ-) represent the part(s) of the formula in which the parameter(s) should be decreased. Of course the absolute values of these will not necessarily be the same, and it should be noted that the learning algorithm should also take into account the maximum (1) and minimum (0) possible values of the parameters. In order to calculate the possible update of a parameter E ∈ {p, q, r}, which may or may not be the final one, the following formulas should be used, where E' represents a parameter whose delta is positive, E'' a parameter whose delta is negative, and their deltas (ΔE' and ΔE'') represent the delta value (Δl, Δm or Δn) of the respective parameter:

E' = \min\left(1, \; E' + \frac{\Delta E'}{\delta^{+}}\,\alpha\right),   (6)

E'' = \max\left(0, \; E'' - \frac{|\Delta E''|}{\delta^{-}}\,\alpha\right).   (7)
If a parameter reaches its bound (E' = 1 or E'' = 0), the parameters have to be readjusted; to do so one can simply change the learning rate (α) in formulas (6) and (7) to the smallest absolute value among the differences, firstly over all E' and their respective parameters and secondly over all E'' and their respective parameters. In order to evaluate the above learning method, the PS3M spell checker was further developed to include the leave-one-out cross-validation algorithm and initialized with the poor choice of one third for each parameter and a learning rate (α) of 0.05; with these settings a result of 95.6% was obtained. Moving the learning rate (α) to 0.03 slightly decreases the recall, to 94.4%. A different test was also conducted using 0.15, 0.7 and 0.15 as initial values for p, q and r, and the adaptive PS3M spell checker was applied to T2. Using this second dataset and a threshold θ of 10, the algorithm had a recall of 92.09%. According to [16], MS-Word 97 had a recall of 72.2% in a similar test, and none of the other 10 spell checkers presented there had a better result in the 1-to-10 replacement words category.
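The feedback update described by (3)-(7) can be sketched in Python as follows. This is the editor's illustrative reconstruction, not the author's code: the (l, m, n) triples are assumed to be computed with the component formulas (3)-(5) (for example using the helpers of the earlier PS3M sketch), and the final renormalization step is a simplification of the readjustment described above.

def update_weights(weights, target_lmn, suggestion_lmn, alpha=0.05):
    """One feedback step of the adaptive PS3M spell checker.

    weights        -- current (p, q, r)
    target_lmn     -- (l, m, n) between the misspelling and the target word
    suggestion_lmn -- (l', m', n') between the misspelling and the first suggestion
    """
    deltas = [t - s for t, s in zip(target_lmn, suggestion_lmn)]
    pos = sum(d for d in deltas if d > 0)       # delta+
    neg = sum(-d for d in deltas if d < 0)      # delta- (absolute sum)
    new = []
    for w, d in zip(weights, deltas):
        if d > 0:
            new.append(min(1.0, w + d / pos * alpha))     # Eq. (6)
        elif d < 0:
            new.append(max(0.0, w - (-d) / neg * alpha))  # Eq. (7)
        else:
            new.append(w)
    total = sum(new)                            # keep p + q + r = 1
    return tuple(w / total for w in new)

# Example feedback step with illustrative component values.
print(update_weights((1/3, 1/3, 1/3), (0.8, 0.9, 0.7), (0.6, 0.9, 0.9)))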
6 Conclusions and Future Research

The author has argued here that in order to produce optimal results spell checkers need to be user-centric, as different users may have different misspelling patterns. He then presented a similarity measure called the Position, Sequence and Set Similarity Measure (PS3M), which may produce better results than other widely used spell checkers such as the one embedded in Microsoft Word (versions 97, 2003 and 2007), together with a learning algorithm specifically developed to be used with it.
The author demonstrated that the adaptive PS3M spell checker may learn relatively fast, achieving good results after learning the patterns of only 70 words (or fewer), and that it could learn even faster if the parameters p, q and r were given better initial values, for example the generally best values, so that the updates to the parameters would be likely to be much smaller. It should also be noted that PS3M is language-independent: to change the language of the solution one just needs to change its dictionary. It could also be useful when dealing with historical documents, as languages tend to evolve with time, or with other string matching problems such as the ones found in bioinformatics. At present the adaptive PS3M spell checker requires the calculation of similarity between a misspelled word and all the words in a dictionary, which of course generates an undesirable amount of processing. Future research will address the formation of word clusters in the dictionary and the reduction of calculations by measuring the similarities only against the prototypes of those clusters, and will also generate a variable θ depending on the similarities found for a specific misspelled word. As the number of clusters would depend on the dictionary and of course on its language, a clustering method that can find the number of clusters in data, such as the intelligent k-Means [14] or its constrained version shown in a previous work [15], would be a good addition to the PS3M spell checker. While most spell checkers would make the same mistake not once, not twice but always, the adaptive PS3M spell checker would learn from the user's mistakes and adapt, thereby providing better results than other popular spell checkers.
References 1. Hodge, V.J., Austin, J.: A Comparison of a Novel Neural Spell Checker and Standard Spell Checking Algorithms. Pattern Recognition 35, 2571–2580 (2002) 2. Kukich, K.: Techniques for Automatically Correcting Words in Text. ACM Comput. Surveys 24(4), 377–439 (1992) 3. Dalianis, H.: Evaluating a Spelling Support in a Search Engine. In: Andersson, B., Bergholtz, M., Johannesson, P. (eds.) NLDB 2002. LNCS, vol. 2553, pp. 183–190. Springer, Heidelberg (2002) 4. Knutsson, O.: Automatisk spr˚ akgranskning av svensk text (in Swedish) (Automatic Proofreading of Swedish text). Licentiate Thesis, IPLAB-NADA, Royal Institute of Technology, KTH, Stockholm (2001) 5. Montgomery, D.J., Karlan, G.R., Coutinho: The Effectiveness of Word Processor Spell Checker Programs to Produce Target Words for Misspellings Generated by Students With Learning Disabilities. JSET E Journal 16(2) (2001) 6. Gerlach, G.J., Johnson, J.R., Ouyang, R.: Using an electronic speller to correct misspelled words and verify correctly spelled words. Reading Improvement 28, 188–194 (1991) 7. MacArthur, C.A., Graham, S., Haynes, J.B., DeLaPaz, S.: Spell checkers and students with learning disabilities: Performance comparisons and impact on spelling. Journal of Special Education 30, 35–57 (1996)
8. Garfinkel, R., Fernandez, E., Gobal, R.: Design of an Interactive spell checker: Optimizing the list of offered words. Decision Support Systems (35), 385–397 (2003) 9. Nuance, International CorrectSpell (2000), http://www.lhs.com/tech/icm/proofing/cs.asp (accessed 2003) 10. Seth, D., Kokar, M.M.: SSCS: A Smart Spell Checker System Implementation Using adaptive Software Architecture. In: Laddaga, R., Shrobe, H.E., Robertson, P. (eds.) IWSAS 2001. LNCS, vol. 2614, pp. 187–197. Springer, Heidelberg (2003) 11. Vaubel, K.P., Gettys, C.F.: Inferring user expertise for adaptive interfaces. Human Computer Interaction 5, 95–117 (1990) 12. Kumar, P., Bapi, R.S., Krishna, P.R.: SeqPAM: A Sequence Clustering Algorithm for Web Personalization. In: Poncelet, P., Masseglia, F., Teissseire, M. (eds.) Successes and New Directions in Data Mining, pp. 17–38. Information Science Reference, USA (2007) 13. Kumar, P., Rao, M.V., Krishna, P.R., Bapi, R.A., Laha, A.: Intrusion Detection System Using Sequence and Set Preserving Metric. In: Kantor, P., Muresan, G., Roberts, F., Zeng, D.D., Wang, F.-Y., Chen, H., Merkle, R.C. (eds.) ISI 2005. LNCS, vol. 3495, pp. 498–504. Springer, Heidelberg (2005) 14. Mirkin, B.: Clustering for Data Mining: A Data Recovery Approach. Chapman and Hall/CRC, Boca Raton (2005) 15. Amorim, R.C.: Constrained Intelligent K-Means: Improving Results with Limited Previous Knowledge. In: Proceeding of Advanced Engineering Computing and Applications in Sciences, pp. 176–180. IEEE Computer Society, Los Alamitos (2008) 16. Aspell .Net, Spell Checker Test Kernel Results (2008) (updated May 14, 2008), http://aspell.net/test/cur/ (accessed February 10, 2009)
Visual Design Aided by Specialized Agents

Ewa Grabska and Grażyna Ślusarczyk
The Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, Reymonta 4, 30-059 Kraków, Poland
[email protected], [email protected]
Summary. This paper deals with the system of agents supporting the user in visual designing. Agents are treated as a concurrent modular system which is able to solve complex design tasks. Designs are represented in the form of diagrams, while their internal representations have the form of attributed hierarchical hypergraphs. Agents analyse both diagrams and hypergraphs and are responsible for physical, perceptional, functional and conceptual design actions. The design context is expressed by the environment in which agents act. The proposed approach is illustrated by examples of designing floor-layouts.
1 Introduction

The modern design process is characterized by the increased importance of the visualization of design concepts and tools. The interaction between the designer and computer visual tools strongly determines the course of designing. Therefore recent frameworks for design focus on the dynamic character of the context in which the designing takes place. In this paper the ideas on the dynamic character of design sketched in [1] are developed using a system of specialized intelligent agents. The objective of this paper is to present a visual system for designing which is supported by four types of specialized cooperating agents. Agents of each type are responsible for physical, perceptional, functional and conceptual actions, respectively. The design context is expressed by the environment in which the agents act. The proposed system of agents is treated here as a concurrent modular system which is able to solve complex design tasks. In the described knowledge-based decision support design system, designs are represented in the form of diagrams forming a specific visual language. The syntactic knowledge of this language is defined by means of attributed hierarchical hypergraphs, which constitute internal representations of design diagrams. Hyperedges of hypergraphs represent both diagram components and the multi-argument relations among them. Hierarchical hyperedges correspond to groups of diagram components. Attributes assigned to hyperedges encode the semantic design knowledge.
The analysis of both the design diagrams and the knowledge stored in the attributed hypergraph representation of the diagrams allows the system of agents to reason about designs and to support the designer by suggesting further steps and preventing him/her from creating designs not compatible with the specified constraints or criteria. Our approach to design based on the system of agents is illustrated by examples of designing floor-layouts.
2 Design Diagrams and Hierarchical Hypergraphs

In our approach two representations of floor-layouts are considered. The first one is a simplified architectural drawing called a design diagram. It is composed of polygons which are placed in an orthogonal grid. These polygons represent components of a floor-layout, like functional areas or rooms. The mutual location of the polygons is determined by design criteria. Dashed lines separating polygons denote the accessibility relations between areas, while continuous lines shared by polygons denote the adjacency relations between them. A design diagram representing a layout of an attic including two flats is shown in Fig. 1a. The second representation of a layout, which is the internal representation of a design diagram, has the form of an attributed hypergraph with a hierarchical structure [3, 4] (Fig. 1b). This hypergraph contains two types of hyperedges, which represent components and spatial relations and are labelled by their names. Hyperedges of the first type represent areas or rooms of the floor-layout. Nodes assigned to these hyperedges correspond to the walls of the areas or rooms. Hyperedges of the second type represent relations among floor-layout components and can be either directed or, in the case of symmetrical relations, non-directed. While designing floor-layouts, only spatial relations which are by nature symmetrical (accessibility and adjacency) are used. Hyperedges representing areas of a layout can contain nested hypergraphs. Hierarchical hyperedges (with non-empty contents) represent areas with different functions composed of groups of other areas or rooms. An example of the internal representation of the diagram presented in Fig. 1a is shown in Fig. 1b. This hypergraph is composed of five component hyperedges, two of which are hierarchical ones, and four relational hyperedges. The hierarchical hyperedges represent the whole attic and the living area in it, respectively, while the other three hyperedges represent the two flats M1 and M2 and the landing, respectively. Three relational hyperedges represent the accessibility relation between the landing and both flats and between the flats. One hyperedge represents the adjacency relation between one wall of the landing and the second flat. To represent features of layout components and relations between them, attributing of nodes and hyperedges is used. Attributes represent properties (like shape, size, position, number of windows or doors) of the elements corresponding to hyperedges and nodes.
Fig. 1. a) A design diagram representing a layout of an attic, b) a hierarchical hypergraph corresponding to this diagram
3 A Model of a System of Agents

An agent is a system module situated in an environment and capable of autonomous action in this environment in order to satisfy design requirements. The environment of a design agent Ag is described as a set W_Ag = {w_0, w_1, ...} of possible states of its world. The effectoric capability of an agent is assumed to be represented by a set A = {a_0, a_1, ...} of actions [5]. Visual design imposes special requirements on the design agents used. In the design context intelligent agents have specified goals to achieve. The agents' goals are imposed by the types of design actions which are considered during the design process. Four types of design actions are distinguished: physical, perceptional, functional and conceptual actions. All types of actions are supported by the system of agents. In this paper the following four types of design agents are proposed:
• an editor agent, denoted by Ag^E, performing physical actions,
• a spotter agent, denoted by Ag^S, responsible for perception,
• an interpreter agent, denoted by Ag^I, associating meaning with images, and
• a creator agent, denoted by Ag^C, determining additional goals.
The editor agent Ag^E performs physical actions like drawing, copying and erasing diagram elements. It interprets data obtained from the creator agent Ag^C. These data have an influence on the agent's internal state and determine which physical action is to be taken. The spotter agent Ag^S discovers visual features of diagrams. It perceives spatial relations between diagram elements, for instance closeness or neighbourhood, and compares elements, for example searching for differences or similarities between them. Its perception process is based on the analysis of both diagrams and their internal representations in the form of hypergraphs. The former analysis consists in discovering geometrical changes made in the diagram by the designer. The goal of the latter is to find topological properties of the design diagram. On the basis of this perception, new shapes and/or relations are discovered in the process of evaluating its new internal state. The actions of Ag^S consist in sending information about the discovered features to the agents Ag^I and Ag^C. The interpreter agent Ag^I associates meaning with the features discovered by agent Ag^S and relates abstract concepts to these features. The patterns of predefined associations and abstract concepts are stored in its long-term memory. Additionally it can evaluate design solutions. The results of its actions are sent to the agent Ag^C. The creator agent Ag^C determines design goals and requirements and searches for similar design solutions. The arguments of its performance are obtained from agents Ag^S or Ag^I. The new design requirements are communicated to the designer, while proposed solutions are presented in the form of diagrams drawn by the agent Ag^E. A visual design supporting system of agents is composed of the four agents Ag^E, Ag^S, Ag^I and Ag^C, and the environment which they occupy: the designer, the visual site and the internal representations of diagrams. The behaviour of the agents is determined by design requirements obtained as a result of the interaction with the designer by means of a visual site, for instance a monitor screen. The partial design solutions that the designer visualises on the visual site are automatically transformed into hypergraphs. A system of design agents is a structure AS = (Env, Ag^E, Ag^S, Ag^I, Ag^C) [6], where the environment Env is a tuple (W, K_{Ag^E}, K_{Ag^S}, K_{Ag^I}, K_{Ag^C}, τ):
1. W is a set of possible states of the world,
2. K_Ag : W → W_Ag ∈ 2^W, where Ag ∈ {Ag^E, Ag^S, Ag^I, Ag^C}, is a partition of W for every agent Ag, which characterizes the information available to the agent in every environment state,
3. τ : W × A_{Ag^E} × A_{Ag^S} × A_{Ag^I} × A_{Ag^C} → 2^W is a state transformer function, which maps an environment state and one action of each agent to the set of environment states that can result from the performance of these actions in this state.

A system of design agents which supports the designer in a visual design process is presented in Fig. 2. In visual design a portion of any surface on which
Fig. 2. A system of design agents in a visual design process
diagrams are generated is called a visual site. The designer has an internal world, being a mental model of a design task that is built up of concepts and perceptions stored in his memory, and an external world composed of representations outside the designer. Both the visual site and the internal representations of diagrams drawn by the designer can be treated as situations in the external world built up outside the designer. The designer takes decisions about design actions in his internal world and then executes them in the external world. Therefore the designer can be treated as an intelligent agent whose decision-making process is supported by a system of specialized agents. An agent can be equipped with functions which enable it to remember the decisions taken in previous states of the world. Such an agent possesses two types of memory: a short-term memory and a long-term one. The short-term memory M_S allows the agent to remember a few recent states and actions taken. The long-term memory M_L stores the agent's generalized experiences from the past. Let I denote the set of the agent's internal states, being a Cartesian product of both types of memory, I = M_S × M_L. A designer agent Ag with a short-term (M_S) and long-term (M_L) memory situated in the environment W_Ag is a tuple (I, A, σ, η, χ, α), where:
1. I = {i_0, i_1, ...} is a set of the designer's internal states,
2. A = {a_0, a_1, ...} is a set of his actions,
3. σ : W_Ag → S is the environment sensing process, where S = {s_0, s_1, ...} is a set of visual site states,
4. η : I × P → I, where P = {p_0, p_1, ...} is a set of perceptual states, is the internal state changing process based on the interpretation of the visual site,
5. χ : I → C, where C = {c_0, c_1, ...} is a set of conceptual states, is the decision making process during which an internal state is evaluated and new goals are specified,
6. α : C → A is the action process which translates goals specified by a conceptual state to an action that should be taken by the designer.
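To illustrate this model, the following Python sketch (an editor's illustration with assumed class and method names, not code from the paper) outlines how the four specialized agents could cooperate in one design cycle.

class SpotterAgent:
    def perceive(self, diagram, hypergraph):
        """Detect geometric changes and topological properties (stubbed here)."""
        return {"asymmetric_flats": True, "bathroom_accessible_from_kitchen": True}

class InterpreterAgent:
    def interpret(self, features):
        """Attach meaning to the perceived features."""
        return [name for name, present in features.items() if present]

class CreatorAgent:
    def propose(self, interpretations):
        """Turn interpretations into design suggestions."""
        suggestions = []
        if "bathroom_accessible_from_kitchen" in interpretations:
            suggestions.append("replace kitchen-bathroom accessibility by adjacency")
        return suggestions

class EditorAgent:
    def draw(self, suggestions):
        """Perform the physical actions: redraw the diagram accordingly."""
        for s in suggestions:
            print("editing diagram:", s)

def design_cycle(diagram, hypergraph):
    features = SpotterAgent().perceive(diagram, hypergraph)
    interpretations = InterpreterAgent().interpret(features)
    suggestions = CreatorAgent().propose(interpretations)
    EditorAgent().draw(suggestions)

design_cycle(diagram=None, hypergraph=None)  # placeholder inputs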
4 Examples of Design Supported by Specialized Agents

The proposed approach is illustrated here by examples of designing floor-layouts. The designer draws diagrams representing successive stages of the floor-layout design process. Each diagram, being a partial solution, is analysed and evaluated by the system of design agents with respect to the given requirements. Then the agents make a decision about an action which is to be taken to correct the diagram, if needed. The proposals of the system are shown to the designer. Let us consider the example of designing a floor plan when its overall shape is given (Fig. 3a). Having specified design requirements as to the number of rooms, the designer creates an initial floor-layout (Fig. 3b). The objective of the system of agents is to support the designer in searching for original and at the same time valid floor-layout designs. In the presented example the agents should propose modified floor-layouts which would cover the whole floor plan. The spotter agent considers regions of the plan which are situated outside the floor-layout [2]. In the floor-layout presented in Fig. 3b it finds two areas of this type. The interpreter agent looks for new possible shapes of rooms by generating all possible partitions of the found areas using a maximal line representation of shapes (Fig. 4b). Then it synthesizes new shapes from the ones obtained by the partition and the shapes representing rooms in the considered layout. Two examples of the obtained new shapes of rooms are presented in Fig. 4c. Then the new shapes are sent to the creator agent which, using its experience and knowledge, creates new floor-layouts. The creator agent can also, directly on the basis of the areas found by the spotter agent, generate new floor-layouts by simply enlarging existing rooms to include the regions which lie outside the initial floor-layout. This agent checks whether the generated floor-layout does not include overlapping areas or a number of rooms greater than the required one. If all conditions are satisfied, then the new floor-layouts are drawn by the editor agent and presented to the designer. An example of a floor-layout created by agent Ag^C and then drawn by agent Ag^E is shown in Fig. 3c.
Fig. 3. a) A floor plan b) an initial floor-layout in the floor plan c) a floor-layout created by the system of agents
Fig. 4. a) A hole b) partition of the hole c) two synthesized shapes
Let us consider an example of designing a floor-layout of the attic. The first diagram drawn by the designer represents the area of the attic with the living area divided into two flats and the landing (Fig. 1a). The initial hierarchical hypergraph representing this diagram, that is automatically generated
Fig. 5. a) A design diagram representing a layout of an attic, b) a hierarchical hypergraph corresponding to this diagram
by the system, is shown in Fig. 1b. The spotter agent observes the fact that the flats are not symmetrical and that they are accessible from each other. The non-symmetrical design results from the fact that the first flat has a rectangular shape, while the other one is L-shaped. The L-shape of the area bounding the second flat is recognised by the interpreter agent. The appropriate information is sent by the agents Ag^S and Ag^I to the creator agent, which notifies the designer about the asymmetrical areas of the flats and also proposes to replace the accessibility between the flats by the adjacency relation. The editor agent draws a diagram where the dashed line between the flats is replaced by a continuous one. In the next steps the designer divides the first flat into a living room, kitchen, bathroom and lobby, and the second flat into a kitchen, bathroom, lobby and two bedrooms (Fig. 5a). A hierarchical layout hypergraph corresponding to the obtained diagram is shown in Fig. 5b. Analysing this diagram, the spotter agent notices that in the first flat the bathroom is directly accessible from the kitchen. Thus, the system of agents suggests to the designer the solution where the accessibility relation between these two rooms is replaced by the adjacency.
5 Conclusions
This paper presents an attempt to support the design process by a system of specialized design agents. The results obtained so far seem promising. Intelligent agents can effectively assist in the design process. They not only cooperate with each other but also dynamically react to changes in the design context, which is treated as a part of the environment. In our future work the process of generating new designs by agents will be supported by animation.
References
1. Grabska, E., Ślusarczyk, G., Grześ, P.: Dynamic Design with the Use of Intelligent Agents. In: Proceedings of CORES 2005, pp. 827–834. Springer, Heidelberg (2005)
2. Grabska, E., Grzesiak-Kopeć, K., Ślusarczyk, G.: Designing Floor-Layouts with the Assistance of Curious Agents. In: Alexandrov, V.N., van Albada, G.D., Sloot, P.M.A., Dongarra, J. (eds.) ICCS 2006. LNCS, vol. 3993, pp. 883–886. Springer, Heidelberg (2006)
3. Minas, M.: Concepts and Realization of a Diagram Editor Generator Based on Hypergraph Transformation. Science of Computer Programming 44, 157–180 (2002)
4. Ślusarczyk, G.: Hierarchical Hypergraph Transformations in Engineering Design. Journal of Applied Computer Science 11, 67–82 (2003)
5. Wooldridge, M.J.: Intelligent Agents. In: Weiss, G. (ed.) Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, Cambridge, MA (1999)
6. Wooldridge, M.J., Lomuscio, A.: Multi-Agent VSK Logic. In: Proceedings of JELIA 2000. Springer, Heidelberg (2000)
Head Rotation Estimation Algorithm for Hand-Free Computer Interaction
Rafał Kozik
Institute of Telecommunications, University of Technology & Life Sciences, Kaliskiego 7, 85-796 Bydgoszcz, Poland
[email protected]
Summary. In this article a robust method of hands-free interaction with a computer is proposed and tested. Results of algorithms based on optical flow and rapid face detection are presented.
1 Introduction and Motivation
An alternative way of controlling a computer makes it possible to do without devices such as a touch pad, mouse or joystick. Modern operating systems are equipped with voice recognition systems, which allow the computer to be controlled by voice commands. One drawback of such a system is that the situation becomes complicated when we want to tell the computer to move the mouse cursor from one position to another. Human-computer interaction without hands is very useful for people who do not have full use of their hands. Unfortunately, many people do not have sufficient use of their hands due to injury or illness and are thus unable to use a computer equipped with traditional hardware. Some alternative interfaces have been developed using electroencephalograms or eye motion, but these systems require expensive hardware. We can also find HCI systems that are based on hand tracking and gesture recognition [4, 5, 6].
1.1 Our Approach
The algorithm of hands-free computer interaction is based on face position estimation and its further tracking. For real-time face detection a modified version of the algorithm proposed by Viola and Jones is used [3]. For face rotation and position estimation a modified Lucas-Kanade algorithm is adopted [7]. The system also makes it possible to simulate the left mouse button by detecting open lips.
Fig. 1. The approach details
2 Real Time Face Detection
The problem of face detection can be difficult to solve. Sometimes this task can be handled by extracting and recognizing certain image features, such as skin color [9], contours [10], spatial features or texture features [11]. There are also neural network approaches [8]. In some cases skin color detection would be enough for face segmentation and tracking (Fig. 2). One of the drawbacks of such a method is that the skin color varies when the lighting of the scene changes, which rapidly increases the false detection rate. Moreover, we have to deal with other objects that have skin color, such as hands. To handle this problem, additional information extraction techniques (such as edge detection) have to be adopted. Besides the standard image features mentioned above, we can also extract the Haar-like features introduced by Viola and Jones [3]. These features are indeed reminiscent of Haar basis functions. To compute them in
Fig. 2. A color-based face detector can often detect a hand as a face
Fig. 3. Examples of Haar-like features used for face detection
a relatively short time, we first have to change the image representation from pixel luminance values to an integral image. One of the advantages of Haar-like features is that they can then be computed at any scale or location in constant time.
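The constant-time evaluation rests on the integral image: once cumulative sums are precomputed, the sum of any rectangle takes four array look-ups, and a Haar-like feature is a difference of such rectangle sums. The fragment below is a minimal illustrative sketch, not the author's implementation; the two-rectangle feature layout is assumed only for the example.

```python
import numpy as np

def integral_image(img):
    # cumulative sums along both axes; ii[y, x] = sum of img[:y+1, :x+1]
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y, x, h, w):
    # sum of the rectangle with top-left corner (y, x) using four look-ups
    total = ii[y + h - 1, x + w - 1]
    if y > 0: total -= ii[y - 1, x + w - 1]
    if x > 0: total -= ii[y + h - 1, x - 1]
    if y > 0 and x > 0: total += ii[y - 1, x - 1]
    return total

def two_rect_haar(ii, y, x, h, w):
    # horizontal two-rectangle feature: left half minus right half
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

img = np.random.rand(24, 24)           # a toy 24x24 face window
ii = integral_image(img)
print(two_rect_haar(ii, 4, 4, 8, 12))  # constant-time evaluation at any scale
```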
3 Feature Vectors
Having a large set of Haar-like functions, we can build feature vectors for particular pictures. In this case an image data set of faces was created to obtain learning vectors of human faces.
Fig. 4. Examples of captured faces
Depending on the number of generated Haar-like functions, we may obtain relatively long feature vectors. This may dramatically slow down an application which has to process at least 15 frames per second. To handle this problem, we can choose only those features which play the most significant role in the vector. Strong features can be selected during the AdaBoost training process. In our case 81 Haar-like functions give a vector of that length; after boosting the number of features is reduced to 20.
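One possible way to realize such a selection is sketched below: boosting decision stumps over the Haar-feature columns and keeping the features actually used by the stumps. This is only an assumed, generic reading of the selection step; scikit-learn and the synthetic data are not part of the original work.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 81))                   # 81 Haar-like feature responses per sample
y = (X[:, 3] + 0.5 * X[:, 17] > 0).astype(int)   # synthetic face / non-face labels

# Each boosting round fits a depth-1 tree (a stump), i.e. it picks a single feature.
clf = AdaBoostClassifier(n_estimators=20).fit(X, y)
selected = sorted({stump.tree_.feature[0] for stump in clf.estimators_})
print(len(selected), selected)                   # at most 20 distinct strong features
```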
4 Building Classifier
To detect face occurrences, a classifier is built which uses three types of information: the distance to the nearest face feature vector from the learning data set, the output of the skin color classifier, and the face position in the previous frame:

h(D, P, C) = \begin{cases} 1 & \text{if } h_{dist}(D)\, h_{region}(P)\, h_{skin}(C) > 0 \\ 0 & \text{otherwise} \end{cases}    (1)
Fig. 5. Examples of correctly detected faces
Fig. 6. Examples of manually marked skin regions
where:
• D is the Euclidean distance to the nearest learning vector,
• P is the face position detected in the current frame,
• C is the vector of pixels (IRgBy color format),
• h_dist verifies whether the distance D is below the acceptance threshold,
• h_region verifies whether the currently detected face lies within the search window,
• h_skin verifies whether the vector of pixels C contains skin colors.
All classifiers are combined into a cascade to make the whole classification process less time consuming. First, the h_region classifier is run to make the search window smaller. The window is built from the detection result of the previous frame (the face position is taken). In the worst case, when no face was found in the previous iteration, the search region covers the whole image. When the search window is ready, h_dist is run to find the feature vector which is nearest to the learning set and whose Euclidean distance is lower than the system threshold. The marked area is then tested for the presence of skin color pixels. For this purpose a statistical skin color model was built from a learning data set. First, pictures containing skin and non-skin colors were collected. The skin areas were marked and extracted. The extracted pixels were converted from RGB to IRgBy and a 2D histogram was created:

I = (L(R) + L(B) + L(G)) / 3
Rg = L(R) − L(G)
By = L(B) − (L(G) + L(R)) / 2
L(x) = 105 \log(x + 1)    (2)
where R, G, B are the red, green and blue component values, respectively.
Fig. 7. 2D skin histogram and its approximation by single Gaussian distribution
Fig. 8. Examples of properly classified skin region
The final histogram of skin colors is normalized and approximated by a single Gaussian distribution, and the final skin color classifier is created:

h_{skin}(C) = \begin{cases} 1 & \text{if } p(C \mid C_{skin}) > \theta \\ 0 & \text{otherwise} \end{cases}    (3)

where p(C | C_skin) is the skin color distribution estimated by the single Gaussian model and θ is the detection threshold.
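The sketch below illustrates this skin model end to end: RGB pixels are converted to the IRgBy chromaticity pair of equation (2), a single 2D Gaussian is fitted to marked skin pixels, and equation (3) is applied as a likelihood threshold. It is an assumed reading of the method, not the author's code; the threshold value and the sample data are arbitrary.

```python
import numpy as np

def rgb_to_rgby(rgb):
    # equation (2): luminance-compensated chromaticities Rg and By
    L = lambda x: 105.0 * np.log(x + 1.0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([L(r) - L(g), L(b) - (L(g) + L(r)) / 2.0], axis=-1)

def fit_skin_gaussian(skin_rgb):
    # fit a single 2D Gaussian to chromaticities of manually marked skin pixels
    x = rgb_to_rgby(skin_rgb.astype(float)).reshape(-1, 2)
    return x.mean(axis=0), np.cov(x, rowvar=False)

def h_skin(rgb_pixels, mean, cov, theta=1e-4):
    # equation (3): 1 where the Gaussian likelihood exceeds the threshold theta
    x = rgb_to_rgby(rgb_pixels.astype(float)).reshape(-1, 2) - mean
    inv, det = np.linalg.inv(cov), np.linalg.det(cov)
    p = np.exp(-0.5 * np.sum(x @ inv * x, axis=1)) / (2 * np.pi * np.sqrt(det))
    return (p > theta).astype(np.uint8)

skin_samples = np.random.randint(120, 255, size=(500, 3))  # stand-in for marked skin pixels
mean, cov = fit_skin_gaussian(skin_samples)
frame = np.random.randint(0, 255, size=(4, 4, 3))
print(h_skin(frame, mean, cov).reshape(4, 4))
```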
5 Tracking Features
To estimate head rotations and movements in the video sequence, a variant of the Lucas-Kanade algorithm proposed by Bouguet [7] is used. The goal of this algorithm is to track selected features in the image sequence. Consider the point p(t) = [p_x(t)  p_y(t)]^T in the first image. We have to estimate its position in the second image, i.e. to find p(t + 1) = p(t) + d, the position of the point p in the current frame. The vector d is the image velocity and is also known as the optical flow:

e(d_x, d_y) = (I(t, x, y) − I(t + 1, x + d_x, y + d_y))^2    (4)
In other words, we have to minimize the error described by equation (4):

\frac{\partial I}{\partial x} d_x + \frac{\partial I}{\partial y} d_y = − \frac{\partial I}{\partial t}    (5)
The 2D motion constraint equation (5) has to be solved to find the vector d that minimizes the error e. Figure 9 shows the three features chosen to be tracked in the video sequence. For each of these points the vector d_i is computed.
Fig. 9. Feature tracking. Blue crosses indicate the tracked features, the red arrow their offset from the initial position, and the orange rectangle the cursor controlled by head movements.
d_{avg} = \frac{1}{N} \sum_{i=1}^{N} d_i    (6)
Then the center of mass d_avg (6) of all the displacement vectors is used to update the position of the cursor on the screen.
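A compact sketch of this tracking loop is given below, using OpenCV's pyramidal Lucas-Kanade implementation. It is an illustration under assumptions (camera index, feature parameters and the scaling gain are made up), not the author's code.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # assumed: default webcam
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# pick a few well-textured points inside the (assumed) detected face region
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=3, qualityLevel=0.01, minDistance=20)
anchor = pts.reshape(-1, 2).copy()             # initial positions of the tracked features
cursor = np.array([320.0, 240.0])              # cursor starts at the screen centre

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    if good.any():
        # equation (6): average displacement of all successfully tracked features
        d_avg = (new_pts.reshape(-1, 2)[good] - anchor[good]).mean(axis=0)
        cursor = np.array([320.0, 240.0]) + 2.0 * d_avg   # 2.0 is an arbitrary gain
    prev_gray, pts = gray, new_pts
    print(cursor)                              # here the on-screen cursor would be moved
```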
6 Simulating Mouse Button Input
To have full interaction with the computer, we need to simulate one of the mouse buttons. The lips were chosen to handle this issue. When the lips are wide open, this is equivalent to pressing the left mouse button; when the lips are closed, the button is released. Finding open lips is relatively easy once we assume that the face has already been detected. In this case a round, dark area in the face region is searched for.
Fig. 10. Open and closed lips trigger the "mouse pressed" and "mouse released" events. Blue points represent tracked features. The orange arrow represents the controlled cursor.
7 Conclusion
In this article a human-computer interaction system is presented. It is shown that hands-free computer control can be a good alternative for those who cannot use their hands and for those who are not allowed to touch the controlled device (for example, a computer behind the window of a computer store). The major contributions of this paper are a rapid face detector and a head rotation estimator based on optical flow.
References
1. Nickel, K., Seemann, E., Stiefelhagen, R.: 3D-Tracking of Head and Hands for Pointing Gesture Recognition in a Human-Robot Interaction Scenario (2004)
2. Hunke, M., Waibel, A.: Face Locating and Tracking for Human-Computer Interaction (1994)
3. Viola, P., Jones, M.: Robust Real-time Object Detection (2002)
4. Kölsch, M.: Vision Based Hand Gesture Interfaces for Wearable Computing and Virtual Environments. Ph.D. Dissertation (August 2004)
5. Kölsch, M., Turk, M.: Analysis of Rotational Robustness of Hand Detection with Viola&Jones Method. In: Proc. ICPR (2004)
6. Kölsch, M., Turk, M., Höllerer, T.: Vision-Based Interfaces for Mobility. In: Intl. Conference on Mobile and Ubiquitous Systems (MobiQuitous) (August 2004)
7. Bouguet, J.-Y.: Pyramidal Implementation of the Lucas Kanade Feature Tracker: Description of the Algorithm (1994)
8. Rowley, H.A., Baluja, S., Kanade, T.: Neural Network-Based Face Detection (1998)
9. Singh, S.K., Chauhan, D.S., Vatsa, M., Singh, R.: A Robust Skin Color Based Face Detection Algorithm (2003)
10. Suzuki, Y., Shibata, T.: An Edge-Based Face Detection Algorithm Robust Against Illumination, Focus and Scale Variations (2004)
11. Fan, L., Sung, K.K.: Face Detection and Pose Alignment Using Color, Shape and Texture Information (2000)
Detection of the Area Covered by Neural Stem Cells in Cultures Using Textural Segmentation and Morphological Watershed
Marcin Iwanowski¹ and Anna Korzynska²
¹ Institute of Control and Industrial Electronics, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warszawa, Poland, [email protected]
² Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences, ul. Ksiecia Trojdena 4, 02-109 Warszawa, Poland, [email protected]
Summary. Monitoring and evaluation of the dynamics of stem cell growth in culture is important in regenerative medicine as a tool for increasing a cell population to the size needed for a therapeutic procedure. In this paper an automatic segmentation method for cell images from a bright field microscope is proposed. It is based on textural segmentation and the morphological watershed. Textural segmentation aims at detecting image regions with intensive textural features, which refer to cells. Texture features are detected using a local mean absolute deviation measure. The final, precise segmentation is achieved by means of the morphological watershed on the gradient image modified by the imposition of minima derived from the result of the rough segmentation. The proposed scheme can be applied to segment other images containing objects characterized by their texture located on a uniform background.
1 Introduction
Stem cells are a potential source of cells for regenerative medicine [1]. Monitoring and analysis of culture quantity is crucial for reliable optimization of culturing methods, in the sense of quickly increasing the cell population to the size and cell differentiation stage needed to reach any therapeutic result [5]. Evaluation of cell cultures with the human eye and culture examination supported by immunohistochemistry and flow cytometry [28] are typically used in laboratories as the fastest methods. There is an idea of continuous monitoring of culture condition using computer-supported microscopy [15]. It allows the speed and direction of culture development to be detected at any time of culture growth. Since automation of cell culture observation and evaluation is a complicated process, the goal will be achieved in several steps. One of the problems which should be solved is to find a method of automatic cell counting in microscopic images acquired in constant time increments. Towards a future complete method of cell counting, this paper presents a method of detecting the area covered
by neural stem cells in culture. Based on the results of the proposed method, the segmented areas should be divided into pieces covering single cell regions or - in confusing cases - their size should be estimated according to the average size of a cell in the area of interest [14]. The paper presents a method for segmentation of cell images based on textural segmentation and the morphological watershed. Textural segmentation aims at detecting regions with intensive textural features within the image, which refer to a single cell or to a group of cells surrounded by the uniform background. Texture features are detected using the local mean absolute deviation measure. Loci of pixels with a high value of the textural feature are considered as the rough segmentation of cells. The precise segmentation is achieved by means of the morphological watershed on the modified gradient image. The modification of the gradient is performed by the imposition of minima referring to the result of the rough segmentation. The method produces a precise segmentation result being the outline of stem cells. The paper is organized as follows. Section 2 contains an introduction to the subject consisting of a literature review and characteristics of the images. Section 3 describes the proposed method. In Section 4, the results are presented, and finally Section 5 concludes the paper.
2 Cell Image Processing
2.1 Related Works
There are some applications which offer offline counting of cells [7, 11, 7] and cell image segmentation [13]. This approach is used in whole-scanning cytometry software [11]. It is assumed that in very difficult cases human evaluation and intervention are taken into account. In the case of continuous monitoring a fully automatic method is required. Fully automatic methods were developed for particle detection in cryo-electron microscopy. The problem of the localization of particles is similar to the problem of localization of cells in microscopic images, but much easier because of the particles' homogeneous morphology. The proposed methods identify particles based on intensity or texture comparison, on cross-correlation template matching [24], or on neural networks [19]. The authors of a review of these methods [19] come to the conclusion that none of the presented methods alone is good enough. Consequently, their successors have used hybrid methods [4, 13], supported by extra information in the local average intensity method [12], cross-correlation with pruning of the list of candidates [18], or agent-based systems [3] in which agents perform their actions locally. For a long time the problem of segmentation of microscopic images of living cells has been solved by detection of edges or regions [1], which has produced results accurate enough only for the simplest images of low density cell samples, where all cells lie far from one another and are surrounded by
the background. Nowadays there are two directions of progress in this field:
• using prior knowledge and models [18],
• employing the cooperation between methods and their adaptation to the local situations in the image [13].
Within the first direction of investigation, a cell with homogeneous morphology, for example one of the rounded cells in suspension, can have an ambiguous model of its shape. Sometimes extra information about the cell shape, coming from another cell imaging modality, can be used, e.g. epifluorescence of cells transfected by cytoplasmic or membrane markers [16]. Within the second direction there are two methods: the combined texture and edge based method, proposed by one of the authors and her collaborators [13], and the cooperative system [29]. Summarizing, detection of the cells' area is based on:
• color [12]: nucleus, cell membrane or cytoplasm with fluorescent stained samples;
• texture [13, 14]: identification of nuclei by characterizing the chromatin structure; enumeration and detection of proliferating bone marrow progenitors, cancer cells, neutrophils or artifacts in smears;
• intensity or edges, supported by shape [12].
2.2 Characteristics of the Image Sequences and Cells in Images
The problem of microscopic image quality is rarely addressed but is very important for segmentation of this type of images. Various types of microscopic techniques are used in observation of living cells: conventional light microscopy (bright field, phase contrast, dark field or Nomarski's contrast) or confocal microscopy, because of their insignificant influence on cell behavior [9, 20]. The transparent cell is a poorly contrasted object on the grey background in all types of cell microscopic imaging. In this paper conventional bright-field microscopy is chosen to observe neural stem cells in culture, so this type of images will be under investigation. The quality of these images depends on the acquisition conditions, which vary from one series of images to another and from one image in a series to another. In particular, it depends on the illumination at the given moment, on the internal inhomogeneous light distribution in the microscope optics, and on the focus plane position, which is unstable in long-term observation. There are some features which characterize all the images. The main features are low dynamics in grey levels and low contrast between the object and the background. The grey levels observed in the images do not cover the full range of greytones of the 12-bit deep images. Furthermore, grey levels observed in the background are also observed in the cells' region, but among darker and brighter patches or dots. Despite the fact that fluctuations of grey levels are observed both within cell regions and within the background, the region covered by the cells is distinguishable by the range and standard deviation
Fig. 1. Several examples of cell morphology: the bottom-left cells are small, the upper cells are large, the bottom-right cells are elongated, in the transitional stage. The image is composed of several cell image fragments, which were processed by histogram manipulation to achieve the dynamics and contrast needed to be visible in the printed version. A detailed description of the cell morphology is given in the text.
of the variations of grey levels. This is due to greytone variations caused by noise and non-homogeneity in the light distribution, observed in the background, which are damped by the varying light transmission through organelles inside a cell. The cell population observed in culture is biologically homogeneous, while the cells' morphology is not. All types of cell morphology are presented in Fig. 1. Cells differ in mean grey level and in texture properties. They differ significantly in size among themselves. A flattened cell can cover an area up to 10 times larger in comparison with a converged one (Fig. 1, cells C and D). They also differ significantly in shape. Some cells are rounded and smooth, while others have a rough, "restless" surface, forming protrusions called lamellae, filopodia or uropods (Fig. 1, cells E and B). There are a lot of cells with shape and size somewhere between the two extreme stages. They are described as transitional or elongated cells (Fig. 1, cells G and F). There are other features of cells of various morphology, which make up the specific texture and which are employed in the proposed method of cell area detection:
Fig. 2. Example of the input image
• the visibility of the cells' nuclei with nucleoli and small structures around them - for the flattened cells (Fig. 1, cells A and C) and sometimes for large transitional cells;
• the presence of cytoplasmic extensions of the cell, which are poorly contrasted with the background but visible in most cases because of the halo around them - for flattened cells and cells in the transitional state (Fig. 1, cells F and B);
• the ectoplasm, the external part of the cytoplasm, is visible as an almost clean area, slightly darker than the background, surrounded by a few irregularly and occasionally dispersed dots (cell organelles) - for the flattened cells (Fig. 1, cell A);
• the cytoplasm is darker than the background, with irregularly but closely and densely dispersed brighter and darker patches - for the convergent cells (Fig. 1, cell D);
• because of the halo, the background around the cells is brighter than the background placed farther from the cell area; the brightness of the halo is correlated with the cell's size in the Z-axis. The converged cells, and sometimes transitional cells or their parts and their pseudopodia, are surrounded by a halo (Fig. 1, cells E and G).
3 Proposed Segmentation Method
In the current section the proposed approach is briefly described. Each of the two principal steps of the approach is explained in more detail in the following two subsections. Input images of stem cells are characterized by low contrast - the greytones of their pixels do not stretch along the entire grayscale. In order to improve the contrast, a histogram stretching operation is performed first.
Stem cells are characterized by their texture. Elements of this texture - pixels or groups of pixels - are both darker and lighter than the background, which makes segmentation by thresholding impossible. Segmentation must thus be performed starting from a texture measure that reflects the level of texturing all over the image. This texture measure is computed as the local mean absolute deviation calculated using averaging filters. These linear filters use a convolution mask covering the pixel's neighborhood of a given size. In addition, the result of the linear filtering is filtered by means of morphological filters in order to remove some point-wise noise. Such an image is characterized by higher greytone values within the area covered by cells and lower ones in the background. Contrary to the initial input image, however, this one can be thresholded in order to obtain a binary mask covering the area occupied by cell bodies. This mask suffers from a lack of accuracy, due to the fact that the mean absolute deviation image is blurred by the linear filtering involved. Despite the above-mentioned lack of accuracy, the obtained mask can be used as a marker depicting the position and rough area of the cells. The precise outline of the cells will be obtained using the morphological watershed transformation of the gradient of the input image. Since the result of the watershed transformation depends strongly on the number and location of the regional minima of the gradient, their distribution is first modified using the minima imposition procedure. This morphological operation forces the creation of regional minima of the gradient of the input image so that the result of the watershed is as correct as possible. The position and shape of the minima are obtained from the binary mask obtained previously from the mean absolute deviation measure. Finally, the computation of the watershed is performed on the modified gradient image. The two principal steps of the proposed segmentation scheme are described below.
3.1 Textural Rough Segmentation
Image Preprocessing
The acquired images are characterized by relatively low contrast. In order to improve their visibility, the classic contrast stretching method is used:

f'(p) = \begin{cases} 0 & \text{if } f(p) < t_{low} \\ \frac{f(p) - t_{low}}{t_{hi} - t_{low}} \cdot maxval & \text{if } t_{low} \le f(p) \le t_{hi} \\ maxval & \text{if } f(p) > t_{hi} \end{cases}    (1)

where f and f' stand for the initial and stretched image respectively, and maxval is the maximum pixel value. The parameters t_{low} and t_{hi} stand for the lower and higher thresholds. The thresholds are computed on the basis of the histogram of the input image using the following equations:
t_{low} = \min \Big\{ t : \sum_{i=0}^{t} h_i(f) > \alpha \Big\}    (2)

t_{hi} = \max \Big\{ t : \sum_{i=t}^{maxval} h_i(f) > \alpha \Big\}    (3)
where h_i(f) stands for the number of pixels of grayvalue i in the image f. The parameter α represents the cut level and is set manually.
Mean Absolute Deviation
The areas which refer to cells differ from the image background by the intensity of texture. The background is characterized by an almost homogeneous graylevel, while pixels belonging to cells are strongly textured. Due to the latter, thresholding of the input image would not produce valuable segmentation results. There exists a variety of statistical factors that may be used to measure the texture. The classic one is the standard deviation. The measure chosen in the current study is the mean absolute deviation (MAD), which is defined as follows:

D = \frac{1}{n} \sum_{i=1}^{n} |x_i - M| \, ; \qquad M = \frac{1}{n} \sum_{i=1}^{n} x_i    (4)
where M stands for the mean value, D for the mean absolute deviation, and n for the number of samples x_i. In the case of digital images a local version can be formulated. It computes both the mean value and the MAD within a local pixel neighborhood. This neighborhood is defined in the same way as in the case of the averaging convolution filter:

g = f * B \iff \forall p \;\; g(p) = \sum_{b \in B} f(p - b) \cdot q(b)    (5)
where g stands for the output image (the result of filtering), f for the input image, and B for the filter mask. The mask B in the case of the averaging filter is defined as:

B = \frac{1}{n^2} \cdot 1_{n \times n}    (6)

where the odd number n stands for the size of the mask and 1_{n×n} is a matrix of size n × n with all its elements equal to 1. The element b_{(n−1)/2,(n−1)/2} of the matrix B is considered as the origin of the coordinate system associated with the elements of the mask. Finally, the entire formula of the local MAD filter can be expressed in terms of the averaging filter used twice:
Fig. 3. Cell area on the input image (a), result of MAD computation (b), result of morphological filtering (c) and its thresholding (d)
g = |f − f * B| * B    (7)
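Equation (7) translates directly into two passes of an averaging filter, as in the sketch below. It is an illustrative fragment only; the filter size, the toy image and the use of SciPy are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_mad(image, size=7):
    """Local mean absolute deviation, eq. (7): g = |f - f*B| * B,
    where B is an n x n averaging mask (here realized by uniform_filter)."""
    f = image.astype(float)
    local_mean = uniform_filter(f, size=size)               # f * B
    return uniform_filter(np.abs(f - local_mean), size=size)

# toy image: flat background with a 'textured' square in the middle
img = np.zeros((64, 64))
img[20:44, 20:44] = np.random.rand(24, 24)
mad = local_mad(img, size=7)
mask = mad > 0.05            # rough binary mask of the textured (cell-like) region
print(mask.sum(), mad.max())
```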
The result of applying the MAD filter to the sample image from Fig. 3(a) is shown in Fig. 3(b).
Morphological Filtering
The result of MAD reflects the presence of textured areas referring to cells, but on the other hand it contains a lot of noise. In order to remove this noise, the morphological opening by reconstruction [23, 21] is used. This filter removes small objects lighter than the background without modifying the shape of the remaining image parts and is defined as:

\gamma^{rec}_{B_1}(f) = R^{\delta}_{f}(\varepsilon_{B_1}(f))    (8)
where ε_{B_1}(f) stands for the erosion of the image f with the structuring element B_1, and R^δ_f for the morphological reconstruction by dilation with the mask equal to the image f and the marker image represented by its argument. The result of morphological filtering of the image from Fig. 3(b) is shown in Fig. 3(c).
Thresholding
The result of MAD filtering, filtered using the morphological opening by reconstruction, is a graytone image in which the image parts reflecting the cell regions are characterized by higher, relatively uniform greytones, while the background consists of pixels of low greytone value. In order to get the final result of the rough textural segmentation, this image is thresholded at a given level t, and in this way a binary image is produced. It reflects well the presence of cells in particular regions of the initial image, but the boundaries of the cells are not precise enough (which is due to the linear approach in the MAD filter, resulting in some blurring). In order to get a fine segmentation, the morphological watershed will be computed. The result of the rough textural segmentation will be used to produce the markers necessary for obtaining a correct watershed line. The result of thresholding is shown in Fig. 3(d).
3.2 Morphological Fine Segmentation
Gradient Computation
Since the watershed should be computed on a gradient image, the gradient of the input image must be obtained first. There is, however, an important property of the input images that requires a special type of gradient. This property refers to the halo effect which is visible around cells. On the pixel scale this effect is a non-continuous, dark, up to 5-pixel thick boundary which separates the uniform background and the textured cell body. Commonly used gradient detectors (e.g. the linear Prewitt and Sobel operators) detect changes of intensity within the closest pixel neighborhood. Using such a gradient in our case would result in a double gradient peak describing the cell boundary: the first peak referring to the boundary between the halo and the background, and the second to the boundary between the halo and the cell body. This would not be a desirable effect, because what is needed is a single peak describing the boundary, on top of which the watershed line can be constructed. A solution to this problem requires the application of a morphological thick gradient [26], in this particular case based on the dilation operator:

g = \delta_{B_2}(f) - f    (9)
where B_2 stands for the structuring element, which in the case of the thick gradient covers a wider pixel neighborhood (not only the closest neighbors as in the classic morphological gradient). The structuring element B_2 used in the thick gradient should be chosen as the smallest one such that dilation with it removes all the halo effects.
Mask for Minima Imposition
Minima imposition is a tool which modifies the gradient in order to force the presence of regional image minima (crucial for a correct result of the watershed) in given image regions. This operation is based on morphological reconstruction; it flattens all the regional minima which are not marked by a supplementary binary image - a marker image - and produces new minima in all regions which are marked by the pixels of the marker image with value 1. It is an obvious supplement of the watershed transformation, where the marker image is created separately in such a way that each marker points either at a particular object present in the input image or at the background. Thus, two kinds of markers are possible: inner markers, which indicate objects (cells in our case), and outer markers, which point at the background area. Minima imposition with such markers guarantees that the watershed line will be located inside the area between the inner and outer markers. In the case of the cell detection task described in the current paper, the markers are created based on the binary result of the rough textural segmentation. The thresholded, filtered local MAD, being the result of the algorithm
Fig. 4. Inner markers (a), outer markers (b) and the union of both (c)
described in the previous section, is further processed to get the markers. At first, this image is processed using two consecutive morphological operators. The first of them - closing with the structuring element B_3 - merges disconnected particles of the mask. The second one - hole filling - fills all the holes inside these particles. Thanks to both operators, the binary mask covers the cell area better. In order to produce the inner markers, the result of both operators described above is eroded with the structuring element B_4. The erosion is necessary to leave some free space inside which the watershed line can be produced. The inner markers are thus located inside the cell bodies. The goal of producing the outer markers is to get regions which are located outside the cells, within the background area. They are produced by another erosion, but this time of the negative of the mask¹ (the thresholded and filtered MAD). The structuring element used by this erosion is B_5. The markers produced from the image shown in Fig. 3(d) are shown in Fig. 4(a) (inner markers) and Fig. 4(b) (outer markers). The union of the inner and outer markers is shown in Fig. 4(c).
Watershed Computation
The goal of the morphological watershed segmentation is to improve the result of the rough textural segmentation in order to get fine, one-pixel-thick cell boundaries. Morphological watershed segmentation aims at producing a binary image that consists of lines following precisely the boundaries of the objects present in the input graytone image. The origin of the method lies in an idea well known in geography and refers to a line that separates the catchment basins of particular water bodies. This idea was adapted to image processing [26] and has been successfully applied in this domain for many years. This adaptation was made through the assumption that a graytone image can be treated as a physical map of a hypothetical terrain in which the watershed lines can be created around the local minima present there. The typical way of applying morphological watersheds to image segmentation makes use of the gradient image as
¹ Alternatively, instead of erosions, homotopic thinning with a given number of iterations can be applied here. Such an approach would produce inner markers homotopic to the original mask, and outer markers homotopic to its background.
Fig. 5. Gradient of the input image (a), gradient with imposed minima (b), result of watershed segmentation (c), result of segmentation superimposed on the input image (d)
the input for the watershed computation. In this way the watershed line goes along all the crest lines of the gradient. Such a line also goes around all the regional minima of the gradient image. Due to the latter, the quality of the segmentation depends strongly on the number of regional minima of the gradient image. Too many regional minima result in oversegmentation. In order to get a proper segmentation, the number of minima must be reduced in such a way that every single minimum refers to one object present in the original image. All unwanted minima should thus be removed before the watershed segmentation starts to run. The operation allowing the removal of unwanted minima is called minima imposition and is based on morphological reconstruction [27, 26]. This operation also allows new image minima to be forced in order to improve the result of the watershed segmentation. In the case of cell image segmentation, the union of the inner and outer markers described in the previous section is used to perform minima imposition on the thick gradient image. On the gradient modified in this way the watershed transform is applied. Its result contains one-pixel thick lines - the precise outline of the cells. In Figure 5 the following images are shown: the gradient image (a), the gradient with imposed minima (b), its watershed (c) and the watershed line superimposed on the input image (d).
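The whole fine-segmentation step can be sketched with scikit-image as below. Passing a labelled marker image to the watershed plays the role of the minima imposition described above; the structuring-element sizes, the threshold and the toy data are placeholders, so this is only an assumed reading of the pipeline, not the authors' code.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.morphology import disk, dilation, binary_erosion, binary_closing
from skimage.segmentation import watershed

def fine_segmentation(image, mad_mask):
    """image: greyscale input; mad_mask: binary result of the rough textural step."""
    # thick gradient, eq. (9): dilation with a larger disk minus the original image
    gradient = dilation(image, disk(5)) - image

    # inner markers: close and fill the rough mask, then erode it
    # to leave free space in which the watershed line can be drawn
    filled = binary_fill_holes(binary_closing(mad_mask, disk(7)))
    inner = binary_erosion(filled, disk(10))
    # outer markers: erosion of the negative of the mask (background regions)
    outer = binary_erosion(~filled, disk(20))

    markers = np.zeros(image.shape, dtype=np.int32)
    markers[outer] = 1                      # background label
    markers[inner] = 2                      # cell label
    # the labelled markers play the role of the imposed minima of the gradient
    labels = watershed(gradient, markers)
    return labels == 2                      # area covered by cells

img = np.random.rand(128, 128)
rough = np.zeros((128, 128), dtype=bool)
rough[40:90, 40:90] = True                  # stand-in for the thresholded MAD mask
print(fine_segmentation(img, rough).sum())
```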
4 Results
Three image sequences of the behavior of neural stem cells from the HUCB-NS line, established in the NeuroRepair Department laboratory, MRI, PAS, were acquired. Cell samples were documented and monitored using the Cell Behavior Monitor System, IBBE PAS, which is based on an inverted optical microscope OLYMPUS IX70 and a PC computer with ScopePro (Media Cybernetics), together with supporting equipment to control the shutters and the Z-axis motorized stage, and a monochromatic CoolSnap camera.
Fig. 6. Result of segmentation superimposed on the input image (shown in Fig. 2) - (a), and failures in segmentation - (b)
Cells were seeded a few hours before observation with a density of 50 000 cells/ml in standard conditions (37°C, 5% CO2 and 95% humidity) with low serum addition. The greyscale, 12-bit deep images of 1392×1040 pixels were acquired every 1-10 minutes for up to 2 hours. They document one chosen observation plane which covers only a 220μm × 165μm area of the microscopic slide at 40× magnification. From about 45 images, 6 were chosen randomly (two from each sequence) to perform the evaluation of the proposed method. The results of the image segmentation are presented in Fig. 6.
Table 1. Parameters used in the experiments

operation                                              parameter
preprocessing (contrast stretching)                    α = 300
MAD filter                                             n = 7
filtering of MAD image (opening by reconstruction)     B_1 is a disk of radius 7
thresholding (0 ≤ t ≤ 1)                               t = 0.0151
thick gradient                                         B_2 is a disk of radius 5
binary mask closing                                    B_3 is a disk of radius 7
erosion of mask (internal markers)                     B_4 is a disk of radius 10
erosion of negative mask (external markers)            B_5 is a disk of radius 20
The method requires a couple of parameters which have to be set up depending on the resolution of the input images. Table 1 shows the parameters for images as described above.
5 Conclusions
The paper presented a method for segmentation of cell images based on textural segmentation and the morphological watershed. Textural segmentation aims at detecting, within the image, regions with intensive textural features referring to cells. Texture features are detected using the local mean absolute deviation measure. Loci of pixels with a high value of the textural feature are considered as the rough segmentation of cells. The precise segmentation is achieved by means of the morphological watershed on the modified gradient image. The modification of the gradient is performed by the imposition of minima referring to the result of the rough segmentation. The method produces correct and precise segmentation results in about 90% of the area covered by cells. About 10% of the selected area does not cover the cell area but the background. False segmentation results appear (1) because of including the area between cells or between cell protrusions and filopodia, and (2) because of treating small objects (e.g. cell fragments) as cells. These areas are shown in Fig. 6(b) as white drops. There are also some parts of cells or cell protrusions excluded from the selected area, encircled in Fig. 6(b) by a grey line. This false result appears because of the very low contrast of the edges of a single cell placed in the bottom-left part of the image presented in Fig. 2 and Fig. 6. Both types of errors of the proposed method are acceptable, because the reduction and the expansion of the selected area are random and, according to the authors' experience with the chosen images, the scale of increase is equal to the scale of decrease. It can also be observed that the proposed method produces false edges found in the background along small structures placed close to the cells, which sometimes appear in a sample because of the cell fragment detachment process. This constitutes a more serious failure of the proposed method and should be reduced both at the cell culturing step and at the stage of the textural map analysis. These types of failures are shown in Fig. 6(b), encircled by two small grey lines. All the presented failures of the proposed method appear rarely and do not change the size of the selected area dramatically. The selection of the area covered by the cells is the first step in estimating the number of cells, a procedure performed with a certain degree of uncertainty [14, 16]. The error of the selection of the area covered by the cells in the proposed method seems acceptable. It will be estimated in future investigations by comparison with the results of other methods, e.g. the manual method of cell area selection done by a few experienced operators. The proposed scheme can be applied to segment other images containing objects characterized by their texture located on a uniform background.
Acknowledgment
This work was supported by the Ministry of Science and Higher Education of Poland, project no. 3911/B/T02/2008/34. We are grateful for the neural stem cell line HUCB-NS from the NeuroRepair Dpt. Lab., MRI, PAS.
References
1. Alberts, B., Bray, D., Lewis, J., Raff, M., Roberts, K., Watson, J.D.: Molecular Biology of the Cell, 3rd edn. Garland Publishing Inc., New York (1994)
2. Bellet, F., Salotti, J.M., Garbay, C.: Une approche opportuniste et coopérative pour la vision de bas niveau. Traitement du Signal 12(5), 479–494 (1995)
3. Boucher, A., Doisy, A., Ronot, X., Garbay, C.: Cell Migration Analysis After in Vitro Wounding Injury with a Multi-Agent Approach. Artificial Intelligence Review 12, 137–162 (1998)
4. Boier Marti, I.M., Martineus, D.C., et al.: Identification of spherical virus particles in digitized images of entire electron micrographs. Journal of Structural Biology 120, 146–157 (2005)
5. Buzanska, L., Jurga, M., Stachowiak, E.K., Stachowiak, M.K., Domanska-Janik, K.: Stem Cells and Development 15, 391–406 (2006)
6. Cocquerez, J.P., Philipp, S.: Analyse d'images: filtrage et segmentation. Masson (1995)
7. Comaniciu, D., Meer, P.: Cell image segmentation for diagnostic pathology. In: Suri, J.S., Setarehdan, S.K., Singh, S. (eds.) Advanced algorithmic approaches to medical image segmentation: state-of-the-art application in cardiology, neurology, mammography and pathology, pp. 541–558 (2001)
8. Frank, J., Radermacher, M., et al.: Spider and web: Processing and visualization of images in 3D electron microscopy and related fields. Journal of Structural Biology 116, 190–199 (1996)
9. Goldman, R.D., Spector, D.L.: Live Cell Imaging. A Laboratory Manual. CSHL Press, New York (2005)
10. Iwanowski, M.: Binary Shape Characterization Using Morphological Boundary Class Discrimination Functions. In: Kurzynski, M., Puchala, E., Wozniak, M., Zolnierek, A. (eds.) Computer Recognition Systems, pp. 303–312. Springer, Heidelberg (2007)
11. Jiang, K., Liao, Q.M., Dai, S.Y.: A novel white blood cell segmentation scheme using scale-space filtering and watershed clustering. In: Proc. Int. Conf. on Machine Learning and Cybernetics, vol. 5, pp. 2820–2825 (2003)
12. Kivioja, T., Ravantti, J., et al.: Local average intensity-based method for identifying spherical particles in electron micrographs. Journal of Structural Biology 131, 126–134 (2000)
13. Korzynska, A., Strojny, W., Hoppe, A., Wertheim, D., Hoser, P.: Segmentation of microscope images of living cells. Pattern Anal. Applic. 10, 301–319 (2007)
14. Korzynska, A.: Automatic Counting of Neural Stem Cells Growing in Cultures. In: Kurzynski, M., Puchala, E., Wozniak, M., Zolnierek, A. (eds.) Computer Recognition Systems, pp. 604–612. Springer, Heidelberg (2007)
15. Korzynska, A., Iwanowski, M.: Detection of Mitotic Cell Fraction in Neural Stem Cells in Culture. In: Pietka, E., Kawa, J. (eds.) Information Technologies in Biomedicine. Advances in Soft Computing, vol. 47, pp. 365–376. Springer, Heidelberg (2008)
16. Korzynska, A., Dobrowolska, E., Zychowicz, M., Hoser, P.: Examination of the microscopic imaging of the neural stem cells for counting a number of cells. Abstracts of 9th International Symposium: Molecular basis of pathology and therapy in neurological disorders, p. 66 (2008)
17. Ludtke, J., Baldwin, P., Chiu, W.: EMAN: Semiautomated software for high-resolution single-particle reconstruction. J. of Struct. Biol. 128, 82–97 (1999)
18. Miroslaw, L., Chorazyczewski, A., Buchholz, F., Kittler, R.: Correlation-based Method for Automatic Mitotic Cell Detection in Phase Contrast Microscopy. In: Kurzynski, M., Puchala, E., Wozniak, M., Zolnierek, A. (eds.) Computer Recognition Systems, pp. 627–634. Springer, Heidelberg (2005)
19. Nicholson, W.V., Glaeser, R.M.: Review: Automatic particle detection in electron microscopy. Journal of Structural Biology 133, 90–101 (2001)
20. Periasamy, A.: Methods in Cellular Imaging. Oxford University Press, Oxford (2001)
21. Serra, J., Vincent, L.: An overview of morphological filtering. Circuits, Systems and Signal Processing 11(1) (1992)
22. Serra, J.: Image analysis and mathematical morphology, vol. 1. Academic Press, London (1983)
23. Serra, J.: Image analysis and mathematical morphology, vol. 2. Academic Press, London (1988)
24. Smereka, M.: Detection of ellipsoidal shapes using contour grouping. In: Kurzynski, M., Puchala, E., Wozniak, M., Zolnierek, A. (eds.) Computer Recognition Systems, pp. 443–450. Springer, Heidelberg (2005)
25. Sinha, N., Ramakrishnan, A.G.: Automation of differential blood count. In: Proc. Conf. on Convergent Technologies for Asia-Pacific Region, vol. 2, pp. 547–551 (2003)
26. Soille, P.: Morphological image analysis. Springer, Heidelberg (2002)
27. Vincent, L.: Morphological Grayscale Reconstruction in Image Analysis: Applications and Efficient Algorithms. IEEE Trans. on Image Processing 2(2) (1993)
28. Yogesan, K., Jorgensen, T., Albregtsen, F., et al.: Cytometry 24, 268–276 (1996)
29. Zama, N., Katow, H.: A method of quantitative analysis of cell migration using a computerized time-lapse videomicroscopy. Zool. Sci. 5, 53–60 (1988)
Decision Support System for Assessment of Enterprise Competence
Jan Andreasik
Zamość University of Management and Administration, Akademicka Str. 4, 22-400 Zamość, Poland
[email protected]
Summary. The paper describes the main elements of a decision support system (DSS) for positioning enterprises according to an assessment of their competences. An enterprise model created according to an original conception of defining features is presented. In the author's conception, it is assumed that the enterprise competence system is the object which is assessed. The system is assessed from two points of view: assessment of the competence potential and assessment of the competence gap. A procedure for making assessments in the ranges determined by the AHP (Analytic Hierarchy Process) method is depicted. A procedure for explanation of the enterprise position, based on an adaptation of the EUCLID and ELECTRE TRI methods, is given. Results of research on the SME sector in south-east Poland are presented. These results lead to distinguishing four classes of assessment of enterprise positions.
1 Introduction
Determining the actual condition of an enterprise is a big problem for managers. Fast changes of the macroeconomic environment, the political game, and also the game of different key stakeholders force managers to analyze the situation and search for description methods of an enterprise which answer these problems in real time. There exist new theories of the enterprise. A. Noga [1] reviews twenty-seven theories of the enterprise and adds a new one as the twenty-eighth theory. In several conceptions, enterprise competence is regarded as a basis for creating the theory. G. Hamel and C.K. Prahalad [2] defined competences as characteristics of competitiveness encompassing a whole set of abilities and technologies. M. Harzallach, G. Berio and F. Vernadat [3] presented the Competency Resource Aspect Individual (CRAI) model. In this model, they distinguished three types of competences: individual competences, team competences, and enterprise competences. The enterprise competences are understood as macro-competences, being an aggregation of competences oriented toward leadership in the range of products and services. Competences encompass three categories of resources (C-resources): knowledge, technologies (know-how), and individual behaviors (abilities, talents, experiences). The Unified
Enterprise Competence Modeling Language (UECML) [4] is the most formalized approach to enterprise analysis from the point of view of competence. The authors of this conception [4] define competences required for the realization of specific activities (technological operations). The competences are identified in relation to services of processes and technological operations made by personnel by means of adequately assigned material and immaterial resources. The UECML is captured in the specification framework of the UML language. Using the conception of index identification of the intellectual capital of an enterprise, Y. Jussupowa-Mariethoz and A.R. Probst [5] worked out an ontology for monitoring enterprise competence. The author of this paper uses philosophical conceptions proposed by J.M. Bocheński [6] and R. Ingarden [7]. In Bocheński's conception, an enterprise is defined as a system consisting of six elements. The internal elements are: capital, labor, and invention. The external elements are: client, region, and state. R. Ingarden presented an extensive theory of the object. An individual object is represented by a subject of properties. Assigning a property determines a state of things. There can be positive and negative states of things. Each property has its material endowment. In Section 2 the author shows a definition of an enterprise according to the conceptual apparatus of Ingarden's theory of the object.
2 Enterprise Model
Assessment of enterprise condition requires considering a new enterprise model, which enables experts to assess individual features of given characteristics. According to the conceptual apparatus of Ingarden's theory of the object, the author proposes the following algorithm for modeling an enterprise.
Step 1: Defining a Feature Subject. In the author's conception, it is assumed that the subject of assessment is the enterprise competence system. This system is assessed from two points of view: assessment of the competence potential and assessment of the competence gap in the context of the risk of the enterprise activity. Such a distinction refers to distinguishing positive and negative states of things in Ingarden's theory of the object. Moreover, in the modern theory of management there exist conceptions of assessment of enterprise potential in terms of intellectual capital. One of the most developed conceptions used in the construction of systems with a knowledge base is the Balanced Scorecard (BSC) method [8]. On the other hand, there exist conceptions concerning enterprise assessment in terms of its activity risk. Therefore, considering the enterprise condition in two dimensions - assessment of the competence potential and assessment of the competence gap - is justified. Below, a system for defining features of the enterprise competence system is presented.
<subject> := <Enterprise Competence System ECS>
<ECS> := <competence potential>, <competence gap>
According to the definition of an enterprise given by J.M. Bocheński, the author proposes to distinguish five spaces of assessment of the enterprise competence system:
<competence potential> := <…> | <…> | <…> | <…> | <environment potential>
<competence gap> := <… range of risk of project realization> | <… range of risk of key stakeholder requirements> | <… range of risk from neighborhood> | <… range of risk from environment> | <…>
In [16], the author presents lists of the individual components concerning the competence potential and the competence gap. A three-level taxonomy of the enterprise competence system was created: level I - type T_i, level II - kind K_j, level III - range R_k.
Step 2: Defining Features. In Ingarden's formal ontology, each feature has a so-called material endowment. The author of this paper assumes that the subject of the property (the competence system) will be characterized by enterprise competence assessed in every range of the competence potential and the competence gap.
<enterprise competence> := <possibility of using employee resources>, <…>, <…>, <experience of employee resources>, <…>
<possibility of using employee resources> := <…>, <partly available own resources>, <…>, <partly available outside resources>, <…>
<…> := <…>, <…>, <…>, <…>, <decision making>, <…>, <solution searching>, <articulatity>
<experience of employee resources> := <existence of successes>, <…>, <…>
<…> := <procedures>, <methods and processes>, <procedures of machine and device services>, <…>, <…>
Each element of the above-mentioned definition of competence is treated as a criterion in a comparative analysis between the actual enterprise competence and the target enterprise competence determined in the strategy, or on the basis of comparison with the competence of a competitive enterprise or of an enterprise being a leader. The best-suited method for such a comparison by experts is the AHP (Analytic Hierarchy Process) method [9]. This method is often used in modern Knowledge Based Systems (KBS) [10, 11]. In this method, a nine-point scale is used. In the presented system the following interpretation of grades is introduced: 1 - comparable competences, 3 - small incomparability, 5 - medium incomparability, 7 - big difference, 9 - very big difference of competences; 2, 4, 6, 8 - intermediate values between the above. This interpretation constitutes a set of features which reflect relations between the competence of a given enterprise at the moment of making the assessment and the target competence or the competences of competitive enterprises.
Step 3: Assessment of Enterprise Condition. For each range R^P_{ijk} of the competence potential and each range R^R_{ijk} of the competence gap, the expert makes a comparative assessment using the AHP method, for example by means of the EXPERT CHOICE system [12]. The definitions given below determine the numerical values of the assessments.
Definition 1. An assessment of the enterprise competence in the specific range of potential R^P_{ijk} is calculated on the basis of the actual state estimation (AS) and the target state estimation (TS) obtained using the AHP method, according to the formula: c(R^P_{ijk}) = 1 − (assessment_{TS}^P − assessment_{AS}^P).
Definition 2. An assessment of the enterprise competence gap in the specific range of risk R^R_{ijk} is calculated on the basis of the actual state estimation (AS) and the permissible state (PS) obtained using the AHP method, according to the formula: c(R^R_{ijk}) = assessment_{PS}^R − assessment_{AS}^R.
Each state ω_l of the enterprise competence system is expressed by means of two assessment vectors ω_l = ⟨A(P)^{t_j}_{e_i}, A(R)^{t_j}_{e_i}⟩, where A(P)^{t_j}_{e_i} = [c(R^P_{ijk})] and A(R)^{t_j}_{e_i} = [c(R^R_{ijk})], after the aggregation process which will be presented in Section 3.
3 Act of Enterprise Position Explanation
The essence of the ontology designed by the author is to create a diagnostic image of the analysed enterprise on the basis of an expert assessment using argumentation extracted by an agent from a database. Figure 1 shows an illustration of the enterprise position. It is the image ω_l created by the expert e_i at the time instant t_j on the basis of argumentation delivered by the agent a_k. Each image ω_l is represented graphically by a point in the two-dimensional POTENTIAL - RISK space with two coordinates, i.e., ω_l = ⟨A(P)^{t_j}_{e_i}, A(R)^{t_j}_{e_i}⟩.
Fig. 1. Illustration of the enterprise position according to assessments of competence potential and risk (competence gap)
Fig. 2. Illustration of the enterprise position in the POTENTIAL STATE ASSESSMENT DISCREPANCY - RISK STATE ASSESSMENT DISCREPANCY space
The coordinates of the point ω_l are the aggregated assessments of the competence potential state of the enterprise (POTENTIAL) and of the competence gap state of the enterprise (RISK), according to the potential taxonomy and the risk taxonomy, respectively. The assessment aggregation can be made using the EUCLID method elaborated by M. Tavana [13]. In [14], the author of the present paper showed two assessment aggregation levels in the process of case indexation for the case base of the designed system for predicting the enterprise economic situation. The assessment aggregation according to the EUCLID method is performed in the learning process. The task of this process is to define position classes of enterprises in the POTENTIAL - RISK space. In this process, it is crucial to compare the competence assessment c(R^P_{ijk}) in each potential range with the maximal assessment c*(R^P_{ijk}) = max_l {c(R^P_{ijk})} obtained in the benchmarking process over the whole group of the assessed enterprises. A comparison of the competence gap assessment c(R^R_{ijk}) in each risk range with the minimal assessment c*(R^R_{ijk}) = min_l {c(R^R_{ijk})} is also made. In this way, a transformation of the POTENTIAL - RISK space to the POTENTIAL STATE ASSESSMENT DISCREPANCY - RISK STATE ASSESSMENT DISCREPANCY space is performed. This transformation converts expert assessments into relative assessments related to the assessments of the best enterprise according to the competence analysis, and relative assessments related to the assessments of the worst enterprise according to the competence gap analysis, i.e., the transformation is made from the T^P_i - T^R_i space into the ΔT^P_i - ΔT^R_i space. Such a transformation leads to calculating average values of the discrepancy of potential state assessments and average values of the discrepancy of risk
The averages $\overline{\Delta c}(R^P_{ijk})$ and $\overline{\Delta c}(R^R_{ijk})$ determine four classes of the enterprise position. The act of enterprise position explanation relies on assigning the considered enterprise image $\omega_l$ to one of the four classes. As in Figure 2, the calculated averages of the discrepancy of potential and the discrepancy of risk fix four classes:
• A1 - the risk class (low potential, high risk),
• A2 - the warning class because of high risk,
• A3 - the good economic situation class (high potential, low risk),
• A4 - the warning class because of low potential.
The act of enterprise position explanation is performed according to the following procedure.
Step 1: Assigning weights of assessment significance on each taxonomy level:

$$W(P) = \left( W(T^P_i), W(K^P_{ij}), W(R^P_{ijk}) \right), \qquad W(R) = \left( W(T^R_i), W(K^R_{ij}), W(R^R_{ijk}) \right)$$
Step 2: Making assessments of the potential state and risk state using competence analysis.
Step 3: Calculating the enterprise position $\pi_i$ in the $\Delta T^P_i$ - $\Delta T^R_i$ space:

$$\Delta c(R^P_{ijk}) = \sum_{j=1}^{6} W(K^P_{ij}) \sum_{k=1}^{7} W(R^P_{ijk}) \left( c^*(R^P_{ijk}) - c(R^P_{ijk}) \right)^2$$

$$\Delta c(R^R_{ijk}) = \sum_{j=1}^{6} W(K^R_{ij}) \sum_{k=1}^{7} W(R^R_{ijk}) \left( c^*(R^R_{ijk}) - c(R^R_{ijk}) \right)^2$$
Step 4: Calculating the averages $\overline{\Delta c}(R^P_{ijk})$ of the discrepancy of potential state assessments and $\overline{\Delta c}(R^R_{ijk})$ of the discrepancy of risk state assessments on the basis of the assessments calculated in Step 3 for the whole set of enterprises analyzed in the learning process.
Step 5: Calculating the position $\pi_i$ of the enterprise condition assessment $\omega_l$ in the space moved to the intersection point of the averages of assessment discrepancies. This operation leads to determining the position $\pi_i$ using only one parameter, i.e., the slope angle $\Theta_i$ of the radius vector:
$$\Theta_i = \arctan \frac{\beta_i(\omega_l)}{\alpha_i(\omega_l)}$$

$$\alpha_i(\omega_l) = \overline{\Delta c}(R^P_{ijk}) - \Delta c(R^P_{ijk}), \qquad \beta_i(\omega_l) = \overline{\Delta c}(R^R_{ijk}) - \Delta c(R^R_{ijk})$$
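A minimal sketch of Steps 3-5 is given below. It assumes that the weighted discrepancies and their averages over the learning set are already available as plain numbers, and that the four classes correspond to the quadrants of the moved space; the exact quadrant-to-class mapping used here is an illustrative assumption, not taken from the original system.

```python
import math

def position_angle(delta_potential, delta_risk, mean_potential, mean_risk):
    """Step 5: slope angle of the radius vector in the space moved to the
    intersection point of the average discrepancies (degrees, 0-360)."""
    alpha = mean_potential - delta_potential   # alpha_i(omega_l)
    beta = mean_risk - delta_risk              # beta_i(omega_l)
    return math.degrees(math.atan2(beta, alpha)) % 360.0

def quadrant_class(theta):
    """Assumed quadrant-to-class mapping in the moved space."""
    if theta < 90.0:
        return "A3"   # high potential, low risk - good economic situation
    if theta < 180.0:
        return "A2"   # warning because of high risk
    if theta < 270.0:
        return "A1"   # low potential, high risk - risk class
    return "A4"       # warning because of low potential

theta = position_angle(delta_potential=0.42, delta_risk=0.55,
                       mean_potential=0.60, mean_risk=0.30)
print(theta, quadrant_class(theta))
```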
Figure 3 shows this construction.
Step 6: Preparing data for the second-level aggregation using the ELECTRE TRI method (cf. [15]).
Fig. 3. Calculating the enterprise position in the moved space
Fig. 4. Illustration of parameters for sorting according to the ELECTRE TRI method
In each space determined by the potential type $T^P_i$ and the risk type $T^R_i$ distinguished on the second level of the taxonomy, the enterprise position determined by the angle $\Theta_i$ is calculated. An enterprise image $\omega_l$ thus forms a set of angles: $\omega_l = \{\Theta_i\}$. Because an enterprise can be classified into different classes in each $T^P_i$ - $T^R_i$ space, according to the different values of $\Theta_i$, the ELECTRE TRI method is proposed for the aggregation task of determining membership in a single class. In this method, it is necessary to define the values of the profiles separating individual classes and the thresholds of indiscernibility, preference and veto. It is also necessary to determine the weights of individual criteria, i.e., the significance of the assessments made by an expert in the individual spaces according to the potential type and risk type. Profiles: $r_1 = 90°$, $q_1 = 5°$, $p = 10°$, $v = 20°$; $r_2 = 180°$, $q_2 = 5°$, $p = 10°$, $v = 20°$; $r_3 = 270°$, $q_3 = 5°$, $p = 10°$, $v = 20°$. In the sorted space (Figure 4), classes are placed from the worst one (Class A1 - activity risk) to the best one (Class A3 - good economic situation). The ELECTRE TRI method realizes a function $f$ assigning the class $A_j$, where $A_j \in \{A_1, A_2, A_3, A_4\}$, to the image $\omega_l$ determined by the set of position angles $\{\Theta_i\}$, i.e., $f(\omega_l) = A_j$.
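The sketch below is a greatly simplified stand-in for the sorting step, not an implementation of ELECTRE TRI itself: it ignores the indiscernibility, preference and veto thresholds and simply compares each angle against the profiles r1, r2, r3, then aggregates the five per-space classes with a weighted pessimistic rule. All names and the class ordering are illustrative assumptions.

```python
PROFILES = [90.0, 180.0, 270.0]          # r1, r2, r3 from the paper
# Assumed ordering of classes between profiles, from the "worst" to the "best"
# side of the sorted space shown in Figure 4.
CLASS_BETWEEN_PROFILES = ["A1", "A2", "A3", "A4"]

def class_of_angle(theta):
    """Assign a class to one position angle by locating it between profiles."""
    for profile, label in zip(PROFILES, CLASS_BETWEEN_PROFILES):
        if theta <= profile:
            return label
    return CLASS_BETWEEN_PROFILES[-1]

def pessimistic_aggregate(angles, weights):
    """Weighted pessimistic aggregation over the five POTENTIAL - RISK spaces:
    pick the worst class supported by at least half of the total weight."""
    order = {"A1": 0, "A2": 1, "A3": 2, "A4": 3}   # assumed worst-to-best order
    ranked = sorted(zip(angles, weights),
                    key=lambda aw: order[class_of_angle(aw[0])])
    total, accumulated = sum(weights), 0.0
    for theta, w in ranked:
        accumulated += w
        if accumulated >= 0.5 * total:
            return class_of_angle(theta)
    return class_of_angle(ranked[-1][0])

omega = [35.0, 120.0, 95.0, 200.0, 80.0]   # hypothetical angles, one per space
print(pessimistic_aggregate(omega, weights=[1.0] * 5))
```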
4 Research Results

Within the framework of the EQUAL Project No. F0086, conducted by the College of Management and Public Administration in Zamość, a case base of analyzed enterprises from the SME sector of south-east Poland has been elaborated. In the analysis, a structure of the taxonomy of potential and risk is assumed,
respectively, in which five types of potential and five types of risk are defined on the first level of each taxonomy. Therefore, five POTENTIAL - RISK spaces arise:
1. Capital potential - Risk of satisfying the demand for capital,
2. Innovation and investment potential - Risk of investments and innovation conducted,
3. Key stakeholder potential - Risk of the enterprise activity by stakeholders,
4. Neighborhood potential - Neighborhood risk,
5. Environment potential - Environment risk.
The enterprise assessment model and the explanation procedure form the basis of the elaborated SOK-P1 system, in which expert assessments according to the taxonomy defined earlier are delivered using special editors. The system is available to enterprises included in the EQUAL program via the "e-barometr" web page [16]. In the case base, an enterprise is represented by a set of five positions $\omega_l = \{\theta_1, \theta_2, \theta_3, \theta_4, \theta_5\}$. The aggregation of these five positions can be made using the ELECTRE TRI method.
5 Conclusions

This paper is oriented to showing the act of explanation of the enterprise position. Each case, being an enterprise image in the expert assessment, is defined by a set of indexes which are the angles of the positions of the enterprise calculated in the POTENTIAL - RISK spaces. An original enterprise assessment model has been presented, in which an expert makes the assessment on the basis of agent argumentation. Assessment aggregation in the indexation subprocess is based on the EUCLID and ELECTRE TRI methods. The EUCLID method allows us to separate four classes, into which the enterprise images included in the database are classified: class A1 - risk (low potential, high risk), class A2 - warning (high risk), class A3 - good situation (high potential, low risk), class A4 - warning (low potential). Making this classification constitutes both the effect of indexation and the first part of the act of enterprise position explanation. The second part of the explanation act results from the aggregation of the positions calculated earlier using the ELECTRE TRI sorting method. According to this method, we obtain a classification of a given enterprise image into one of the four classes on the basis of the pessimistic or optimistic procedure. Both parts of the explanation act form the content of the protocol of enterprise position explanation. In the learning process of the EUCLID method, 220 enterprises of south-east Poland from the SME sector have been entered into the database. The research was made by two expert teams within the framework of the EQUAL Project. The elaborated system is designed for predicting the economic situation of enterprises in the SME sector.
References

1. Noga, A.: Teorie przedsiębiorstw. PWE, Warsaw (2009) (in Polish)
2. Hamel, G., Prahalad, C.K.: Competing for the Future. Harvard Business School Press (1994)
3. Harzallah, M., Berio, G., Vernadat, F.: Analysis and modeling of individual competencies: toward better management of human resources. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 36(1), 187–207 (2006)
4. Pepiot, G., Cheikhrouhou, N., Furbringer, J.M., Glardon, R.: UECML: Unified Enterprise Competence Modeling Language. Computers in Industry 58, 130–142 (2007)
5. Jussupova-Mariethoz, Y., Probst, A.R.: Business Concepts Ontology for an Enterprise Performance and Competences Monitoring. Computers in Industry 58, 118–129 (2007)
6. Bocheński, J.M.: Przyczynek do filozofii przedsiębiorstwa przemysłowego. In: Logika i Filozofia. Wybór pism, pp. 162–186. PWN, Warsaw (1993) (in Polish)
7. Ingarden, R.: Spór o istnienie świata. Tom II, Ontologia formalna, cz. 1. Forma i Istota. PWN, Warsaw (1987) (in Polish)
8. Kaplan, R.S., Norton, D.P.: The balanced scorecard: translating strategy into action. Harvard Business School Press, Boston (1996)
9. Saaty, T.L.: The analytic hierarchy process. McGraw-Hill, New York (1980)
10. Carlucci, D., Schiuma, G.: Knowledge assets value creation map. Assessing knowledge assets value drivers using AHP. Expert Systems with Applications 32, 814–821 (2007)
11. Huang, H.C.: Designing a knowledge-based system for strategic planning: A balanced scorecard perspective. Expert Systems with Applications 36, 209–218 (2009)
12. Forman, E.H., Selly, M.A.: Decision by objectives. World Scientific, Singapore (2001)
13. Tavana, M.: Euclid: strategic alternative assessment matrix. Journal of Multi-Criteria Decision Analysis 11, 75–96 (2002)
14. Andreasik, J.: A Case-Based Reasoning System for Predicting the Economic Situation of Enterprises - Tacit Knowledge Capture Process (Externalization). In: Kurzynski, M., et al. (eds.) Computer Recognition Systems 2. ASC, vol. 45, pp. 718–730. Springer, Heidelberg (2007)
15. Mousseau, V., Słowiński, R., Zielniewicz, P.: A user-oriented implementation of the ELECTRE-TRI method integrating preference elicitation support. Computers & Operations Research 27, 757–777 (2000)
16. e-barometr manual, http://www.wszia.edu.pl/eng/files/e-barometr-manual.pdf
17. Andreasik, J.: Enterprise Ontology - Diagnostic Approach. In: Proceedings of the Conference on Human System Interaction (HSI 2008), Krakow, Poland, pp. 497–503 (2008)
18. Andreasik, J.: Intelligent System for Predicting Economic Situation of SME. In: Józefczyk, J., Thomas, W., Turowska, M. (eds.) Proceedings of the 14th International Congress of Cybernetics and Systems of WOSC, Wroclaw, Poland (2008)
The Polish Coins Denomination Counting by Using Oriented Circular Hough Transform Piotr Porwik, Krzysztof Wrobel, and Rafal Doroz Institute of Informatics, University of Silesia, ul. Bedzinska 39, 41-200 Sosnowiec {piotr.porwik, krzysztof.wrobel, rafal.doroz}@us.edu.pl
Summary. This paper concerns a coin recognition method in which a modification of the Circular Hough Transform (CHT) is used. The proposed method allows the denomination of coins to be recognized in still, clear, blurred or noised images. The paper shows that the Hough transform is an effective tool for coin detection even in the presence of noise such as "salt and pepper" or Gaussian noise. It has been shown that the proposed approach is much less time consuming than the CHT. In the proposed application the computer memory requirement is also favourable in contrast to the CHT. In the test procedures, Polish coins were used, recognized and counted. Experiments showed that the proposed modification achieves consistently high performance compared to commonly used Hough techniques. Finally, the proposed approach was compared to the standard CHT dedicated to circular objects. Significant advantages of the proposed method arise from the simplification and reduction of the Hough space. It is necessary to emphasize that the introduced modifications do not influence the object recognition quality. The presented investigations were carried out for a Polish customer.
1 Introduction

The Hough Transform directly detects object edges using global image features, so the Hough transform can be used to isolate features of a particular shape within an image. The desired features are specified in parametric Hough forms. The Hough transform is most commonly used for the detection of regular curves such as lines, circles, ellipses, etc. Popular methods of detecting circular or elliptic shapes in images are the Circular Hough Transform and the Elliptical Hough Transform [4,5,7,10,14]. In practice, where natural scenes are registered and retrieved (medical or biomedical microscopy images, coin images, quality control of industrially manufactured elements, etc.), the problem of object detection and counting appears. These images should be recognized on-line, which is an important technological requirement. The detection of circular shapes is a common task in computer vision and image recognition systems. Real source images of circular objects often have to be classified, counted and sorted.
Fortunately, these objects are frequently well structured, and the shape of the objects can be detected by means of well-known methods. One of these methods is the Hough Transform. The Hough techniques developed for medical or industrial images can also be applied to other specific domains. In images registered by photo cameras, recognition problems often appear, caused by non-even object illumination, low contrast, noise or restricted, partial visibility of objects. These disadvantages are overcome by the proposed modification. The idea of this modification is presented in the next part of the paper.
2 Image Preparation

In many cases images captured by digital cameras or scanners are represented in bitmap form. There are many bitmapped formats, including JPEG, GIF, BMP and TIFF. In this paper the BMP format was used. An RGB image stores the values of red (r), green (g) and blue (b) which together form a colour image. Each colour is made up of varying amounts of red, green and blue. A digital image $I(x, y)$, $x \in X$ and $y \in Y$, can be defined as a two-dimensional signal that contains intensity colour information arranged along the x and y spatial axes. In the most widely used method of converting a colour image to greyscale, each image point (r, g, b) is replaced by its intensity value at the point (x, y) according to the formula [12,13,16]:

$$L(x, y) = 0.299 \cdot r + 0.587 \cdot g + 0.114 \cdot b \qquad (1)$$

After the greyscale conversion, appropriate contours are detected in the analyzed image and, finally, the whole image is binarized. The object contours were built by convolution of the initial image I with 3 × 3 horizontal and vertical Sobel filters [3,6,13,16], and in the next stage a criterion for selecting pixels that have high information content was determined. The binarization threshold was calculated on the basis of a method using an upper binarization threshold. The upper threshold value was fixed by the formula:

$$I^{bin}(x, y) = \begin{cases} 0 & \text{for } I(x, y) \le 130 \\ 1 & \text{otherwise} \end{cases} \qquad (2)$$

Figure 1 presents the sequence of the pre-processed images.
Fig. 1. Simple image pre-processing: a) the real image, b) its greyscale form, c) the same image after 3 × 3 Sobel’s filter application, d) black-white binarized image
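A minimal sketch of the pre-processing chain described above (greyscale conversion, Sobel gradient and fixed-threshold binarization) is given below; it is an illustrative reimplementation, not the authors' code, while the Sobel kernels and the threshold of 130 are taken directly from the text.

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def to_greyscale(rgb):
    """Eq. (1): L = 0.299 r + 0.587 g + 0.114 b."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def contours(grey):
    """3x3 horizontal and vertical Sobel filtering (gradient magnitude)."""
    gx = convolve(grey, SOBEL_X, mode="nearest")
    gy = convolve(grey, SOBEL_Y, mode="nearest")
    return np.hypot(gx, gy)

def binarize(image, threshold=130):
    """Eq. (2): 0 for values <= threshold, 1 otherwise."""
    return (image > threshold).astype(np.uint8)

def preprocess(rgb):
    """Full chain: colour image -> binary contour image."""
    return binarize(contours(to_greyscale(np.asarray(rgb, dtype=float))))
```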
3 The Hough Transform: Short Background

The Standard Hough Transform (SHT) is a very powerful tool for the detection of parametric curves in images [1,2,8,9]. The key idea of the method can be illustrated by analysing sets of co-linear points in the image. It can be implemented as a voting process that maps image edge points into an appropriately defined parameter space. Peaks in this space correspond to the parameters of the detected curves. Hence, these parameters can be used for a mathematical description of the object. Unfortunately, the standard Hough algorithm has some disadvantages: a satisfactory level of curve detection requires a large memory reservation, because the parameters of the objects are stored in the memory space. This property requires a large accumulator space and much processing time. In this technique difficulties occur in the detection of some specific curves and in finding local maxima, together with low accuracy, large storage requirements and low speed. The SHT can be applied to any geometric shape that can be described by an equation of the following form [14]:

$$F(x, y, \mathbf{p}) = 0 \qquad (3)$$
where x and y are the row and column of the image space. The simplest form of the SHT is the line transform in Cartesian coordinates. A line can be represented in the normal form:

$$\vartheta = x\cos(\alpha) + y\sin(\alpha) \qquad (4)$$

and the vector $\mathbf{p} = [\vartheta, \alpha]$.
4 The Circular Hough Transform

The same procedure can be used for finding circles, but the dimensionality of the search space is increased [4,5,10,11]. Now, a three-dimensional parameter space $(x_s, y_s, r)$ is needed, where $x_s$ and $y_s$ are the coordinates of the centre of a circle and $r$ is the radius of the analysed circle. Hence, the following simple equation describing a circle is used:

$$(x - x_s)^2 + (y - y_s)^2 = r^2 \qquad (5)$$
and the vector $\mathbf{p} = [x_s, y_s, r]$. Hough-type algorithms used for detecting circles are computationally more expensive than line detection algorithms, due to the larger number of parameters involved in describing the shapes. For a digital image of X × Y resolution, the number of accumulator cells can be calculated from the equation:

$$\Psi = X \cdot \Phi_X \cdot Y \cdot \Phi_Y \cdot (r_{max} - r_{min}) \cdot \Phi_R \qquad (6)$$

where $r_{min}$, $r_{max}$ is the acceptable range of radii of the recognized circles, and $\Phi_X$, $\Phi_Y$, $\Phi_R$ are the accuracies of the parameters x, y and r, respectively.
From (6) it follows that the memory complexity of the CHT is very large, and in cases when the image includes a large number of circular objects, not all objects will be detected due to insufficient memory [10,11]. If the threshold for circular object detection is selected inappropriately, some real circles will not be detected and some false circles may also be found. Additionally, as the image size increases, the quantity of data becomes too large and the data processing becomes slow.
5 Modification of the CHT

The above presented equation (5) can be simply rewritten in the form:

$$y_s = y - \sqrt{r^2 - (x - x_s)^2} \qquad (7)$$

Any circle is mapped into a two-dimensional table. Hence, the number of tables represents the number of recognized circles. Each table is identified by the radius of the circle. Let $\eta$ be the number of recognized circles in the image; then the memory requirements can be estimated by means of the equation:

$$\Psi' = \eta \cdot X \cdot \Phi_X \cdot Y \cdot \Phi_Y \qquad (8)$$
In practice, only pre-defined groups of objects are searched for in the source image, so in most cases the condition $\Psi' < \Psi$ is fulfilled. Similarly to the Standard Hough Transform, the accumulator table population is realized by analysing the pixels of a source image. For every pixel which belongs to a circumference, and for any radius which belongs to the earlier defined set of radii, the value of the accumulator cell indexed by the pair $x_s, y_s$ is incremented. The above-mentioned modification of the CHT was designed to improve robustness and to simplify computations. In every accumulator table local maxima are searched for by:
• searching for the local maximum in every accumulator table (Fig. 3a),
• searching for the local maximum in vertical sections of the accumulators (Fig. 3b).
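A minimal sketch of the per-radius accumulator population described above is shown below; one 2-D table per radius is kept instead of a single 3-D accumulator. The function names and the way the radius set is supplied are illustrative assumptions.

```python
import numpy as np

def populate_accumulators(edge_image, radii):
    """For every edge pixel and every radius, vote for all candidate centres
    (x_s, y_s) lying on a circle of that radius around the pixel.
    Returns one 2-D accumulator table per radius."""
    height, width = edge_image.shape
    accumulators = {r: np.zeros((height, width), dtype=np.uint32) for r in radii}
    ys, xs = np.nonzero(edge_image)
    angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for r in radii:
        dx = np.round(r * np.cos(angles)).astype(int)
        dy = np.round(r * np.sin(angles)).astype(int)
        acc = accumulators[r]
        for x, y in zip(xs, ys):
            cx, cy = x - dx, y - dy          # candidate centres
            valid = (cx >= 0) & (cx < width) & (cy >= 0) & (cy < height)
            np.add.at(acc, (cy[valid], cx[valid]), 1)
    return accumulators

# Hypothetical usage on the binarized contour image from Section 2:
# accumulators = populate_accumulators(binary_edges, radii=range(20, 60))
```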
In the proposed modification, unlike in the SHT method, the accumulator tables are additionally pruned, which is explained in the next paragraph. In the first stage, the largest value $V_{max}$ in the accumulator cells is found. In the next stage, on the basis of $V_{max}$, all values of the accumulator cells are re-populated according to the formula:

$$A_c^{new} = \begin{cases} A_c & \text{for } A_c \ge (PD \cdot V_{max})/100\% \\ 0 & \text{for } A_c < (PD \cdot V_{max})/100\% \end{cases} \qquad (9)$$

where PD is the value of the detection threshold and $A_c$ stands for a single accumulator cell. This value is established before the accumulator table exploration. In the first step the largest value of the accumulator is found (Fig. 2a).
Fig. 2. a) Fragment of the accumulator table, b) the same table after applying Eq. (9) with PD = 60%, c) the unique local maximum
Fig. 3. Principles of the accumulator searching: a) searching for the local maxima in the accumulator tables, b) searching for the local maxima in vertical sections of the accumulators
In the second step, the accumulator table is explored once more and all cells are re-populated according to Eq. (9) (Fig. 2b); in this example PD = 60%. In the third step, the neighbourhood of the largest values is reset (Fig. 2c). The neighbourhood area was matched experimentally and set to 5×5. This procedure is repeated for every accumulator table. Thanks to these procedures, wrong circle descriptions in the accumulator tables are eliminated. In the second stage, the accumulator tables are compared vertically (Fig. 3b). If two or more tables $(p_1, p_2, ..., p_k)$ include circles described by the same centre coordinates, or the difference between their centres is less than the threshold l, then such circles are rejected from the accumulator.
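The pruning and duplicate-rejection steps described above could be sketched as follows; the helper names, the way results are returned, and keeping the strongest circle of any conflicting group are assumptions made for illustration only.

```python
import numpy as np

def prune_accumulator(acc, pd=60.0, reset_size=5):
    """Eq. (9): zero all cells below PD% of the global maximum, then record
    each surviving peak and reset its 5x5 neighbourhood (Fig. 2)."""
    acc = acc.astype(float).copy()
    v_max = acc.max()
    acc[acc < pd / 100.0 * v_max] = 0.0
    peaks, half = [], reset_size // 2
    while acc.max() > 0:
        y, x = np.unravel_index(np.argmax(acc), acc.shape)
        peaks.append((x, y, acc[y, x]))
        acc[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1] = 0.0
    return peaks

def reject_duplicates(peaks_per_radius, l=40):
    """Vertical comparison of the per-radius tables (Fig. 3b): among circles
    whose centres are closer than threshold l, keep only the strongest one."""
    candidates = [(r, x, y, v) for r, peaks in peaks_per_radius.items()
                  for (x, y, v) in peaks]
    candidates.sort(key=lambda c: -c[3])          # strongest first
    accepted = []
    for r, x, y, v in candidates:
        if all((x - ax) ** 2 + (y - ay) ** 2 >= l ** 2 for _, ax, ay, _ in accepted):
            accepted.append((r, x, y, v))
    return accepted
```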
6 Investigation Results

The investigations were carried out on an Intel Pentium 4, 3.4 GHz machine with 2 GB RAM. The programme was prepared in the Java environment. The parameters of the algorithm described above were established experimentally: PD = 60%, l = 40, $\Phi_X = 1.0$, $\Phi_Y = 1.0$ and $\Phi_R = 1.0$.
Fig. 4. Exemplary images of the coins distribution
It should be emphasized that in the standard CHT method the parameter l is not used. Polish coins were used as the circular objects. Images with different coin locations were tested; exemplary images are presented in Figs. 4a-4c. Fig. 4a presents coins which do not have common points, in Fig. 4b some coins overlap, and in Fig. 4c the coins touch each other. All images were captured at a resolution of 600×720. The images 4a, 4b and 4c were corrupted in the Matlab environment using the Image Processing Toolbox: by Gaussian noise (mean = 0.0, variance = 0.01), by "salt and pepper" noise (noise density d = 0.05) and by a blur procedure (with radius = 3). In the next stage, all images were processed according to the principles discussed in Section 2 of the paper. First, the analysed images have to be calibrated. This is necessary because coins from different countries can be recognized and, additionally, a source image can have a different resolution. The calibration process consists of two steps: first, an arbitrary coin is selected and its denomination is declared; then, during the calibration stage, all coin radii are automatically re-scaled [17]. The obtained circle recognition results are presented in Table 1. It was impossible to achieve correct results with the standard CHT method for any combination of the above-mentioned parameters. This was overcome in the modified algorithm. In the second type of investigation, the computation time of the algorithms was measured. The obtained results are presented in Table 2. This table includes two partial time values: the first indicates the time of accumulator table population and the second the time of searching for circle parameters. From Table 2 it follows that the modified algorithm has the best time complexity compared to the standard CHT method. In the conducted investigations the memory complexity was also checked. Because the images in Figs. 4a-4c have the same resolution, their differentiation is not needed. In the standard CHT method the memory occupation was 11.3 MB, in contrast to the proposed modification, where the memory occupancy level was 3.8 MB.
Table 1. The CHT method compared to the modified algorithm

                                Number of incorrect (also additional)     Lost circles
                                circle detections
                                Image a   Image b   Image c               Image a   Image b   Image c
Tested images (Fig. 4)
  CHT                           16        27        23                    0         1         12
  Proposed modification         0         0         0                     0         0         0
Images corrupted by Gaussian noise (mean=0.0, variance=0.01)
  CHT                           12        27        25                    2         7         6
  Proposed modification         1         1         2                     1         2         2
Images corrupted by "salt and pepper" type noise (noise density d=0.05)
  CHT                           23        31        21                    13        1         12
  Proposed modification         1         0         1                     1         0         1
Blurred images (circular averaging 2D Matlab filter, radius=3)
  CHT                           146       123       139                   7         6         5
  Proposed modification         1         1         0                     1         3         1
Table 2. Comparison of the computation time for the standard CHT and the modified algorithm

                          Computation time [s]
                          Image a               Image b                Image c
CHT                       165.8+0.2=166.0       154.75+0.22=154.97     158.59+0.20=158.79
Proposed modification     2.17+0.28=2.45        1.969+0.281=2.25       2.05+0.28=2.33
7 Conclusion and Remarks

Increasing the accuracy of circle detection increases the size of the accumulator tables in both the standard CHT and the modified method. In the modified algorithm, the accumulator size is significantly smaller than the CHT accumulator table. In this study the Hough transform was used to detect many coins with different radii. The very simple Sobel edge detector was used to get the best result. It can also be concluded that the proposed modification of the CHT method is an effective tool for coin recognition even in the presence of different types of noise. The modification of the CHT method described in this paper can also detect blurred coins. The proposed method can be simply adjusted to recognize different types of coins; hence, the total denomination value of the coins can also be determined. The proposed approach can also be applied to detect other circular objects.
References

1. Ballard, D.: Generalizing the Hough Transform to Detect Arbitrary Shapes. Pattern Recognition 14(2), 111–122 (1981)
2. Bergen, J., Shvaytser (Schweitzer), H.: A probabilistic algorithm for computing Hough transforms. Journal of Algorithms 12(4), 639–656 (1991)
3. Castleman, K.R.: Digital Image Processing. Prentice-Hall, New Jersey (1996)
4. Davies, E.R.: Finding ellipses using the Generalized Hough transform. Pattern Recognition Letters 9(2), 87–96 (1989)
5. Duda, R., Hart, P.: Use of the Hough transformation to detect lines and curves in pictures. Comm. of the Association for Computing Machinery 15, 11–15 (1972)
6. Gonzalez, R.C., Woods, R.E.: Digital Image Processing. Addison-Wesley, Reading (1992)
7. Hrebien, M., Korbicz, J., Obuchowicz, A.: Hough Transform, (1+1) Search Strategy and Watershed Algorithm in Segmentation of Cytological Images. In: Advances in Soft Computing, vol. 45, pp. 550–557. Springer, Berlin (2007)
8. Illingworth, J., Kittler, J.: A survey of the Hough transform. Computer Vision, Graphics and Image Processing 44, 87–116 (1988)
9. Illingworth, J., Kittler, J.: The adaptive Hough transform. IEEE Trans. Pattern Anal. Mach. Intell. 10, 690–698 (1987)
10. Inverso, S.: Ellipse Detection Using Randomized Hough Transform. Final Project: Introduction to Computer Vision 4005-757 (2006)
11. Kavallieratou, E.: A binarization algorithm specialized on document images and photos. In: Proceedings of the Eighth Int. Conf. on Document Analysis and Recognition (ICDAR 2005), pp. 463–467 (2005)
12. Niblack, W.: An Introduction to Digital Image Processing. Strandberg Publishing Company, Birkeroed, Denmark (1985)
13. Parker, J.R.: Algorithms for Image Processing and Computer Vision. John Wiley & Sons, Chichester (1987)
14. Roushdy, M.: Detecting Coins with Different Radii based on Hough Transform in Noisy and Deformed Image. GVIP Journal 7(1), 25–29 (2007)
15. Ramirez, M., Tapia, E., Block, M., Rojas, R.: Quantile Linear Algorithm for Robust Binarization of Digitalized Letters. In: Proceedings of the Ninth Int. Conf. on Document Analysis and Recognition (ICDAR 2007), vol. 2, pp. 1158–1162 (2007)
16. Russ, J.C.: The Image Processing Handbook, 2nd edn. CRC Press, Boca Raton (1995)
17. http://en.wikipedia.org/wiki/Polish_coins_and_banknotes
Recognizing Anomalies/Intrusions in Heterogeneous Networks

Michał Choraś1,2, Łukasz Saganowski2, Rafał Renk1,3, Rafał Kozik2, and Witold Hołubowicz1,3

1 ITTI Ltd., Poznań, [email protected]
2 Institute of Telecommunications, UT&LS Bydgoszcz, [email protected]
3 Adam Mickiewicz University, Poznań, [email protected]
Summary. In this paper an innovative recognition algorithm applied to an Intrusion and/or Anomaly Detection System is presented. We propose to use the Matching Pursuit Mean Projection (MP-MP) of the reconstructed network signal to recognize anomalies/intrusions in network traffic. The practical usability of the proposed approach in the intrusion detection and tolerance system (IDTS) of the INTERSECTION project is presented.
1 Introduction

INTERSECTION (INfrastructure for heTErogeneous, Resilient, SEcure, Complex, Tightly Inter-Operating Networks) is a European co-funded project in the area of secure, dependable and trusted infrastructures. The main objective of INTERSECTION is to design and implement an innovative network security framework which comprises different tools and techniques for intrusion detection and tolerance. The INTERSECTION framework, as well as the developed system called IDTS (Intrusion Detection and Tolerance System), consists of two layers: an in-network layer and an off-network layer. The role of the off-network layer is to support network operators in controlling complex heterogeneous and interconnected networks and real-time security processes such as network monitoring, intrusion detection, reaction and remediation. In this paper we focus on presenting an innovative approach to the off-network Anomaly Detection System. Novel techniques applied to ADS/IDS based on signal processing are proposed. A signal-based anomaly detection type IDS will be used as a secondary detection/decision module to support the real-time IDS. Such an approach is proposed for the off-network layer of the INTERSECTION framework.
Fig. 1. INTERSECTION in-network and off-network approach to intrusion/ anomaly detection in complex heterogeneous networks
The operator (e.g. at the telecom's premises) will have a chance to observe the results of the signal-based IDS in near real time in order to trigger or stop the reaction of the real-time IDS. Such an approach will both increase the security (fewer detected anomalies/attacks) and increase the tolerance (fewer false positives). The overview of the role of the Matching Pursuit based IDS/ADS in the INTERSECTION architecture is given in Figure 1.
2 Intrusion Detection System Based on Matching Pursuit

2.1 Rationale and Motivation
Signal processing techniques have found application in Network Intrusion Detection Systems because of their ability to detect novel intrusions and attacks, which cannot be achieved by signature-based approaches [1]. Approaches based on signal processing and on statistical analysis can be powerful in decomposing the signals related to network traffic, giving the ability to distinguish between trends, noise, and actual anomalous events. Wavelet-based approaches, maximum entropy estimation, principal component analysis techniques, and spectral analysis are examples in this regard which have been investigated in recent years by the research community [2]-[6]. However, the Discrete Wavelet Transform provides a large number of coefficients which do not necessarily reflect the required features of the network signals. Therefore, in this paper we propose another signal processing and decomposition method for anomaly/intrusion detection in networked systems. We developed an original Anomaly Detection Type IDS algorithm based on Matching Pursuit.
2.2 Introduction to Matching Pursuit
Matching Pursuit signal decomposition was proposed by Mallat and Zhang [7]. Matching Pursuit is a greedy algorithm that decomposes any signal into a linear expansion of waveforms taken from an overcomplete dictionary D. The dictionary D is an overcomplete set of base functions, also called atoms:

$$D = \{\alpha_\gamma : \gamma \in \Gamma\} \qquad (1)$$

where every atom $\alpha_\gamma$ from the dictionary has norm equal to 1:

$$\|\alpha_\gamma\| = 1 \qquad (2)$$
$\Gamma$ represents the set of indexes for atom transformation parameters such as translation, rotation and scaling. The signal s has various representations over the dictionary D. The signal can be approximated by a set of atoms $\alpha_k$ from the dictionary and projection coefficients $c_k$:

$$s = \sum_{k=0}^{|D|-1} c_k \alpha_k \qquad (3)$$
To achieve the best sparse decomposition of the signal s we have to find a vector $c_k$ with minimal norm but sufficient for proper signal reconstruction. Matching Pursuit is a greedy algorithm that iteratively approximates the signal to achieve a good sparse signal decomposition. Matching Pursuit finds a set of atoms $\alpha_{\gamma_k}$ such that the projection of coefficients is maximal. In the first step, the residual R is equal to the entire signal, $R^0 = s$:

$$R^0 = \langle \alpha_{\gamma_0}, R^0 \rangle \alpha_{\gamma_0} + R^1 \qquad (4)$$

If we want to minimize the energy of the residual $R^1$, we have to maximize the projection $|\langle \alpha_{\gamma_0}, R^0 \rangle|$. In the next step we apply the same procedure to $R^1$:

$$R^1 = \langle \alpha_{\gamma_1}, R^1 \rangle \alpha_{\gamma_1} + R^2 \qquad (5)$$

The residual of the signal at step n can be written as follows:

$$R^n s = R^{n-1} s - \langle R^{n-1} s, \alpha_{\gamma_k} \rangle \alpha_{\gamma_k} \qquad (6)$$
The signal s is decomposed by a set of atoms:

$$s = \sum_{n=0}^{N-1} \langle R^n s, \alpha_{\gamma_k} \rangle \alpha_{\gamma_k} + R^N s \qquad (7)$$
The algorithm stops when the residual $R^n s$ of the signal is lower than an acceptable limit.
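A minimal sketch of the greedy decomposition described by Eqs. (4)-(7) is given below, for a discrete 1-D signal and a dictionary given as a matrix of unit-norm atoms; the stopping rule based on residual energy and the returned Mean Projection value follow the spirit of the paper, but the exact interface and the definition of MP-MP used here are assumptions.

```python
import numpy as np

def matching_pursuit(signal, dictionary, max_iter=50, tol=1e-3):
    """Greedy Matching Pursuit: at each step pick the atom with the largest
    projection onto the current residual and subtract its contribution."""
    residual = np.asarray(signal, dtype=float).copy()
    coefficients, chosen_atoms = [], []
    for _ in range(max_iter):
        projections = dictionary @ residual          # <R^n s, alpha_gamma>
        best = int(np.argmax(np.abs(projections)))
        c = projections[best]
        residual = residual - c * dictionary[best]   # Eq. (6)
        coefficients.append(c)
        chosen_atoms.append(best)
        if np.linalg.norm(residual) < tol * np.linalg.norm(signal):
            break
    return coefficients, chosen_atoms, residual

def mean_projection(coefficients):
    """Matching Pursuit Mean Projection (MP-MP) taken here as the mean of the
    absolute projection coefficients - one possible reading of the feature."""
    return float(np.mean(np.abs(coefficients)))
```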
2.3 Our Approach to Intrusion Detection Algorithm
In the basic Matching Pursuit algorithm, atoms are selected at every step from the entire dictionary, which has a flat structure. In this case the algorithm causes a significant processor burden. In our coder a dictionary with an internal structure was used. The dictionary is built from:
— atoms,
— centered atoms.
A centered atom groups those atoms from D that are as correlated with each other as possible. To calculate the measure of correlation between atoms, the function o(a, b) can be used [2]:

$$o(a, b) = \sqrt{1 - \frac{|\langle a, b \rangle|^2}{\|a\|^2 \|b\|^2}} \qquad (8)$$

The quality of a centered atom can be estimated according to (9):

$$O_{k,l} = \frac{1}{|LP_{k,l}|} \sum_{i \in LP_{k,l}} o\left(A_{c(i)}, W_{c(k,l)}\right) \qquad (9)$$
$LP_{k,l}$ is the list of atoms grouped by a centered atom. $O_{k,l}$ is the mean of the local distances from the centered atom $W_{c(k,l)}$ to the atoms $A_{c(i)}$ which are strongly correlated with it. The centroid $W_{c(k,l)}$ represents the atoms $A_{c(i)}$ which belong to the set $i \in LP_{k,l}$. The list of atoms $LP_{k,l}$ should be selected according to Equation (10):

$$\max_{i \in LP_{k,l}} o\left(A_{c(i)}, W_{c(k,l)}\right) \le \min_{t \in D \setminus LP_{k,l}} o\left(A_{c(t)}, W_{c(k,l)}\right) \qquad (10)$$
In the proposed IDS solution a 1-D real Gabor base function (Equation 11) was used to build the dictionary [8]-[10]:

$$\alpha_{u,s,\xi,\phi}(t) = c_{u,s,\xi,\phi}\, \alpha\!\left(\frac{t-u}{s}\right) \cos\left(2\pi\xi(t-u) + \phi\right) \qquad (11)$$

where:

$$\alpha(t) = \frac{1}{\sqrt{s}}\, e^{-\pi t^2} \qquad (12)$$

and $c_{u,s,\xi,\phi}$ is a normalizing constant used to achieve unit atom energy. In order to create an overcomplete set of 1-D base functions, the dictionary D was built by varying the subsequent atom parameters: frequency $\xi$ and phase $\phi$, position u, and scale s [11]. The base function dictionary D was created using 10 different scales (dyadic scales) and 50 different frequencies.
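A sketch of how such a Gabor atom dictionary could be generated is shown below; the number of positions and the parameter grids beyond the stated 10 dyadic scales and 50 frequencies are assumptions, and atoms are simply renormalized to unit energy instead of using a closed-form constant.

```python
import numpy as np

def gabor_atom(length, u, s, xi, phi):
    """Discrete 1-D real Gabor atom (Eqs. 11-12), renormalized to unit energy."""
    t = np.arange(length, dtype=float)
    envelope = np.exp(-np.pi * ((t - u) / s) ** 2) / np.sqrt(s)
    atom = envelope * np.cos(2.0 * np.pi * xi * (t - u) + phi)
    norm = np.linalg.norm(atom)
    return atom / norm if norm > 0 else atom

def build_dictionary(length, n_scales=10, n_freqs=50,
                     n_positions=16, phases=(0.0, np.pi / 2)):
    """Overcomplete dictionary: dyadic scales x frequencies x positions x phases."""
    scales = [2.0 ** j for j in range(1, n_scales + 1)]
    freqs = np.linspace(1.0 / length, 0.5, n_freqs)     # up to Nyquist
    positions = np.linspace(0, length - 1, n_positions)
    atoms = [gabor_atom(length, u, s, xi, phi)
             for s in scales for xi in freqs for u in positions for phi in phases]
    return np.vstack(atoms)                              # rows = unit-norm atoms

# Example: dictionary for 256-sample traffic-feature windows,
# usable directly with the matching_pursuit() sketch above.
D = build_dictionary(length=256)
```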
3 Experiments and Results

In the following section experimental results are shown. Both real network traces and simulated network traces (using NS2) were used in the verification phase.

3.1 Experimental Results Based on Real Traces
In our previous work we presented the usability and performance of our ADS in recognizing normal/attacked traces from the MAWI and CAIDA projects [13]. We also showed the efficiency of our method in detecting known worms [14]. Here we focus on an anomaly detection scenario. Therefore we used traces offered by INTERSECTION project partners, namely CINI - Unina (University of Napoli) and Fraunhofer, with normal and anomalous traces from real networks [15]. In the tested traces, all anomalies are due to attacks; however, this is not always the case (in fact most network anomalies are not dangerous). We calculated MP-MP for all traces in order to determine whether the traffic is normal or anomalous. First, we trained the system with 25% of the traces. The remaining traces were used in the testing phase. In the following tables (Tables 1 and 2) the suspicious traces (with anomalies/attacks) are marked in bold. The threshold (for the allowed MP-MP difference between the "normal" traces and the currently examined trace) is set to 30%.

Table 1. Matching Pursuit Mean Projection for Port 80 TCP test traces

Port 80 TCP     Matching Pursuit Mean Projection
Unina1          211.71
Unina2          89.33
Unina3          255.33
Unina4          170.65
Unina5          186.64
Unina6          285.65
Unina7          285.65
Unina8          212.91
Unina9          339.91
Unina10         393.06
Unina11         277.08
Unina12         476.88
Unina13         309.30
Unina14         242.93
Unina15         234.61
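The 30% threshold rule described above could be applied as in the short sketch below; the use of the mean MP-MP of the training traces as the reference value is an assumption, since the paper does not spell out how the reference is formed.

```python
def is_anomalous(mp_mp_value, training_values, threshold=0.30):
    """Flag a trace whose MP-MP differs from the reference of the 'normal'
    training traces by more than the given relative threshold (30%)."""
    reference = sum(training_values) / len(training_values)   # assumed reference
    return abs(mp_mp_value - reference) / reference > threshold

# Hypothetical usage with MP-MP values from matching_pursuit()/mean_projection():
# print(is_anomalous(476.88, training_values=[211.71, 255.33, 186.64, 242.93]))
```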
Table 2. Matching Pursuit Mean Projection for Port 80 TCP dump

Port 80 TCP     Matching Pursuit Mean Projection
Tcp trace1      576.06
Tcp trace2      676.30
Tcp trace3      450.72
Tcp trace4      611.40
Tcp trace5      478.50
Tcp trace6      592.15
Tcp trace7      409.31
Tcp trace8      321.93
Tcp trace9      567.63
Tcp trace10     507.98
Tcp trace11     360.73
Tcp trace12     508.00
Tcp trace13     565.65
Tcp trace14     576.23

3.2 NS2 Simulation Experiments
We ran network simulations using a LAN topology of 10 nodes, three of which were routers only. Node number 3 is a server which is attacked by node 4. We measured the packet flow on node 2. The simulations were run several times to obtain several vectors. We simulated a DoS attack, in which an attacker floods the server with requests, slowing down its performance. Background traffic was generated first, and then the attacks were generated.
Fig. 2. LAN Topology and scenario description
Table 3. DoS attack generated by the NS simulator

Overall traffic     Matching Pursuit Mean Projection
"normal" trace1     38.00
"normal" trace2     59.71
Attacked trace1     27.80
Attacked trace2     159.54
Fig. 3. NS Traffic Analysis
The topology of our simulated network is shown in Fig. 2. MP-MP based features showing anomalies/attacks are presented in Fig. 3. The values of MP-MP for normal and attacked traces are shown in Table 3.
4 Conclusion

In this article our developments in feature extraction for Intrusion Detection Systems are presented. We showed that Matching Pursuit may be considered a very promising methodology which can be used in a network security framework. From the experiments we concluded that the Matching Pursuit Mean Projection differs significantly between normal and anomalous/attacked traces. The major contribution of this paper is a novel algorithm for detecting anomalies based on signal decomposition. In the classification/decision module we proposed to use the developed matching pursuit features such as the mean projection. We tested and evaluated the presented features and the experimental results proved the effectiveness of our method. The proposed Matching Pursuit signal-based algorithm applied to anomaly detection IDS will be used as the detection/decision module in the
INTERSECTION Project security-resiliency framework for heterogeneous networks.
Acknowledgement The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 216585 (INTERSECTION Project).
References

1. Esposito, M., Mazzariello, C., Oliviero, F., Romano, S.P., Sansone, C.: Evaluating Pattern Recognition Techniques in Intrusion Detection Systems. In: PRIS 2005, pp. 144–153 (2005)
2. Cheng, C.-M., Kung, H.T., Tan, K.-S.: Use of spectral analysis in defense against DoS attacks. In: IEEE GLOBECOM 2002, pp. 2143–2148 (2002)
3. Barford, P., Kline, J., Plonka, D., Ron, A.: A signal analysis of network traffic anomalies. In: ACM SIGCOMM Internet Measurement Workshop (2002)
4. Huang, P., Feldmann, A., Willinger, W.: A non-intrusive, wavelet-based approach to detecting network performance problems. In: ACM SIGCOMM Internet Measurement Workshop (November 2001)
5. Li, L., Lee, G.: DDoS attack detection and wavelets. In: IEEE ICCCN 2003, October 2003, pp. 421–427 (2003)
6. Dainotti, A., Pescape, A., Ventre, G.: Wavelet-based Detection of DoS Attacks. In: 2006 IEEE GLOBECOM, San Francisco, CA, USA (November 2006)
7. Mallat, S., Zhang, Z.: Matching Pursuit with time-frequency dictionaries. IEEE Transactions on Signal Processing 41(12), 3397–3415 (1993)
8. Tropp, J.A.: Greed is Good: Algorithmic Results for Sparse Approximation. IEEE Transactions on Information Theory 50(10) (October 2004)
9. Gribonval, R.: Fast Matching Pursuit with a Multiscale Dictionary of Gaussian Chirps. IEEE Transactions on Signal Processing 49(5) (May 2001)
10. Jost, P., Vandergheynst, P., Frossard, P.: Tree-Based Pursuit: Algorithm and Properties. Swiss Federal Institute of Technology Lausanne (EPFL), Signal Processing Institute Technical Report TR-ITS-2005.013 (May 17, 2005)
11. Andrysiak, T., Choraś, M.: Image Retrieval Based on Hierarchical Gabor Filters. International Journal of Applied Mathematics and Computer Science (AMCS) 15(4), 471–480 (2005)
12. Dainotti, A., Pescape, A., Ventre, G.: Worm Traffic Analysis and Characterization. In: Proceedings of ICC, pp. 1435–1442. IEEE CS Press, Los Alamitos (2007)
13. Renk, R., Saganowski, Ł., Hołubowicz, W., Choraś, M.: Intrusion Detection System Based on Matching Pursuit. In: Proc. Intelligent Networks and Intelligent Systems, ICINIS 2008, pp. 213–216. IEEE CS Press, Los Alamitos (2008)
14. Saganowski, Ł., Choraś, M., Renk, R., Hołubowicz, W.: Signal-based Approach to Anomaly Detection in IDS Systems. International Journal of Intelligent Engineering and Systems 1(4), 18–24 (2008)
15. http://www.grid.unina.it/Traffic/Traces/ttraces.php
Vehicle Detection Algorithm for FPGA Based Implementation Wieslaw Pamula Silesian University of Technology, Faculty of Transport ul.Krasinskiego 8, Katowice, Poland [email protected]
Summary. The paper presents a discussion of the necessary properties of an algorithm processing a real-world video stream for detecting vehicles in defined detection fields, and proposes a robust solution which is suitable for FPGA based implementation. The solution is built on spatiotemporal filtering supported by a modified recursive approximation of the temporal median of the detection fields' occupancy factors. The resulting algorithm may process image pixels serially, which is an especially desirable property when devising logic based processing hardware.
1 Introduction

Processing real-world video streams for detecting vehicles presents a challenge for developing a robust algorithm for reliable detection. There are a number of problems which must be addressed. The first is illumination changes of various character, caused by changing weather conditions, time of day, or moving shadows of surrounding buildings or trees. Difficult to cope with are camera sensitivity changes caused by automatic gain circuits. Large objects which contrast highly with the background cause significant changes of gain in the acquisition path of cameras, in result distorting the overall illumination of the image stream. Another important issue is random camera movements caused by vibrations due to road traffic or wind gusts. Congestion and traffic controllers cause vehicle stops of variable duration which have to be accounted for. Artificial illumination at night utilizing high power fluorescent lights can give a strong "beat" - cyclic illumination variations - due to the video sampling interacting with light intensity changes driven by the unsynchronized power supply [1]. Figure 1 illustrates the different character of illumination changes: a) shows AGC-derived distortion, b) moving sun and moving shadows of buildings, c) shows a drastic sensitivity change for night vision. Peaks depict vehicles - high and wide peaks are trams, low peaks are cars. AGC distortion has a saw-like character while the moving sun and shadows produce "rolling hillocks" and "valleys". The mean value of the occupancy factors in these cases exceeds the peaks of the night vision factors.
Fig. 1. Occupancy factors of a detection field: a) changing camera sensitivity, b) sun and shadows, c) night
Although hardware-based illumination changes may be eliminated by using more sophisticated camera models, this is not acceptable to road traffic administrators. The second issue is the choice of the optimal algorithm for vehicle detection satisfying the criteria of high reliability, low maintenance and suitability for logic based implementation facilitating low cost manufacturing. Vehicle detection algorithms can be classified into two groups: those based on extracting objects from the background and those based on finding representations of objects in the image. Extracting predominantly means subtracting the scene background from the current video frame, as the required speed leaves little time for performing segmentation or other more elaborate processing steps. Finding representations of vehicles takes advantage of filtering and model matching methods. This involves a conversion of the pixel representation of an image to a devised feature space representation. Vehicle detection systems mostly use variants of object extraction algorithms [2], [3]. The paper is divided into 4 sections. Section 2 outlines the vehicle detection algorithm; in the next section a discussion of implementation issues is presented. The final section proposes further work on tuning the solution.
2 Detection Algorithm

Background modelling forms the base of subtraction algorithms. Stochastic models of image pixel intensity changes are used as background estimates. The values of pixels over a number of past frames are averaged, or the median is derived, and this becomes the background frame [4]. Accurate estimation requires a deep frame buffer and thus a large memory. A way to relieve this burden is to use recursive approximation methods. Eq. (1) shows a median approximation requiring the previous background frame $B_{t-1}$ and the current image contents $I_t$:

$$B_t(x) = B_{t-1}(x) + \mathrm{sgn}(I_t(x) - B_{t-1}(x)) \qquad (1)$$
The background $B_t$ converges to the temporal median of the image. The adaptation rate is a function of the frame rate and the pixel value resolution [5]. Simple stochastic models are suitable for stable illumination conditions, found for instance in road tunnels or on motorways. Diverse changes require a model which combines the characteristics of several models representing the character of different illumination changes. Following this approach, a mixture of Gaussian distributions is proposed to represent pixels of the background [6]. The weight parameters of the mixture represent the proportions of time that pixels of defined value ranges stay in the scene. An elaborate updating scheme is used to adapt the model to changing illumination. It is based on tuning learning coefficients and is susceptible to bad identification of the character of illumination changes. Modifications of this model, which simplify tuning and cope with the problem of heavy traffic occluding the real background, were devised [7], [8]. Vehicle detectors based on background modelling require careful maintenance procedures. Using features for detecting vehicles comes down to the problem of choosing an appropriate feature space that is easy to process. Keeping in mind the requirement of fast processing eliminates features that are derived using image filtering and segmentation techniques [9], [10]. This leaves elementary pixel neighbourhood characteristics for consideration. Edges are convenient features for the representation of objects. To avoid problems with background modelling, the detection algorithm is based on processing the feature space of vehicles. As indicated, objects are represented by edges. The performance of the basic edge operators Sobel, Prewitt and morphological gradient was examined both in one-dimensional and two-dimensional versions [11]. Especially processing performed in line with the incoming image pixels was analysed. Operators detecting horizontal edges, simplified Sobel and Prewitt, produced results sensitive to object alignment with image edges and to the binarization thresholds T. The horizontal morphological gradient operator calculated in grey scale as in Eq. (2) was chosen as the feature extractor.
$$E(x) = \left( \max_{s}(I(x)) - \min_{s}(I(x)) \right) > T \qquad (2)$$
where s is the neighbourhood of pixel x. Edges generated by this operator form distinct object contours for thresholds with a wide hysteresis. The task of detecting vehicles is defined as recognizing the presence of objects entering defined detection fields; transformed into the edge space, it becomes detecting the edges of entering objects. It is assumed that objects other than vehicles can be ignored in the analysed video sequence, as these are not relevant for road traffic control. Such a conjecture simplifies the detection problem. An object-oriented approach was chosen for specifying the scope of the processing requirements and the related data structures of the algorithm [12]. The basic objects are data structures associated with memory operations. Calculation tasks further characterize their properties. Fig. 2 presents the outline of the detection algorithm. An unmarked Petri net is used to link the data objects (places) and the processing steps (transitions) of the algorithm [13]. Analysis of the properties of the net and of possible transformations will indicate ways of optimising the processing flow. This model was chosen to emphasize processing requirements and separate them from hardware particularities, thus enabling efficient examination of implementation variants. Apparent concurrency is of special interest as it contributes to high processing throughput. Regular data objects are desirable in order to simplify memory operations. The input data object of the processing scheme is raw data from a CCTV camera. This data structure is processed in transition t1 to obtain a video frame object and synchronization signals marking the start of detection cycles. These objects are input places for transition t2, which calculates object edges. Combined with the mask and threshold data objects, occupancy factors are determined in transition t3. The factors are stored and in the next transition the detection field states are calculated. Masks define the size and position of fields; additionally, each field has an occupation threshold assigned during a configuration procedure.
Fig. 2. Detection algorithm
These values are used for calculating the state of a detection field. Masks are stored in local memory. The crucial object is the set of occupancy factors gathered over past image frames. This history buffer is used for correcting the occupancy factors. Other data objects, such as the detection field masks, pixel update frequencies and the current video frame, reside in local memory.

2.1 Field Occupancy Modelling
Occupancy factors are determined by counting edge pixels inside the masked detection fields. The values for each field are stored in a history buffer. The contents of this buffer are additionally averaged over a number of frames to eliminate random peaks. A separate buffer is used for storing the occupancy residue. The occupancy residue represents a measure of the ambient noise which distorts the determination of the detection field states. It is updated iteratively and tracks the changes in ambient illumination, camera sensitivity fluctuations and day/night switching. Similarly to the temporal median, it is calculated as in Eq. (3):

$$R_t(f) = R_{t-1}(f) + \mathrm{sgn}(O_t(f) - R_{t-1}(f)) \qquad (3)$$
where f is the field number.

2.2 Selective Adaptation
To avoid including object edges, this procedure is stopped when an object enters the detection field. This selective adaptation is triggered for each field by abrupt occupancy changes, which are determined by examining the history of occupancy values:

$$ak_t = 1 \quad \text{when} \quad \frac{|O_t - R_{t-a}|}{O_t} > k \qquad (4)$$
where k is a change steepness constant and a is a smoothing constant. The absolute value of the change allows for situations when vehicles moving on a highly edged surface, such as a cobbled road, cover that surface when entering a detection field, diminishing the number of observed edges. The selectively adapted occupancy residue value is a function of the temporal median of $R_t$ and of $ak_t$, Eq. (5):

$$R_t(f) = \begin{cases} R_{t-1}(f) & \text{when } ak_t = 1 \\ R_{t-1}(f) + \mathrm{sgn}(O_t(f) - R_{t-1}(f)) & \text{when } ak_t = 0 \end{cases} \qquad (5)$$
Fig. 3 presents occupancy factors corrected using the occupancy residue values. The detection threshold can be set to a uniform value for all illumination situations.
Fig. 3. Corrected occupancy factors: a) changing camera sensitivity, b) sun and shadows, c) night
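A software sketch of the occupancy-residue update with selective adaptation (Eqs. 3-5) and the resulting corrected occupancy factor is given below; treating the corrected factor as occupancy minus residue is an assumption based on Fig. 3, and the constants are illustrative.

```python
def sign(x):
    return (x > 0) - (x < 0)

class FieldResidue:
    """Per-field occupancy residue tracker with selective adaptation (Eqs. 3-5)."""
    def __init__(self, k=0.5, a=3):
        self.k = k                     # change steepness constant (assumed value)
        self.a = a                     # smoothing constant, frames back (assumed)
        self.residues = [0]            # R_0, R_1, ... for this field

    def update(self, occupancy):
        r_past = self.residues[max(0, len(self.residues) - 1 - self.a)]  # R_{t-a}
        abrupt = occupancy > 0 and abs(occupancy - r_past) / occupancy > self.k  # Eq. (4)
        r_prev = self.residues[-1]
        r_new = r_prev if abrupt else r_prev + sign(occupancy - r_prev)          # Eq. (5)
        self.residues.append(r_new)
        return max(occupancy - r_new, 0)   # corrected occupancy (assumed form)

# field = FieldResidue()
# corrected = [field.update(o) for o in occupancy_factors]
```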
3 FPGA Implementation

The net diagrams presented in Fig. 2 show data flows and processing tasks, imposing no constraints on the hardware implementation. One can devise a processor based unit or a configurable logic array for implementing them. Devising a processor solution requires modification of the transitions to account for processor specifics. Execution efficiency will highly depend on the size of the processor word used and on the features of the available instruction set. Analysis of the net diagrams indicates that some transitions may be performed concurrently. This feature can be effectively utilized in elaborating the configuration of logic arrays. Architectures using arrays of logic gates can be tailored to frameworks of concurrently acting components. A distinctive feature of a logic based implementation is the ability to merge operations and derive results without multiple data relocations. Although it is possible to organize asynchronous operation, in this case, when data needs to be fetched from memory, clocking is more desirable. Synchronizing operations additionally eliminates data skew which may arise when manipulating complex data entities. Prior to the hardware implementation, a software version of the algorithm was tested on a set of video streams from road traffic cameras. The software ran on a PC (3.4 GHz P4) equipped with a frame grabber and a massive hard disk. More than 50 hours of different traffic situations were processed [12].
The video streams were analysed at 1 frame per second of processor time. More than 95% of the vehicles arriving at the detection fields were accounted for. The conjecture that processing with video frames as the basic data objects facilitates real-time implementation is hard to prove. "Off the shelf" components have significantly lower performance than anticipated. Ordinary memory access times are in the range of tens of ns, which leads to data exchange times up to 10 times longer. Adding to this the processor machine cycles of a similar duration required for data processing, vehicle detection times of several hundred ms may be attained. A solution based on logic arrays is much faster but still falls behind the real-time timing target. The main source of time loss is inefficient frame-based memory access. Streamlining the data exchange is the key to successful algorithm implementation. Pixel-by-pixel processing is an alternative solution. At first it may appear infeasible. A modification of the net diagrams by redefining the data entities is adequate to model this alternative solution. The new entities refer to single pixel data and associated auxiliary table entries. Revision of Fig. 2 indicates that only the scope of the transitions changes. As a result, a new approach to organizing data exchanges is required. Video data is usually acquired using a standard video decoder IC which provides data synchronously with a 27 MHz clock. The data is coded in compliance with the ITU-R BT 656 standard. Luminance is output every second clock cycle, and this is the time interval in which transitions t1-t3 must fit. Transitions t4 and t5 must be performed during the vertical synchronization interval, when no image pixels are provided by the camera.
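A simple software model of the pixel-serial organisation described above is sketched below: the per-pixel work of transitions t1-t3 is done as each luminance sample arrives, and the per-frame work of t4-t5 is done at the end-of-frame marker. The streaming interface and the on-the-fly gradient over the previous sample in a line are simplifying assumptions, not a description of the actual FPGA pipeline.

```python
def process_stream(luma_samples, masks, thresholds, edge_threshold=30):
    """Pixel-serial model: luma_samples yields (x, y, value) per active pixel
    and the string 'EOF' at each vertical blanking interval.
    masks[f] is a set of (x, y) pixels belonging to detection field f."""
    occupancy = {f: 0 for f in masks}
    states = {}
    prev = {}                                  # previous luminance in each line
    for sample in luma_samples:
        if sample == "EOF":                    # t4-t5: finalize during blanking
            states = {f: occupancy[f] > thresholds[f] for f in masks}
            occupancy = {f: 0 for f in masks}
            prev.clear()
            continue
        x, y, value = sample                   # t1: acquisition / synchronisation
        edge = abs(value - prev.get(y, value)) > edge_threshold  # t2: 1-D gradient
        prev[y] = value
        if edge:                               # t3: update occupancy factors
            for f, mask in masks.items():
                if (x, y) in mask:
                    occupancy[f] += 1
    return states
```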
4 Conclusions and Future Work

The presented algorithm is implemented in a prototype version of a vehicle detector and is currently undergoing field tests. It was converted to a VHDL description and routed using the Xilinx ISE environment. A low cost FPGA of the Spartan-3E series was utilized for the construction. The overall logic resources account for about 185 thousand logic gates. Additionally, a USB processor is used for configuring the detection parameters and a standard video preprocessor is used for providing a stable IEC 656 video stream. It is envisaged that the FPGA structure will be optimised to reduce the number of logic gates used; as a result a smaller device could be chosen, thus reducing the hardware costs.
References 1. Wu, Y., Shen, J., Dai, M.: Traffic object detections and it’s action analysis. Pattern Recognition Letters 26, 1963–1984 (2005) 2. Michalopoulos, P.G.: Vehicle detection video through image processing: the autoscope system. IEEE Trans. Vehicular Technol. 40(1), 21–29 (1991)
3. Coifman, B., Beymer, D., McLauchlan, P., Malik, J.: A real-time computer vision system for vehicle tracking and traffic surveillance. Transportation Research Part C: Emerging Technologies 6(4), 271–278 (1998)
4. Cucchiara, R., Grana, C., Piccardi, M., Prati, A.: Detecting moving objects, ghosts, and shadows in video streams. IEEE Trans. Pattern Analysis Machine Intelligence 25, 1337–1342 (2003)
5. Manzanera, A., Richefeu, J.C.: A new motion detection algorithm based on Σ−Δ background estimation. Pattern Recognition Letters 28, 320–328 (2007)
6. Stauffer, C., Grimson, W.: Adaptive background mixture models for real-time tracking. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 246–252 (1999)
7. KaewTraKulPong, P., Bowden, R.: An Improved Adaptive Background Mixture Model for Realtime Tracking with Shadow Detection. In: Proc. 2nd European Workshop on Advanced Video Based Surveillance Systems (AVBS 2001), Video-Based Surveillance Systems: Computer Vision and Distributed Processing. Kluwer Academic Publishers, Dordrecht (2001)
8. Bhandarkar, S.M., Luo, X.: Fast and Robust Background Updating for Real-time Traffic Surveillance and Monitoring. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 278–284 (2005)
9. Yadav, R.B., Nishchal, N.K., Gupta, A.K., Rastogi, V.K.: Retrieval and classification of shape-based objects using Fourier, generic Fourier, and wavelet-Fourier descriptors technique: A comparative study. Optics and Lasers in Engineering 45, 695–708 (2007)
10. Sun, Z., Bebis, G., Miller, R.: Object detection using feature subset selection. Pattern Recognition 37, 2165–2176 (2004)
11. Project Report: Modules of Video Traffic Incidents Detectors ZIR-WD for Road Traffic Control and Surveillance. WKP-1/1.4.1/1/2005/14/14/231/2005, vol. 1-6, Katowice, Poland (2007)
12. Damasevicius, R., Stuikys, V.: Application of the object-oriented principles for hardware and embedded system design. Integration, the VLSI Journal 38, 309–339 (2004)
13. Murata, T.: Petri Nets: Properties, Analysis and Applications. Proceedings of the IEEE 77, 541–580 (1989)
Iris Recognition
Ryszard S. Choraś
University of Technology & Life Sciences, 85-796 Bydgoszcz, S. Kaliskiego 7, Poland
[email protected]
Summary. In this paper we present an iris recognition algorithm based on ROI iris images, Gabor filters and texture features derived from Haralick's approach. The proposed system includes three modules: image preprocessing, feature extraction and recognition.
1 Introduction
Iris texture patterns are believed to be different for each person, and even for the two eyes of the same person. It is also claimed that, for a given person, the iris pattern changes little after youth. The iris (see Fig. 1) is the coloured portion of the eye that surrounds the pupil. Its combination of pits, striations, filaments, rings, dark spots and freckles makes for a very accurate means of biometric identification [6]. Its uniqueness is such that even the left and right eyes of the same individual are very different. A major approach to iris recognition today is to generate feature vectors corresponding to individual iris images and to perform iris matching based on some distance metric [6], [11]. Over the last decade a number of researchers have worked on iris recognition. According to the iris features utilized, these algorithms can be grouped into four main categories.
1. Phase-based methods: The phase is chosen as an iris feature because in 1981 Oppenheim and Lim demonstrated the importance of phase for the perception of visual features. Regarding multiscale two-dimensional (2-D) Gabor wavelets as the carrier waves, Daugman [2], [3] extracted phase measures as the iris feature. The phase is coarsely quantized to four values and the iris code is 256 bytes long. The dissimilarity between the input iris image and the registered template can then be easily determined
Fig. 1. The iris
by the Hamming distance between their IrisCodes. Because phase congruency is a robust local image feature and can be approximated by a local energy model, Huang et al. [5] adopted a bank of Log-Gabor filters to represent local orientation characteristics of the iris.
2. Zero-crossings representation [11], [13]: The zero-crossings of the wavelet transform provide meaningful information about image structures [27]. Boles and Boashash [1] calculated only the zero-crossings of a one-dimensional wavelet transform over concentric circles on the iris; two different dissimilarity functions were then employed for matching. In [13], although the Haar wavelet frame decomposition was used to extract the local features of the iris pattern, the result is in essence a kind of zero-crossing description, because the wavelet frame coefficients were finally encoded with respect to the threshold zero. Unlike other iris recognition methods, this method [13] used the geometric moments of the decomposition result as an index to select a set of candidates in the database for further fine matching.
3. Texture analysis [14], [19]: A random iris pattern can naturally be seen as texture, so many well-developed texture analysis methods can be adapted to recognize the iris. Gabor filters are used to extract iris features; derived from Gabor filters, a bank of circular symmetric filters was designed to capture the discriminating information along the angular direction of the iris image [16], [17]. Wildes et al. [17], [18] represented the iris pattern using a four-level Laplacian pyramid, and the quality of matching was determined by the normalized correlation between the acquired iris image and the stored template. Park et al. [11] decomposed the iris image into eight directional subband outputs using a directional filter bank. Lim et al. [22] applied a 2-D Haar wavelet transform four times to decompose the iris image and constructed a compact code from the high-frequency coefficients.
4. Local intensity variation [5], [9]: For a transient signal, the local variations denote its most important properties. In [14], Tan et al. characterized the local intensity variations by Gaussian–Hermite moments.
Fig. 2. Typical iris recognition stages
Typical iris recognition systems share the common structure illustrated in Figure 2. The initial stage deals with iris segmentation, which consists in localizing the iris inner (pupillary) and outer (scleric) borders. In order to compensate for the varying size of the captured iris, it is common to translate the segmented iris region, represented in the Cartesian coordinate system, to a fixed-length, dimensionless polar coordinate system. The next stage is feature extraction. In the final stage a comparison between iris features is made, producing a numeric dissimilarity value. Robust representations for iris recognition must be invariant to changes in the size, position and orientation of the patterns. This means that the representation of the iris data must be invariant to changes in the distance between the eye and the capturing device, in the camera optical magnification factor and in the iris orientation. As described in [2], invariance to all of these factors can be achieved by translating the captured data to a doubly dimensionless pseudo-polar coordinate system. Formally, each pixel of the iris, regardless of its size and of pupillary dilation, is assigned a pair of real coordinates (r, θ), where r lies in the unit interval [0, 1] and θ is an angle in [0, 2π]. The remapping of the iris image I(x, y) from raw Cartesian coordinates (x, y) to the dimensionless non-concentric polar coordinate system (r, θ) can be represented as

I(x(r, θ), y(r, θ)) → I(r, θ)   (1)
where x(r, θ) and y(r, θ) are defined as linear combinations of both the set of pupillary boundary points (x_p(θ), y_p(θ)) and the set of limbus boundary points along the outer perimeter of the iris (x_s(θ), y_s(θ)) bordering the sclera:

x(r, θ) = (1 − r) · x_p(θ) + r · x_s(θ)
y(r, θ) = (1 − r) · y_p(θ) + r · y_s(θ)   (2)

Fig. 3. Transformed region
Iris edge point detection means finding points on the inner and outer boundaries of the iris. First the coarse centre of the pupil is found in the binarized image; since the intensity of the pupil region is the lowest in the whole image, the coarse pupil centre can be detected directly from the binary image. The remapping is done so that the transformed image is a rectangle of fixed dimensions, typically 48 × 448 as in Daugman [2] (Fig. 3). Because most irises are partially occluded by the upper and lower eyelids, the iris is divided into two rectangular (Fig. 4a) or two angular sectors (Fig. 4b) of the same size. These blocks of interest (ROI) are isolated from the normalized iris image.
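To make the remapping of Eqs. (1)–(2) concrete, a minimal sketch is given below (Python with NumPy). The circular boundary model, the example image and the circle parameters are illustrative assumptions; only the 48 × 448 output size and the linear pupil-to-sclera interpolation come from the text.

```python
import numpy as np

def normalize_iris(img, pupil, sclera, out_h=48, out_w=448):
    """Daugman-style rubber-sheet remapping, Eqs. (1)-(2).

    img    : 2-D grey-level eye image (NumPy array)
    pupil  : (xc, yc, radius) of the inner (pupillary) boundary (assumed circular)
    sclera : (xc, yc, radius) of the outer (scleric) boundary (assumed circular)
    Returns an out_h x out_w strip I(r, theta).
    """
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    r = np.linspace(0, 1, out_h)

    # Boundary points for every angle theta
    xp = pupil[0] + pupil[2] * np.cos(theta)
    yp = pupil[1] + pupil[2] * np.sin(theta)
    xs = sclera[0] + sclera[2] * np.cos(theta)
    ys = sclera[1] + sclera[2] * np.sin(theta)

    # Linear combination of pupillary and limbus boundary points, Eq. (2)
    x = (1 - r)[:, None] * xp[None, :] + r[:, None] * xs[None, :]
    y = (1 - r)[:, None] * yp[None, :] + r[:, None] * ys[None, :]

    # Nearest-neighbour sampling of the original image, Eq. (1)
    xi = np.clip(np.rint(x).astype(int), 0, img.shape[1] - 1)
    yi = np.clip(np.rint(y).astype(int), 0, img.shape[0] - 1)
    return img[yi, xi]

# Example with a synthetic image and assumed circle parameters
eye = np.random.randint(0, 256, (280, 320)).astype(np.uint8)
strip = normalize_iris(eye, pupil=(160, 140, 30), sclera=(160, 140, 100))
print(strip.shape)  # (48, 448)
```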
2 Feature Extraction
The two sectors from Fig. 4 are transformed into normalized rectangular blocks, each of size 32 × 128, to achieve size-independent iris recognition. Our iris recognition system uses features based on Gabor function analysis and texture features based on Haralick's approach.
Fig. 4. The iris ROI
Fig. 5. Localization of blocks
2.1 Gabor Features
Feature extraction consists in convolving the image with complex Gabor filters. As a product of this operation, complex coefficients are computed; in order to obtain the iris signature, these coefficients are evaluated and coded. The two-dimensional Gabor filter is defined as

g(x, y) = (1 / (2π σ_x σ_y)) · exp[ −(1/2) ( x̄²/σ_x² + ȳ²/σ_y² ) ] · exp[ j2πW x̄ ]   (3)

where

( x̄, ȳ )ᵀ = [ cos θ  sin θ ; −sin θ  cos θ ] · ( x, y )ᵀ,   j = √−1,
and σ_x and σ_y are the scaling parameters of the filter, W is the radial frequency of the sinusoid and θ ∈ [0, π] specifies the orientation of the Gabor filter. The normalized iris block images (Fig. 4) are divided into two stripes, and each stripe into K × L blocks. The size of each block is k × l (k = l = 16). The localization of the blocks is shown in Fig. 5. Each block is filtered by

Gab(x, y, α) = Σ_{x−k/2}^{x+k/2} Σ_{y−k/2}^{y+k/2} I(x, y) · g(x, y)   (4)
The orientation angles of this set of Gabor filters are

α_i = iπ/4,   i = 0, 1, 2, 3   (5)
Fig. 6. Original block iris image (a) and real part of Gab(x, y, αi ) for αi = 0◦ (b), αi = 45◦ (c), αi = 90◦ (d), αi = 135◦ (e)
Fig. 7. Iris Code
The magnitudes of the Gabor filter responses are represented by three moments:

μ(α, σ_x, σ_y) = (1/XY) Σ_{x=1}^{X} Σ_{y=1}^{Y} G(x, y, α)   (6)

std(α, σ_x, σ_y) = √( Σ_{x=1}^{X} Σ_{y=1}^{Y} ( |G(x, y, α)| − μ(α, σ_x, σ_y) )² )   (7)

Skew = (1/XY) Σ_{x=1}^{X} Σ_{y=1}^{Y} ( ( G(x, y, α) − μ(α, σ_x, σ_y) ) / std(α, σ_x, σ_y) )³   (8)
The feature vector is constructed using μ(α, σ_x, σ_y), std(α, σ_x, σ_y) and Skew as feature components. To encode the iris we used the real part of (4):

Code(x, y) = 1  if Re(Gab(x, y, α_i)) ≥ th
Code(x, y) = 0  if Re(Gab(x, y, α_i)) < th   (9)
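A minimal sketch of the block-wise Gabor filtering, moment computation and coding of Eqs. (3)–(9) is shown below (Python with NumPy/SciPy). The values of σ_x = σ_y, W, the threshold th and the test block are assumptions for illustration, since the paper does not report them.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=16, sigma=4.0, W=0.1, theta=0.0):
    """Complex 2-D Gabor filter of Eq. (3); sigma_x = sigma_y = sigma and W are assumed."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xb = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinate x-bar
    yb = -x * np.sin(theta) + y * np.cos(theta)      # rotated coordinate y-bar
    envelope = np.exp(-0.5 * (xb ** 2 + yb ** 2) / sigma ** 2)
    carrier = np.exp(1j * 2 * np.pi * W * xb)
    return envelope * carrier / (2 * np.pi * sigma ** 2)

def gabor_block_features(block, th=0.0):
    """Moments of Eqs. (6)-(8) and binary code of Eq. (9) for one k x k block."""
    feats, code = [], []
    for i in range(4):                               # orientations i*pi/4, Eq. (5)
        G = convolve2d(block, gabor_kernel(theta=i * np.pi / 4), mode='same')
        mag = np.abs(G)
        mu = mag.mean()                              # Eq. (6)
        std = np.sqrt(((mag - mu) ** 2).sum())       # Eq. (7)
        skew = (((mag - mu) / (std + 1e-12)) ** 3).mean()   # Eq. (8), on magnitudes
        feats += [mu, std, skew]
        code.append((G.real >= th).astype(np.uint8).ravel())  # Eq. (9)
    return np.array(feats), np.concatenate(code)

block = np.random.rand(16, 16)                       # one ROI block (illustrative)
features, iris_code = gabor_block_features(block)
print(features.shape, iris_code.shape)               # (12,) (1024,)
```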
The resulting iris binary Code can be stored as a personal identity feature.

2.2 Haralick's Texture Features
The iris feature information is extracted based on Haralick's approach. The co-occurrence matrices P_{δ,θ}(x, y) are bi-dimensional representations showing the spatial organization of the grey levels in an image. They represent a bi-dimensional histogram of the grey levels, where a fixed spatial relation separates pairs of pixels, defining the direction and distance (δ, θ) from a reference pixel to its neighbour.

Table 1. Texture parameters of the iris image (Left ROI)

Parameter   θ            δ = 1
ASM         0 and 180    0.012
            90 and 270   0.009
Con         0 and 180    911.819
            90           1413.594
            270          2891.061
Corr        0 and 180    3.182E-4
            90           2.791E-4
            270          1.729E-4
IDM         0 and 180    0.262
            90 and 270   0.161
E           0 and 180    7.302
            90           7.791
            270          7.664

The features are as follows (a short computational sketch is given after the list):
1. Second Angular Moment:
   SAM = Σ_{x=1}^{k} Σ_{y=1}^{l} [P_{δ,θ}(x, y)]²   (10)

2. Contrast:
   Con = Σ_{x=1}^{k} Σ_{y=1}^{l} (x − y)² P_{δ,θ}(x, y)   (11)

3. Correlation:
   Corr = ( Σ_{x=1}^{k} Σ_{y=1}^{l} [ x · y · P_{δ,θ}(x, y) ] − μ_x μ_y ) / (σ_x σ_y)   (12)

4. Inverse Differential Moment:
   IDM = Σ_{x=1}^{k} Σ_{y=1}^{l} P_{δ,θ}(x, y) / (1 + (x − y)²)   (13)

5. Entropy:
   E = − Σ_{x=1}^{k} Σ_{y=1}^{l} P_{δ,θ}(x, y) log P_{δ,θ}(x, y)   (14)
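As a brief illustration of Eqs. (10)–(14), the sketch below (Python with NumPy) builds a normalized co-occurrence matrix for one displacement (δ, θ) and computes the five texture parameters; the number of grey levels and the example ROI are assumed values.

```python
import numpy as np

def cooccurrence(img, dx, dy, levels=8):
    """Normalized grey-level co-occurrence matrix P for displacement (dx, dy)."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize grey levels
    P = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P / P.sum()

def haralick_features(P):
    """SAM, Con, Corr, IDM and E of Eqs. (10)-(14)."""
    i, j = np.indices(P.shape)
    mu_x, mu_y = (i * P).sum(), (j * P).sum()
    sd_x = np.sqrt(((i - mu_x) ** 2 * P).sum())
    sd_y = np.sqrt(((j - mu_y) ** 2 * P).sum())
    sam = (P ** 2).sum()                                        # Eq. (10)
    con = ((i - j) ** 2 * P).sum()                              # Eq. (11)
    corr = ((i * j * P).sum() - mu_x * mu_y) / (sd_x * sd_y)    # Eq. (12)
    idm = (P / (1.0 + (i - j) ** 2)).sum()                      # Eq. (13)
    ent = -(P[P > 0] * np.log(P[P > 0])).sum()                  # Eq. (14)
    return sam, con, corr, idm, ent

roi = np.random.randint(0, 256, (32, 128))     # normalized iris ROI block (illustrative)
P = cooccurrence(roi, dx=1, dy=0)              # delta = 1, theta = 0 degrees
print(haralick_features(P))
```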
3 Matching and Conclusion
The matching algorithm finds the proximity of two irises by calculating the average Euclidean distance between the two feature vectors. A new method has been presented for iris localization and recognition based on ROI detection, Gabor features and texture features derived from Haralick's approach. This paper analyses the details of the proposed method. Experimental results have demonstrated that this approach is promising for improving iris-based personal identification.
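A minimal sketch of this matching rule is given below (Python with NumPy). The interpretation of the average Euclidean distance as the Euclidean norm divided by the vector length, as well as the decision threshold, are assumptions for illustration.

```python
import numpy as np

def average_euclidean_distance(f1, f2):
    """Proximity of two iris feature vectors (lower = more similar).
    One common reading of 'average Euclidean distance' (an assumption)."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    return np.linalg.norm(f1 - f2) / f1.size

# Illustrative decision with an assumed threshold
enrolled = np.random.rand(12)
probe = enrolled + 0.01 * np.random.randn(12)
THRESHOLD = 0.05  # hypothetical acceptance threshold
print("match" if average_euclidean_distance(enrolled, probe) < THRESHOLD else "non-match")
```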
References
1. Boles, W.W., Boashash, B.: A human identification technique using images of the iris and wavelet transform. IEEE Transactions on Signal Processing 46(4), 1185–1188 (1998)
2. Daugman, J.G.: High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence 15(11), 1148–1161 (1993)
3. Daugman, J.G.: Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. IEEE Trans. Acoust., Speech, Signal Processing 36, 1169–1179 (1988)
4. Gabor, D.: Theory of communication. J. Inst. Elect. Eng. 93, 429–459 (1946)
5. Huang, J., Ma, L., Wang, Y., Tan, T.: Iris recognition based on local orientation description. In: Proc. 6th Asian Conf. Computer Vision, vol. II, pp. 954–959 (2004)
6. Jain, A.K., Bolle, R.M., Pankanti, S. (eds.): Biometrics: Personal Identification in Networked Society. Kluwer, Norwell (1999)
7. Ma, L., Wang, Y., Tan, T.: Iris recognition using circular symmetric filters. In: Proceedings of the 16th International Conference on Pattern Recognition, vol. 2, pp. 414–417 (2002)
8. Ma, L., Tan, T., Wang, Y., Zhang, D.: Personal identification based on iris texture analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(12), 1519–1533 (2003)
9. Ma, L., Tan, T., Zhang, D., Wang, Y.: Local intensity variation analysis for iris recognition. Pattern Recognition 37(6), 1287–1298 (2005)
10. Ma, L., Wang, Y., Tan, T.: Iris recognition based on multichannel Gabor filtering. In: Proc. 5th Asian Conf. Computer Vision, vol. I, pp. 279–283 (2002)
11. Park, C., Lee, J., Smith, M., Park, K.: Iris-based personal authentication using a normalized directional energy feature. In: Proc. 4th Int. Conf. Audio- and Video-Based Biometric Person Authentication, pp. 224–232 (2003)
12. Proença, H., Alexandre, L.A.: UBIRIS: A noisy iris image database. In: Roli, F., Vitulano, S. (eds.) ICIAP 2005. LNCS, vol. 3617, pp. 970–977. Springer, Heidelberg (2005)
13. Proença, H., Alexandre, L.A.: Iris segmentation methodology for non-cooperative iris recognition. IEE Proc. Vision, Image & Signal Processing 153(2), 199–205 (2006)
14. Sanchez-Reillo, R., Sanchez-Avila, C.: Iris recognition with low template size. In: Bigun, J., Smeraldi, F. (eds.) AVBPA 2001. LNCS, vol. 2091, pp. 324–329. Springer, Heidelberg (2001)
15. Sun, Z., Wang, Y., Tan, T., Cui, J.: Improving iris recognition accuracy via cascaded classifiers. IEEE Trans. on Systems, Man, and Cybernetics-Part C 35(3), 435–441 (2005)
16. Tisse, C., Martin, L., Torres, L., Robert, M.: Person identification technique using human iris recognition. In: Proc. Vision Interface, pp. 294–299 (2002)
17. Wildes, R.P.: Iris recognition: an emerging biometric technology. Proceedings of the IEEE 85(9), 1348–1363 (1997)
18. Wildes, R.P., Asmuth, J.C., Green, G.L., Hsu, S.C., Kolczynski, R.J., Matey, J.R., McBride, S.E.: A machine vision system for iris recognition. Mach. Vision Applicat. 9, 1–8 (1996)
19. Yuan, X., Shi, P.: A non-linear normalization model for iris recognition. In: Li, S.Z., Sun, Z., Tan, T., Pankanti, S., Chollet, G., Zhang, D. (eds.) IWBRS 2005. LNCS, vol. 3781, pp. 135–141. Springer, Heidelberg (2005)
A Soft Computing System for Modelling the Manufacture of Steel Components
Andres Bustillo¹, Javier Sedano², Leticia Curiel¹, José R. Villar³, and Emilio Corchado¹
¹ Department of Civil Engineering, University of Burgos, Burgos, Spain
  {abustillo,lcuriel,escorchado}@ubu.es
² Department of Electromechanical Engineering, University of Burgos, Burgos, Spain
  [email protected]
³ Department of Computer Science, University of Oviedo, Spain
  [email protected]
Summary. In this paper we present a soft computing system developed to optimize the laser milling manufacture of high-value steel components, a relatively new and interesting industrial technique. This multidisciplinary study is based on the application of neural projection models in conjunction with identification systems, in order to find the optimal operating conditions for this industrial problem. The data used in this case study were captured by sensors on a laser milling centre, in the frame of a machine tool that manufactures steel components such as high-value molds and dies. The presented model is based on a two-phase application. The first phase uses a neural projection model capable of determining whether the collected data is informative enough, based on the existence of internal patterns. The second phase focuses on identifying a model for the laser milling process based on low-order models such as black-box ones. The whole system is capable of approximating the optimal form of the model. Finally, it is shown that the Box-Jenkins algorithm, which calculates the function of a linear system from its input and output samples, is the most appropriate model to control this industrial task for the case of steel components.
1 Introduction
Laser milling, in general, consists of the controlled evaporation of waste material due to its interaction with high-energy pulsed laser beams. The operator of a conventional milling machine is aware at all times of the amount of waste material removed, but the same cannot be said of a laser milling machine. A soft computing model that could predict the exact amount of material that each laser pulse is able to remove would therefore contribute to the industrial use and development of this new technology. In this case we focus on laser milling of steel components. It is an especially interesting industrial process, due to the broad use of steel as a base material for different kinds of manufacturing tools, such as molds and dies. One of the applications of this technology to these industrial tools is the deep indelible engraving of serial numbers or barcodes for quality
control and security reasons in the automotive industry [1]. The soft computing model proposed in this paper is able to optimize the manufacturing process and to control laser milling to the level of accuracy, productivity and surface quality required for the manufacture of deep indelible engravings. The rest of the paper is organized as follows. Following the introduction, a two-phase process is described to identify the optimal conditions for the industrial laser milling of steel components. The case study that outlines the practical application of the model is then presented. Finally, several modelling systems are applied and compared in order to select the optimal model, before ending with some conclusions and future work.
2 An Industrial Process for Steel Components Modelling

2.1 Analyse the Internal Structure of the Data Set
Cooperative Maximum-Likelihood Hebbian Learning (CMLHL) [2] is used to analyse the internal structure of the data set describing the manufactured steel component, in order to establish whether it is sufficiently informative. CMLHL is an Exploratory Projection Pursuit (EPP) method [3, 4, 5]. EPP provides a linear projection of a data set, but it projects the data onto a set of basis vectors which help to reveal the most interesting data structures; interestingness is usually defined in terms of how far the distribution is from the Gaussian distribution [6]. Maximum-Likelihood Hebbian Learning (MLHL) [5, 7] identifies interestingness by maximising the probability of the residuals under specific non-Gaussian probability density functions. An extended version is the CMLHL model [2], which is based on MLHL [5, 7] but adds lateral connections [8, 2] derived from the Rectified Gaussian Distribution [6]. Considering an N-dimensional input vector (x̄) and an M-dimensional output vector (ȳ), with W_ij being the weight linking input j to output i, CMLHL can be expressed [9, 10] as:

Feed-forward step:   y_i = Σ_{j=1}^{N} W_ij · x_j,  ∀i   (1)

Lateral activation passing:   y_i(t + 1) = [y_i(t) + τ(b − A·y)]⁺   (2)

Feedback step:   e_j = x_j − Σ_{i=1}^{M} W_ij · y_i,  ∀j   (3)

Weight change:   ΔW_ij = η · y_i · sign(e_j) · |e_j|^(p−1)   (4)

where η is the learning rate, τ is the strength of the lateral connections, b the bias parameter, p a parameter related to the energy function [2, 5, 7] and A a symmetric matrix used to modify the response to the data [2]. The effect of this matrix is based on the relation between the distances separating the output neurons.
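A compact sketch of the update rules (1)–(4) is given below (Python with NumPy). The learning rate, lateral strength τ, bias b, exponent p, the matrix A and the synthetic data are illustrative assumptions; the paper does not report the values actually used.

```python
import numpy as np

def cmlhl(X, m=2, n_iter=500, eta=0.01, tau=0.1, b=1.0, p=1.5, seed=0):
    """Minimal CMLHL sketch following Eqs. (1)-(4); all parameters are assumed."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.normal(scale=0.1, size=(m, n))          # output x input weight matrix
    A = np.eye(m)                                   # symmetric lateral-response matrix (assumed)
    for _ in range(n_iter):
        x = X[rng.integers(len(X))]
        y = W @ x                                   # feed-forward step, Eq. (1)
        y = np.maximum(y + tau * (b - A @ y), 0)    # lateral activation passing, Eq. (2)
        e = x - W.T @ y                             # feedback step, Eq. (3)
        W += eta * np.outer(y, np.sign(e) * np.abs(e) ** (p - 1))   # weight change, Eq. (4)
    return X @ W.T                                  # projection of the whole data set

data = np.random.rand(100, 7)                       # e.g. 7 process variables (illustrative)
projection = cmlhl(data)
print(projection.shape)                             # (100, 2)
```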
2.2 The Identification Criterion
The identification criterion evaluates which of the group of candidate models is best adapted to, and best describes, the data sets collected in the experiment; i.e., given a model M(θ*), its prediction error may be defined by equation (5). A good model [9] is one that makes the best predictions and produces the smallest errors when compared against the observed data. In other words, for a given data group Z^t, the ideal model calculates the prediction error ε(t, θ_N), equation (5), in such a way that for t = N a particular θ̂* (estimated parametric vector) is selected so that the prediction error ε(t, θ̂_N) for t = 1, 2, ..., N is made as small as possible:

ε(t, θ*) = y(t) − ŷ(t|θ*)   (5)
The estimated parametric vector θ̂ that minimizes the error, equation (8), is obtained from the minimization of the error function (6). This is achieved by applying the least-squares criterion for linear regression, i.e., by applying the quadratic norm ℓ(ε) = ε²/2, which yields equation (7):

V_N(θ, Z^N) = (1/N) Σ_{t=1}^{N} ℓ(ε_F(t, θ))   (6)

V_N(θ, Z^N) = (1/N) Σ_{t=1}^{N} (1/2) (y(t) − ŷ(t|θ))²   (7)

θ̂ = θ̂_N(Z^N) = arg min_{θ∈D_M} V_N(θ, Z^N)   (8)
The methodology of black-box structures has the advantage of requiring very few explicit assumptions regarding the pattern to be identified, but that in turn makes it difficult to quantify the model that is obtained. Discrete linear models may be represented as the union of a deterministic and a stochastic part, equation (9); the term e(t) (a white noise signal) includes the modelling errors and is associated with a series of random variables of zero mean and variance λ:

y(t) = G(q⁻¹) · u(t) + H(q⁻¹) · e(t)   (9)
The structure of a black-box model depends on the way in which the noise H(q⁻¹) is modelled; if this value is 1, then the OE (Output Error) model is applicable, whereas if it is different from one a wide range of models may be applicable, one of the most common being the BJ (Box-Jenkins) structure. This structure may be represented in the form of a general model, where B(q⁻¹) is a polynomial of degree n_b which can incorporate a pure delay n_k in the inputs, and A(q⁻¹), C(q⁻¹), D(q⁻¹) and F(q⁻¹) are autoregressive polynomials of orders n_a, n_c, n_d, n_f, respectively, equation (10). Likewise, it is possible to use a predictor expression for the one-step-ahead prediction of the output ŷ(t|θ), equation (11). The polynomials used in (10) are B, F for the OE model and B, F, C, D for the BJ model, respectively:

A(q⁻¹) · y(t) = q^(−n_k) · (B(q⁻¹)/F(q⁻¹)) · u(t) + (C(q⁻¹)/D(q⁻¹)) · e(t)   (10)

ŷ(t|θ) = (D(q⁻¹)·B(q⁻¹))/(C(q⁻¹)·F(q⁻¹)) · u(t) + [1 − (D(q⁻¹)·A(q⁻¹))/C(q⁻¹)] · y(t)   (11)
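As a rough illustration of how such a structure can be fitted with a prediction-error method, the sketch below (Python with NumPy/SciPy) estimates the B and F polynomials of a single-input OE model by minimizing the sum of squared simulation errors. The synthetic data, model orders and single-input restriction are assumptions, not the models reported in this paper.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.optimize import least_squares

def oe_simulate(theta, u, nb=1, nf=2, nk=1):
    """Simulated output of a single-input OE model y = q^-nk * (B/F) * u."""
    b = np.concatenate([np.zeros(nk), theta[:nb]])        # pure delay + B coefficients
    f = np.concatenate([[1.0], theta[nb:nb + nf]])        # monic F polynomial
    return lfilter(b, f, u)

def oe_residuals(theta, u, y):
    return y - oe_simulate(theta, u)                      # prediction errors, Eq. (5)

# Synthetic example: data generated by a known OE system plus measurement noise
rng = np.random.default_rng(0)
u = rng.standard_normal(400)
y = lfilter([0.0, 0.5], [1.0, -0.8, 0.15], u) + 0.05 * rng.standard_normal(400)

theta0 = np.zeros(3)                                      # initial guess for [b1, f1, f2]
fit = least_squares(oe_residuals, theta0, args=(u, y))    # minimizes V_N, Eqs. (6)-(8)
print("estimated parameters:", np.round(fit.x, 3))        # close to [0.5, -0.8, 0.15]
```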
Procedure for Modelling the Laser Milling Process. The identification procedure is carried out in two fundamental stages: first a pre-analytical and then an analytical stage, which assist in determining the parameters of the identification process and in estimating the model. The pre-analysis is run to establish the identification techniques [9, 10, 11, 12], the selection of the model structure and the estimation of its order [13, 14], the identification criterion and the search methods that minimize it, and the specific parametric selection for each type of model structure. A second, validation stage ensures that the selected model meets the necessary conditions for estimation and prediction. Three tests were performed to validate the model: residual analysis ε(t, θ̂(t)), by means of a correlation test between inputs, residuals and their combinations; the final prediction error (FPE) estimate, as explained by Akaike [15]; and the graphical comparison between desired outputs and the outcome of the models through simulation one (or k) steps ahead.
3 Modelling Steel Components: An Industrial Task
This research studies and identifies the optimal conditions for the laser milling of deep indelible engravings of serial numbers or barcodes on steel components, using a commercial Nd:YAG laser with a pulse length of 10 μs. Three parameters of the laser process can be controlled: laser power (u1), laser milling speed (u2) and laser pulse frequency (u3). The laser is integrated in a laser milling centre (DMG Lasertec 40). To simplify this industrial problem, a test piece was designed and used in all of the laser milling experiments. It consisted of an inverted, truncated pyramid profile that had to be laser milled on a flat metallic piece of steel. The truncated pyramid had angles of 135° and a depth of 1 mm; as the optimized parameters for the laser milling of steel were not known at that point in time, both quantities showed errors, which are referred to as the angle error (y1) and the depth error (y2). A third parameter to be considered is the removal rate, that is, the volume of steel removed by the laser per minute (y3). The last parameter is the surface roughness of the milled piece (y4), measured on the flat surface of the truncated pyramid. These four variables have to be optimized, because the industrial process requires a precise geometrical shape, the shortest manufacturing time and a good surface roughness of the
Table 1. Variables, units and values used during the experiments. Output y(t), input u(t).

Variable (Units)                                                      Range
Angle error of the test piece, y1(t)                                  -1 to 1
Depth error of the test piece, y2(t)                                  -1 to 1
Material removal rate (mm³/min), y3(t)                                0.02 to 0.75
Surface roughness of the test piece (μm), y4(t)                       0.32 to 4.38
Laser power in percent of the maximum power of the laser (%), u1(t)   20 to 100
Laser milling speed (mm/s), u2(t)                                     200 to 800
Laser pulse frequency (kHz), u3(t)                                    20 to 100
piece. We applied different modelling systems to achieve the optimal conditions for these four parameters, although for demonstration purposes we only show one of them. The experimental design was performed on a Taguchi L25 with 3 input parameters and 5 levels, so as to include the entire range of laser milling settings that are controllable by the operator. Table 1 summarizes the input and output variables of the experiment which define the case study. After the laser milling, the actual inverted pyramid depth, wall angle and surface roughness (y4) of the bottom surface were measured using optical devices. The material removal rate (y3) was calculated from the whole time required for the manufacture of each sample and the actual volume of removed material. The measured wall angle and pyramid depth were compared with the nominal values in the CAD model, thereby obtaining the two errors (y1 and y2). The test piece and the prototype have been described in detail beforehand [16].
Fig. 1. The first of the two projections obtained by CMLHL
3.1 Application of the Two Phases of the Modelling System
The study has been organized into two phases or steps:
1. Analysis of the internal structure of the data set, based on the application of several unsupervised connectionist models; CMLHL projections are used.
2. Application of several identification models in order to find the one that best defines the dynamics of the laser milling process.
Step 1. The CMLHL projections: Fig. 1 shows the results obtained by means of CMLHL projections. This model is able to identify five different clusters, ordered mainly by power. After studying each cluster, a second classification based on speed and frequency can be noted, as shown in Fig. 1. All this indicates that the data analysed is sufficiently informative.
Step 2. Modelling the laser milling process: Fig. 2 shows the result for output y1(t), the angle error, for the different models. The figures are graphic representations of the results for the OE and BJ models in relation to the polynomial order and the delay in the inputs; various delays for all inputs and various polynomial orders [nb1 nb2 nb3 nc nd nf nk1 nk2 nk3] were considered
Fig. 2. Output response of two different models: the OE (upper row) and BJ (lower row) methods. The left-column figures represent the simulation of the angle error output y1(t), while the right-column figures correspond to the one-step-ahead prediction of the angle error output y1(t). The validation data set was not used for the estimation of the model. The order of the structure of the model is [1 1 1 2 2 2 1 1 1]. The solid line represents the true measurements and the dotted line the estimated output.
Table 2. Indicator values for several proposed models of the angle error

1. Black-box OE model with nb1 = 2, nb2 = 1, nb3 = 1, nc = 2, nd = 2, nf = 2, nk1 = 1, nk2 = 1, nk3 = 1. The model is estimated using the prediction error method; the degree of the model is selected with the best AIC criterion (the structure that minimizes AIC).
   Indexes: FIT: 44.04%, FIT1: 44.04%, FIT10: 44.04%, V: 0.02, FPE: 0.23, NSSE: 7.71e-4

2. Black-box OE model with nb1 = 1, nb2 = 1, nb3 = 1, nc = 2, nd = 2, nf = 2, nk1 = 1, nk2 = 1, nk3 = 1. The model is estimated using the prediction error method; the degree of the model is selected with the best AIC criterion.
   Indexes: FIT: 21.2%, FIT1: 21.2%, FIT10: 21.2%, V: 0.023, FPE: 0.162, NSSE: 0.0015

3. Black-box BJ model with nb1 = 2, nb2 = 1, nb3 = 1, nc = 2, nd = 2, nf = 2, nk1 = 1, nk2 = 1, nk3 = 1. The model is estimated using the prediction error method; the degree of the model is selected with the best AIC criterion.
   Indexes: FIT: 100%, FIT1: 100%, FIT10: 100%, V: 0.12, FPE: 0.27, NSSE: 2.73e-31

4. Black-box BJ model with nb1 = 1, nb2 = 1, nb3 = 1, nc = 2, nd = 2, nf = 2, nk1 = 1, nk2 = 1, nk3 = 1. The model is estimated using the prediction error method; the degree of the model is selected with the best AIC criterion.
   Indexes: FIT: 100%, FIT1: 100%, FIT10: 100%, V: 0.97, FPE: 1.75, NSSE: 4.17e-30
to arrive at the highest degree of precision, in accordance with the structure of the models that have been used; see Table 1. In Fig. 2, the X-axis shows the number of samples used in the validation of the model, while the Y-axis represents the range of the output variable. Table 2 shows a comparison of the qualities of estimation and prediction of the best models obtained, as a function of the model, the estimation method and the indexes, which are defined as follows (a short computational sketch follows the list):
• The percentage representation of the estimated model (expressed as a percentage, %) in relation to the true system: the normalized mean error computed with one-step prediction (FIT1), ten-step prediction (FIT10) or by means of simulation (FIT). The one-step and ten-step predictions (ŷ1(t|m) and ŷ10(t|m)) and the model simulation ŷ∞(t|m) are also shown.
• The loss or error function (V): the numeric value of the mean square error calculated from the estimation data set.
• The generalization error value (NSSE): the mean square error calculated with the validation data set.
• The average generalization error value (FPE): the FPE criterion calculated from the estimation data set.
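A small sketch of how such indexes can be computed from residuals is shown below (Python with NumPy). The FIT formula (normalized fit in percent) and the FPE expression V·(1 + d/N)/(1 − d/N) follow common system-identification practice and Akaike's criterion respectively; they are stated here as assumptions, since the paper does not give explicit formulas.

```python
import numpy as np

def fit_percent(y, y_hat):
    """Normalized fit in percent (100% = perfect reproduction of y)."""
    return 100.0 * (1.0 - np.linalg.norm(y - y_hat) / np.linalg.norm(y - y.mean()))

def loss_V(y, y_hat):
    """Mean square error (loss V on estimation data, NSSE on validation data)."""
    return np.mean((y - y_hat) ** 2)

def fpe(y, y_hat, n_params):
    """Akaike's final prediction error: V * (1 + d/N) / (1 - d/N)."""
    N, V = len(y), loss_V(y, y_hat)
    return V * (1 + n_params / N) / (1 - n_params / N)

# Illustrative use with synthetic estimation data
y = np.sin(np.linspace(0, 6, 50))
y_hat = y + 0.05 * np.random.default_rng(1).standard_normal(50)
print(fit_percent(y, y_hat), loss_V(y, y_hat), fpe(y, y_hat, n_params=9))
```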
From the graphical representation (Fig. 2) it can be concluded that the BJ model is capable of simulating and predicting the behaviour of the laser-milled piece for the angle error, as it meets the indicators and is capable of modelling
Table 3. Function and parameters that represent the behaviour of the laser-milled piece for the angle error. The degrees of the BJ model polynomials are nb1 = 1, nb2 = 1, nb3 = 1, nc = 2, nd = 2, nf = 2, nk1 = 1, nk2 = 1, nk3 = 1, i.e. [1 1 1 2 2 2 1 1 1].

Final polynomials:
B1(q) = 0.01269 · q⁻¹
B2(q) = 0.0004895 · q⁻¹
B3(q) = 0.01366 · q⁻¹
C(q) = 1 + 1.541 · q⁻¹ + 1.02 · q⁻²
D(q) = 1 + 1.208 · q⁻¹ + 0.3098 · q⁻²
F1(q) = 1 + 0.4094 · q⁻¹ − 0.16 · q⁻²
F2(q) = 1 − 1.678 · q⁻¹ + 0.7838 · q⁻²
F3(q) = 1 − 1.1 · q⁻¹ + 0.7671 · q⁻²
e(t) is a white noise signal with variance 0.08
more than 95% of the true measurements. The tests were performed using Matlab and the System Identification Toolbox. Table 3 shows the final BJ model.
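For illustration, the identified polynomials of Table 3 can be simulated directly. The sketch below (Python with NumPy/SciPy) reproduces the deterministic and noise parts of the three-input BJ structure of Eq. (10) with A(q) = 1; the input sequences are synthetic placeholders, not the experimental Taguchi data.

```python
import numpy as np
from scipy.signal import lfilter

# Polynomials of the identified BJ model (Table 3), in powers of q^-1
B = {1: [0.0, 0.01269], 2: [0.0, 0.0004895], 3: [0.0, 0.01366]}   # leading zero = nk = 1 delay
F = {1: [1.0, 0.4094, -0.16], 2: [1.0, -1.678, 0.7838], 3: [1.0, -1.1, 0.7671]}
C = [1.0, 1.541, 1.02]
D = [1.0, 1.208, 0.3098]

def simulate_bj(u1, u2, u3, noise_var=0.08, seed=0):
    """y(t) = sum_i B_i/F_i u_i(t) + C/D e(t), per Eq. (10) with A = 1."""
    e = np.sqrt(noise_var) * np.random.default_rng(seed).standard_normal(len(u1))
    y = np.zeros(len(u1))
    for i, u in enumerate((u1, u2, u3), start=1):
        y += lfilter(B[i], F[i], u)           # deterministic contribution of input i
    y += lfilter(C, D, e)                     # stochastic (noise) part
    return y

# Synthetic normalized input sequences (placeholders for power, speed, frequency)
n = 100
u1, u2, u3 = (np.random.default_rng(k).uniform(-1, 1, n) for k in (1, 2, 3))
print(simulate_bj(u1, u2, u3)[:5])
```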
4 Conclusions and Future Lines of Work
We have carried out an investigation to study and identify the most appropriate modelling system for the laser milling of steel components. Several methods were investigated to achieve the best practical solution to this interesting problem. The study shows that the Box-Jenkins algorithm is best adapted to this case, in terms of identifying the best conditions and predicting future circumstances. It is important to emphasize that a relevant aspect of this research lies in the use of a two-step model when modelling the laser milling process for steel components: a first step applies projection methods to establish whether the data describing the case study is sufficiently informative. As a consequence, the first phase eliminates one of the problems associated with these identification systems, namely having no prior knowledge of whether the experiment that generated the data group may be considered acceptable and presents sufficient information to identify the overall nature of the problem. Future work will focus on the study and application of this model to other kinds of materials of industrial interest, such as cast single-crystal nickel superalloys for high-pressure turbine blades, and also on the application of this model to the optimization of different but similar industrial problems, like laser cladding, laser super-polishing and laser drilling.
Acknowledgement. Thanks to the support received from the ASCAMM Technological Centre (http://www.ascamm.com), which provided the laser milling data and performed all the laser tests. The authors would especially like to thank Mr. P. Palouzie and Mr. J. Diaz for their kind-spirited and useful advice. This research has been partially supported through project BU006A08 of the JCyL and through project CIT-020000-2008-2 of the Spanish Ministry of Education and Innovation.
References
1. Wendland, J., Harrison, P.M., Henry, M., Brownell, M.: Deep Engraving of Metals for the Automotive Sector Using High Average Power Diode Pumped Solid State Lasers. In: Proceedings of the 23rd International Conference on Applications of Lasers and Electro-Optics (ICALEO 2005) (2005)
2. Corchado, E., Fyfe, C.: Connectionist Techniques for the Identification and Suppression of Interfering Underlying Factors. J. of Pattern Recognition and Artificial Intelligence 17(8), 1447–1466 (2003)
3. Diaconis, P., Freedman, D.: Asymptotics of Graphical Projections. The Annals of Statistics 12(3), 793–815 (1984)
4. Friedman, J.H., Tukey, J.W.: Projection Pursuit Algorithm for Exploratory Data-Analysis. IEEE Transactions on Computers 23(9), 881–890 (1974)
5. Corchado, E., MacDonald, D., Fyfe, C.: Maximum and Minimum Likelihood Hebbian Learning for Exploratory Projection Pursuit. Data Mining and Knowledge Discovery 8(3), 203–225 (2004)
6. Seung, H.S., Socci, N.D., Lee, D.: The Rectified Gaussian Distribution. In: Advances in Neural Information Processing Systems, vol. 10, pp. 350–356 (1998)
7. Fyfe, C., Corchado, E.: Maximum Likelihood Hebbian Rules. In: Proc. of the 10th European Symposium on Artificial Neural Networks (ESANN 2002) (2002)
8. Corchado, E., Han, Y., Fyfe, C.: Structuring Global Responses of Local Filters Using Lateral Connections. Journal of Experimental & Theoretical Artificial Intelligence 15(4), 473–487 (2003)
9. Ljung, L.: System Identification. Theory for the User. Prentice-Hall, Englewood Cliffs (1999)
10. Nørgaard, M., Ravn, O., Poulsen, N.K., Hansen, L.K.: Neural Networks for Modelling and Control of Dynamic Systems. Springer, Heidelberg (2000)
11. Nelles, O.: Nonlinear System Identification. From Classical Approaches to Neural Networks and Fuzzy Models. Springer, Heidelberg (2001)
12. Haber, R., Keviczky, L.: Nonlinear System Identification, Input-Output Modeling Approach. Part 1: Nonlinear System Parameter Estimation. Kluwer Academic Publishers, Dordrecht (1999)
13. Stoica, P., Söderström, T.: A useful parametrization for optimal experimental design. IEEE Trans. Automatic Control AC-27 (1982)
14. He, X., Asada, H.: A new method for identifying orders of input-output models for nonlinear dynamic systems. In: Proceedings of the American Control Conference (1993)
15. Akaike, H.: Fitting autoregressive models for prediction. Ann. Inst. Stat. Math. 20, 425–439 (1969)
16. Arias, G., Ciurana, J., Planta, X., Crehuet, A.: Analyzing Process Parameters that influence laser machining of hardened steel using Taguchi method. In: Proceedings of 52nd International Technical Conference, SAMPE 2007 (2007)
Author Index
Fabija´ nska, Anna
Andreasik, Jan 559 Astudillo, C´esar A. 169 Augustyniak, Piotr 365
Gabrys, Bogdan 311 Glomb, Przemyslaw 71, 103 Goszczy´ nska, Hanna 407 Grabska, Ewa 527 Grzegorek, Maria 63 Grzymala-Busse, Jerzy W. 381
Babout, Laurent 389 Baruque, Bruno 265 Blachnik, Marcin 257 Blaszczyk, Pawel 179 Bortoleto, Silvio 439 Borysewicz, Krzysztof 499 Budka, Marcin 311 Bulegon, Hugo 439 Burduk, Robert 303 Bustillo, Andres 601
Hippe, Zdzislaw S. 381 Holubowicz, Witold 577 Horoba, Krzysztof 247 Hoser, Pawel 489 Hrebie´ n, Maciej 373
Callejas, Zoraida 331 Cerva, Petr 331 Chora´s, Michal 143, 577 Chora´s, Ryszard S. 593 Corchado, Emilio 265, 601 Couprie, Michel 389 Cudek, Pawel 381 Curiel, Leticia 601 Cyganek, Boguslaw 231 de Amorim, Renato Cordeiro Dec, Stanislaw 407 Doros, Marek 407 Doroz, Rafal 569 Dulewicz, Annamonika 479 Duplaga, Mariusz 151 Dzie´ nkowski, Mariusz 35 Ebecken, Nelson 439 El-Sonbaty, Yasser 195
389
Impedovo, Donato Iwanowski, Marcin
339 543
Janaszewski, Marcin 389 Jaszczak, Pawel 479 Jedrzejczyk, Mariusz 389 Jezewski, Janusz 247 J´ o´zwiak, Rafal 151, 447 J´ o´zwik, Adam 407, 479 519 Kapuscinski, Tomasz 355 Kartaszy` nski, Rafal Henryk 423 Klepaczko, Artur 87 Kocinski, Marek 87 Kolebska, Krystyna 407 Komorowski, Dariusz 431 Koprowski, Robert 463, 471 Korbicz, J´ ozef 373 Korzynska, Anna 543
Kowalczyk, Leszek 407 Kozik, Rafal 535, 577 Koziol, Mariusz 275 Kr´ otkiewicz, Marek 135 Kulikowski, Juliusz L. 159, 319 Kuniszyk-J´ o´zkowiak, Wieslawa 35, 347 Kurzynski, Marek 285, 455 Kwasnicka, Halina 499 Lampert, Thomas A. L´ opez-C´ ozar, Ram´ on
119, 127 331
Malina, Witold 205 Materka, Andrzej 87 Michalak, Marcin 213 Mikolajczak, Pawel 423 Miszczak, Jan 407 Mokrzycki, Wojciech S. 113 Nouza, Jan 331 Nurzy´ nska, Karolina
19
O’Keefe, Simon E.M. 119, 127 Oommen, John B. 169 Ostrek, Grzegorz 447 Pamula, Wieslaw 585 Paradowski, Mariusz 499 Pears, Nick E. 127 Pi¸etka, Boguslaw D. 479 Pietraszek, Stanislaw 431 Placzek, Bartlomiej 79 Podolak, Igor T. 239 Porwik, Piotr 569 Postolski, Michal 389 Przelaskowski, Artur 151, 447 Przybyla, Tomasz 247 Przybylo, Jaromir 53 Przytulska, Malgorzata 159 Rafajlowicz, Ewaryst 27 Raniszewski, Marcin 221 Refice, Mario 339 Renk, Rafal 577 Rokita, Przemyslaw 43
Roman, Adam 239 Roman, Angelmar Constantino Romaszewski, Michal 71 Rotkiewicz, Arkadiusz 397 Saganowski, L ukasz 577 Said, Hany 195 Samko, Marek A. 113 Sas, Jerzy 509 Schmidt, Adam 95 Sedano, Javier 601 Simi´ nski, Krzysztof 11, 187 Sklinda, Katarzyna 447 Skrzypczy´ nski, Piotr 3 ´ Slusarczyk, Gra˙zyna 527 Smiatacz, Maciej 205 Smolka, El˙zbieta 35, 347 Stanczyk, Urszula 293 Stapor, Katarzyna 179 Stefa´ nczyk, Ludomir 389 Suszy´ nski, Waldemar 35 ´ Swierczy´ nski, Zbigniew 43 ´ Swietlicka, Izabela 347 Szczepaniak, Piotr S. 397 Tarlowski, Rafal 143 Tomczyk, Arkadiusz 397 Villar, Jos´e R.
601
Wafa, Moualhi 415 Walecki, Jerzy 447 Wierzbicka, Diana 159 Wojtkiewicz, Krystian 135 Wolczowski, Andrzej 455 Woloszynski, Tomasz 285 Wolski, Cyprian 397 Wozniak, Michal 275 Wrobel, Krzysztof 569 Wrobel, Zygmunt 463, 471 Wysocki, Marian 355 Zagrouba, Ezzeddine 415 Zalewska, Ewa 407 Zolnierek, Andrzej 455, 509
439