Ingo Glöckner Fuzzy Quantifiers
Studies in Fuzziness and Soft Computing, Volume 193 Editor-in-chief Prof. Janusz Kacprzyk Systems Research Institute Polish Academy of Sciences ul. Newelska 6 01-447 Warsaw Poland E-mail:
[email protected] Further volumes of this series can be found on our homepage: springer.com
Ingo Glöckner
Fuzzy Quantifiers A Computational Theory
Dr. Ingo Glöckner FernUniversität in Hagen LG Prakt. Inf. VII Informatikzentrum Universitätsstr. 1 58084 Hagen Germany E-mail:
[email protected]
Library of Congress Control Number: 2005936355
ISSN print edition: 1434-9922
ISSN electronic edition: 1860-0808
ISBN-10 3-540-29634-4 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-29634-8 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media
springer.com

© Springer-Verlag Berlin Heidelberg 2006
Printed in The Netherlands

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: by the author and TechBooks using a Springer LaTeX macro package
Printed on acid-free paper
Preface
From a linguistic perspective, it is quantification which makes all the difference between "having no dollars" and "having a lot of dollars". And it is the meaning of the quantifier "most" which eventually decides whether "Most Americans voted Kerry" or "Most Americans voted Bush" (as it stands). Natural language (NL) quantifiers like "all", "almost all", "many" etc. serve an important purpose because they permit us to speak about properties of collections, as opposed to describing specific individuals only; in technical terms, quantifiers are a 'second-order' construct. Thus the quantifying statement "Most Americans voted Bush" asserts that the set of voters of George W. Bush comprises the majority of Americans, while "Bush sneezes" only tells us something about a specific individual. By describing collections rather than individuals, quantifiers extend the expressive power of natural languages far beyond that of propositional logic and make them a universal communication medium. Hence language heavily depends on quantifying constructions. These often involve fuzzy concepts like "tall", and they frequently refer to fuzzy quantities, as in "about ten", "almost all", "many" etc. In order to exploit this expressive power and make fuzzy quantification available to technical applications, a number of proposals have been made on how to model fuzzy quantifiers in the framework of fuzzy set theory. These approaches usually reduce fuzzy quantification to a comparison of scalar or fuzzy cardinalities [197, 132]. Thus for understanding a quantifying proposition like "About ten persons are tall", we must know the degree to which we have j tall people in the given situation, and the degree to which the cardinal j qualifies as "about ten". However, the results of these methods can be implausible in certain cases [3, 35, 54], i.e. they can violate some basic linguistic expectations. These problems certainly hindered the spread of the 'traditional' approaches into commercial applications.
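To make this cardinality-comparison strategy concrete, the following Python sketch (my own illustration, not taken from the book) evaluates "About ten persons are tall" in the style of a sigma-count approach: the scalar cardinality of the fuzzy set of tall people is obtained by summing membership grades, and the result is the degree to which this count matches an assumed fuzzy number "about ten". Both membership assignments are invented for the example.

```python
# Hedged sketch of a sigma-count style interpretation (illustrative data only).

def about_ten(count):
    # Triangular fuzzy number centred at 10 with assumed width 5.
    return max(0.0, 1.0 - abs(count - 10) / 5.0)

# Fuzzy set of tall persons: person -> membership grade in [0, 1].
tall = {"p1": 1.0, "p2": 0.8, "p3": 0.8, "p4": 0.7, "p5": 0.6, "p6": 0.6,
        "p7": 0.5, "p8": 0.5, "p9": 0.4, "p10": 0.3, "p11": 0.2, "p12": 0.1}

sigma_count = sum(tall.values())   # scalar (sigma-count) cardinality: 6.5
result = about_ten(sigma_count)    # degree to which 6.5 qualifies as "about ten": ~0.3
print(sigma_count, result)
```

It is exactly this kind of scheme whose interpretations can clash with linguistic expectations in the cases cited above, which motivates the alternative developed in this book.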
This monograph proposes a new solution which is immune to the deficiencies of existing approaches to fuzzy quantification. The theory covers all aspects of fuzzy quantification which are necessary for a comprehensive treatment of the subject, ranging from abstract topics like the proper semantical description of fuzzy quantifiers to practical concerns like robustness of interpretations and efficient implementation. The novel theory of fuzzy quantification is

• a compatible extension of the Theory of Generalized Quantifiers [7, 9];
• not limited to absolute and proportional quantifiers;
• a genuine theory of fuzzy multi-place quantification;
• not limited to finite universes of quantification;
• not limited to quantitative (automorphism-invariant) examples of quantifiers;
• based on a rigid axiomatic foundation which enforces meaningful and coherent interpretations;
• fully compatible with the formation of negation, antonyms, duals, and other constructions of linguistic relevance.

To accomplish this, the theory introduces

• a formal framework for analysing fuzzy quantifiers;
• a system of formal postulates for those models of fuzzy quantification that are semantically conclusive;
• a number of theorems showing that these models are plausible from a linguistic point of view and also internally coherent;
• a discussion of several constructive principles along with an analysis of the generated models;
• a collection of algorithms for implementing fuzzy quantifiers in the models and examples for their application.

Specifically, this work approaches the semantics of fuzzy NL quantification by formalizing the essential linguistic expectations on plausible interpretations, and it also presents examples of models which conform to these semantical requirements. The book further discloses a general strategy for implementing quantifiers in a model of interest, and it develops a number of supplementary methods which will optimize processing times. In order to illustrate these techniques, the basic procedure will then be detailed for three prototypical models, and the complete algorithms for implementing the relevant quantifiers in these models will be presented. Hence the proposed solution is not only theoretically appealing, but it also avails us with computational models of fuzzy quantification which permit its use in practical applications. The quantifiers covered by the new algorithms include the familiar absolute and proportional types known from existing work on fuzzy quantification, but they also comprise some other cases of linguistic relevance. In particular, the treatment of quantifiers of exception ("all except k") and cardinal comparatives ("many more than") is innovative in the fuzzy sets framework. The material presented in the book will be of interest to those working at the crossroads of natural language processing and fuzzy set theory. The range of applications comprises fuzzy querying interfaces, where fuzzy quantifiers
support more flexible queries to information systems; systems for linguistic data summarization, where fuzzy quantifiers are used to build succinct linguistic summaries of given databases; decision-support systems, where fuzzy quantifiers implement multi-criteria decision making; and many others. In all of these applications, the 'old' models of fuzzy quantification can easily be replaced with the improved models developed here. I hope that by removing the 'plausibility barrier' faced by earlier techniques and rendering possible a more reliable use of fuzzy quantifiers, the new theory of fuzzy NL quantification will contribute to a breakthrough of these techniques into commercial applications.∗
Hagen, September 2005
Ingo Glöckner
∗ The monograph is based on the author's PhD thesis titled "Fuzzy Quantifiers in Natural Language: Semantics and Computational Models" [52], which was published by Der Andere Verlag, Osnabrück (now Tönning), Germany under ISBN 3-89959-205-0. Kind permission of Der Andere Verlag to republish this improved version of the manuscript is acknowledged.
Overview of the Book
The book is organized as follows. The first chapter presents a general introduction to the history and main issues of fuzzy natural language quantification. To this end, the basic characteristics of linguistic quantifiers are first reviewed in order to distill some requirements on a principled theory of linguistic quantification. Following this, we consider the problem of linguistic vagueness and its modelling in terms of continuous membership grades, which is fundamental to fuzzy set theory. Zadeh's traditional framework for fuzzy quantification [197, 199] is then explained. It is argued that the framework is too narrow for a comprehensive description of fuzzy NL quantification, and the particular approaches that evolved from it are shown to be inconsistent with the linguistic data. The second chapter introduces a novel framework for fuzzy quantification in which the linguistic phenomenon can be studied with the desired comprehensiveness and formal rigor. The framework comprises: fuzzy quantifiers, which serve as the operational model or target representation of the quantifiers of interest; semi-fuzzy quantifiers, which avail us with a uniform and practical specification format; and finally QFMs (quantifier fuzzification mechanisms), which establish the link between specifications and operational quantifiers. The basic representations underlying the proposed framework are directly modelled after the generalized quantifiers familiar in linguistics [7, 9]. Compared to the two-valued linguistic concept, semi-fuzzy quantifiers add approximate quantifiers like "almost all", while fuzzy quantifiers further admit fuzzy arguments ("tall", "young" etc.). Thus, semi-fuzzy quantifiers and fuzzy quantifiers are the apparent extension of the original generalized quantifiers to Type III and Type IV quantifications in the sense of Liu and Kerre [108]. The organization of my approach into specification and operational layers greatly simplifies its application in practice. The commitments intrinsic to this analysis of fuzzy quantification are discussed at the end of the chapter, which also assesses the actual coverage of the proposed approach compared to the phenomenon of linguistic quantification in its full breadth.
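As a rough illustration of this two-layer organisation, the following Python sketch (a simplified rendering of mine, not the book's formal definitions) shows the signatures involved: a semi-fuzzy quantifier maps crisp argument sets to a truth degree in [0, 1], a fuzzy quantifier accepts fuzzy argument sets, and a QFM converts the former into the latter. The thresholding QFM used here is deliberately naive and only meant to convey the shape of the interfaces, not one of the plausible models developed later in the book.

```python
from typing import Callable, Dict, Set

FuzzySet = Dict[str, float]        # element -> membership grade in [0, 1]

def almost_all(restriction: Set[str], scope: Set[str]) -> float:
    # Semi-fuzzy quantifier: crisp arguments, graded result (toy reading).
    if not restriction:
        return 1.0
    return len(restriction & scope) / len(restriction)

def naive_qfm(q: Callable[..., float]) -> Callable[..., float]:
    # A deliberately naive QFM: threshold each fuzzy argument at 0.5,
    # then apply the semi-fuzzy quantifier to the resulting crisp sets.
    def fuzzy_q(*args: FuzzySet) -> float:
        crisp = [{x for x, mu in a.items() if mu > 0.5} for a in args]
        return q(*crisp)
    return fuzzy_q

fuzzy_almost_all = naive_qfm(almost_all)
print(fuzzy_almost_all({"a": 0.9, "b": 0.7, "c": 0.2},
                       {"a": 0.8, "b": 0.4, "c": 0.9}))   # -> 0.5
```

The DFS axioms of Chapter 3 can be read as constraints that single out the QFMs which, unlike this toy thresholding, behave plausibly on fuzzy arguments.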
The third chapter is concerned with the investigation of formal criteria which characterize a class of plausible models of fuzzy quantification. The basic strategy can be likened to the axiomatic description of t-norms [148], which constitute the plausible conjunctions in fuzzy set theory. Thus my approach is essentially algebraic. The basic idea is that of formalizing the intuitive expectations on plausible interpretations in order to avoid the notorious problems of existing approaches. To this end, the chapter introduces a system of six basic requirements which distill a larger catalogue of linguistic desiderata. The criteria are chosen such that they capture independent aspects of plausibility and that, taken together, they identify a class of plausible models which meets the relevant linguistic expectations. The fourth chapter gives evidence that the proposed axioms indeed capture a class of plausible models. To this end, an extensive list of criteria will be considered which are significant to the linguistic plausibility of the computed interpretations and to their expected coherence. All of these criteria are validated by the proposed models. This supports my choice of axioms even if some of these might appear rather 'abstract' at first sight. Apart from this purpose of justifying the proposed class of models, the formalization of semantical criteria is also a topic of independent interest. By investigating such criteria, we can further our knowledge about quantifier interpretations in natural languages. The space of possible models will be further explored in the fifth chapter. Specifically, we will pay attention to certain subclasses of models, i.e. classes of models with some common structure or joint properties. The relative homogeneity of the models within these classes permits the definition of important concepts, e.g. regarding the specificity of results. We shall further identify the class of standard models, which comply with the standard choice of connectives in fuzzy set theory. The role of these standard models in fuzzy quantification can be likened to that of Abelian groups vs. general groups in mathematical group theory. The sixth chapter, then, is devoted to the study of some additional properties, like continuity (smoothness), which are 'nice to have' from a practical perspective, but not always useful for theoretical investigations, or sometimes even awkward in this context. This is why these properties should not be included into the core requirements on plausible models, but they can of course be used to further restrict the considered models if so desired. The chapter further discusses some 'critical' properties which cannot be satisfied by the models because they contradict some of the core requirements (usually these properties even fail in much weaker systems). The existence of such cases is not surprising, of course, because fuzzy logic, as a rule, can never satisfy all axioms of Boolean algebra. These difficulties will usually be resolved by showing that the critical property conflicts with very basic requirements, and by pruning the original postulate to a compatible 'core' requirement. Having laid these theoretical foundations, we shall proceed to the issue of identifying prototypical models, which are potentially useful to applications.
The investigation of various constructive principles will give us a grip on such concrete examples. Existing research has typically tried to explain fuzzy quantification in terms of cardinality comparisons, based on some notion of cardinality for fuzzy sets. However, such a reduction is not possible for arbitrary quantifiers. Hence a comprehensive interpretation of linguistic quantifiers must rest on a more general conception. Chapters seven to nine will describe some suitable choices for such constructions, which result in the increasingly broader classes of MB-models (chapter seven), Fξ-models (chapter eight), and finally FΩ or Fω-models (chapter nine). All of these models rest on a generalized supervaluationist approach based on a three-valued cutting mechanism. The tenth chapter, by contrast, will present a different mechanism based on the extension principle, which results in the constructive class of Fψ-models. This class of models, which are definable in terms of the standard extension principle, is then shown to provide a different perspective on the class of FΩ-models, with which it is coextensive. Apart from introducing these classes of prototypical models, it will also be shown how important properties of the models, like continuity, can be expressed in terms of conditions imposed on the underlying constructions. This facilitates testing whether a model of interest is sufficiently robust, comparing models with respect to their specificity, etc. In particular, this analysis reveals that all practical Fψ- or FΩ-models belong to the Fξ-type. The eleventh chapter develops the algorithmic part of the theory, and is thus concerned with the issue of efficient implementation. Obviously, it only makes sense to consider practical (i.e. sufficiently robust) models. Thus, we can confine ourselves to analyzing models of the Fξ-type. The general strategies for efficient implementation described in this chapter will be instantiated for three prototypical models. The considered quantifiers include the familiar absolute and proportional types, as well as quantifiers of exception and cardinal comparatives. Some application examples are also discussed at the end of the chapter. The twelfth chapter proposes an extension of the basic framework for fuzzy quantification. The generalization will cover the most powerful notion of quantifiers developed by mathematicians, so-called Lindström quantifiers [106]. These quantifiers are also of potential linguistic relevance, and I explain how they can be used to model certain reciprocal constructions in natural language which give rise to branching quantification. The thirteenth chapter will summarize the main contributions of this work and propose some directions for future research. There are two appendices. Appendix A presents the study of existing approaches cited in the introduction. It proposes an evaluation framework for approaches to fuzzy quantification based on Zadeh's traditional framework. This analysis makes it possible to apply the semantical criteria developed in the main part of the book to the existing approaches to fuzzy quantification described in the literature.
The book presents a total of 275 theorems, and it would have been impossible to include, or even sketch, the proofs of all of them here. In order to keep the size of this work within limits and also to improve its readability, the theorems cited in this work have been proven separately in a series of technical reports [53, 55, 56, 57, 58, 59].† Appendix B provides a theorem reference chart which connects every theorem cited here with its original proof published in these reports.
† These reports are available from http://pi7.fernuni-hagen.de/gloeckner or directly from the author, email:
[email protected].
Contents
Overview of the Book . . . . . IX

1 An Introduction to Fuzzy Quantification: Origins and Basic Concepts . . . . . 1
  1.1 Motivation and Chapter Overview . . . . . 1
  1.2 Logical Quantifiers vs. Linguistic Quantifiers . . . . . 4
  1.3 The Vagueness of Language . . . . . 13
  1.4 Fuzzy Set Theory: A Model of Linguistic Vagueness . . . . . 18
  1.5 The Case for Fuzzy Quantification . . . . . 20
  1.6 The Origins of Fuzzy Quantification . . . . . 23
  1.7 Issues in Fuzzy Quantification . . . . . 25
  1.8 The Modelling Problem . . . . . 27
  1.9 The Traditional Modelling Framework . . . . . 29
  1.10 Chapter Summary: The Need for a New Framework . . . . . 50

2 A Framework for Fuzzy Quantification . . . . . 55
  2.1 Motivation and Chapter Overview . . . . . 55
  2.2 Fuzzy Quantifiers . . . . . 65
  2.3 The Dilemma of Fuzzy Quantifiers . . . . . 68
  2.4 Semi-fuzzy Quantifiers . . . . . 70
  2.5 Quantifier Fuzzification Mechanisms . . . . . 74
  2.6 The Quantification Framework Assumption . . . . . 75
  2.7 A Note on Nullary Quantifiers . . . . . 78
  2.8 A Note on Undefined Interpretation . . . . . 79
  2.9 Chapter Summary . . . . . 80

3 The Axiomatic Class of Plausible Models . . . . . 85
  3.1 Motivation and Chapter Overview . . . . . 85
  3.2 Correct Generalization . . . . . 86
  3.3 Membership Assessment . . . . . 88
  3.4 The Induced Propositional Logic . . . . . 90
  3.5 The Aristotelian Square . . . . . 92
  3.6 Unions of Argument Sets . . . . . 96
  3.7 Monotonicity in the Arguments . . . . . 97
  3.8 The Induced Extension Principle . . . . . 101
  3.9 Determiner Fuzzification Schemes: The DFS Axioms . . . . . 106
  3.10 Chapter Summary . . . . . 107

4 Semantic Properties of the Models . . . . . 111
  4.1 Motivation and Chapter Overview . . . . . 111
  4.2 Correct Generalisation . . . . . 111
  4.3 Properties of the Induced Truth Functions . . . . . 112
  4.4 A Different View of the Induced Propositional Logic . . . . . 115
  4.5 Argument Permutations . . . . . 116
  4.6 Cylindrical Extensions . . . . . 118
  4.7 Negation and Antonyms . . . . . 119
  4.8 Symmetrical Difference . . . . . 121
  4.9 Intersections of Arguments . . . . . 122
  4.10 Argument Insertion . . . . . 123
  4.11 Monotonicity Properties . . . . . 126
  4.12 Properties of the Induced Extension Principle . . . . . 129
  4.13 Quantitativity . . . . . 131
  4.14 Extensionality . . . . . 133
  4.15 Contextuality . . . . . 134
  4.16 Semantics of the Standard Quantifiers . . . . . 136
  4.17 Fuzzy Inverse Images . . . . . 140
  4.18 The Semantics of Fuzzy Multi-Place Quantification . . . . . 141
  4.19 Chapter Summary . . . . . 144
5 Special Subclasses of Models . . . . . 149
  5.1 Motivation and Chapter Overview . . . . . 149
  5.2 Models Which Induce the Standard Negation . . . . . 150
  5.3 Models Which Induce the Standard Disjunction . . . . . 154
  5.4 The Standard Models of Fuzzy Quantification . . . . . 155
  5.5 Axiomatisation of the Standard Models . . . . . 157
  5.6 Chapter Summary . . . . . 158
6 Special Semantical Properties and Theoretical Limits . . . . . 161
  6.1 Motivation and Chapter Overview . . . . . 161
  6.2 Continuity Conditions . . . . . 162
  6.3 Propagation of Fuzziness . . . . . 163
  6.4 Conjunctions and Disjunctions of Quantifiers . . . . . 165
  6.5 Multiple Occurrences of Variables . . . . . 165
  6.6 Convex Quantifiers . . . . . 166
  6.7 Conservativity . . . . . 170
  6.8 Fuzzy Argument Insertion . . . . . 172
  6.9 Chapter Summary . . . . . 173
7 Models Defined in Terms of Three-Valued Cuts and Fuzzy-Median Aggregation . . . . . 179
  7.1 Motivation and Chapter Overview . . . . . 179
  7.2 Alpha-Cuts are not Suited to Define Models . . . . . 180
  7.3 From Two-Valued Logic to Kleene's Logic, and Beyond . . . . . 183
  7.4 Adaptation of the Mechanism to Quantifiers . . . . . 184
  7.5 Three-Valued Cuts of Fuzzy Subsets . . . . . 187
  7.6 The Fuzzy-Median Based Aggregation Mechanism . . . . . 190
  7.7 The Unrestricted Class of MB-QFMs . . . . . 191
  7.8 Characterisation of MB-DFSes . . . . . 194
  7.9 A Simplified Construction Based on Mappings B : H −→ I . . . . . 197
  7.10 Characterisation of the Models in Terms of Conditions on B . . . . . 198
  7.11 Examples of MB-Models . . . . . 199
  7.12 Properties of MB-Models . . . . . 201
  7.13 A Standard Model with Unique Properties: MCX . . . . . 206
  7.14 Chapter Summary . . . . . 216
8 Models Defined in Terms of Upper and Lower Bounds on Three-Valued Cuts . . . . . 221
  8.1 Motivation and Chapter Overview . . . . . 221
  8.2 The Unrestricted Class of Fξ-QFMs . . . . . 223
  8.3 Characterisation of Fξ-Models . . . . . 224
  8.4 Examples of Fξ-Models . . . . . 226
  8.5 Properties of the Fξ-Models . . . . . 227
  8.6 Chapter Summary . . . . . 234
9 The Full Class of Models Defined in Terms of Three-Valued Cuts . . . . . 237
  9.1 Motivation and Chapter Overview . . . . . 237
  9.2 The Unrestricted Class of FΩ-QFMs . . . . . 237
  9.3 Characterisation of the FΩ-Models . . . . . 240
  9.4 The Unrestricted Class of Fω-QFMs . . . . . 243
  9.5 The Classes of Fω-Models and FΩ-Models Coincide . . . . . 243
  9.6 Characterisation of the Fω-Models . . . . . 244
  9.7 Examples of FΩ-Models . . . . . 246
  9.8 Properties of FΩ-Models . . . . . 249
  9.9 Chapter Summary . . . . . 255
10 The Class of Models Based on the Extension Principle . . . . . 259
  10.1 Motivation and Chapter Overview . . . . . 259
  10.2 A Similarity Measure on Fuzzy Arguments . . . . . 259
  10.3 The Unrestricted Class of Fψ-QFMs . . . . . 261
  10.4 Characterisation of the Fψ-Models . . . . . 264
  10.5 The Classes of Fψ-Models and Fω-Models Coincide . . . . . 266
  10.6 The Unrestricted Class of Fϕ-QFMs . . . . . 268
  10.7 The Classes of Fϕ-Models and Fψ-Models Coincide . . . . . 270
  10.8 Characterisation of the Fϕ-Models . . . . . 271
  10.9 An Alternative Measure of Argument Similarity . . . . . 272
  10.10 Chapter Summary . . . . . 274
11 Implementation of Quantifiers in the Models . . . . . 277
  11.1 Motivation and Chapter Overview . . . . . 277
  11.2 'Simple' Quantifiers . . . . . 278
  11.3 Direct Implementation of Special Quantifiers in MCX . . . . . 282
  11.4 The Core Algorithms for General Quantifiers . . . . . 287
  11.5 Refinement for Quantitative Quantifiers . . . . . 293
  11.6 Computation of Cardinality Bounds . . . . . 309
  11.7 Implementation of Unary Quantifiers . . . . . 313
  11.8 Implementation of Absolute Quantifiers and Quantifiers of Exception . . . . . 318
  11.9 Implementation of Proportional Quantifiers . . . . . 321
  11.10 Implementation of Cardinal Comparatives . . . . . 332
  11.11 Chapter Summary and Application Examples . . . . . 344

12 Multiple Variable Binding and Branching Quantification . . . . . 355
  12.1 Motivation and Chapter Overview . . . . . 355
  12.2 Lindström Quantifiers . . . . . 357
  12.3 (Semi-)fuzzy L-Quantifiers and a Suitable Notion of QFMs . . . . . 359
  12.4 Constructions on L-Quantifiers . . . . . 360
  12.5 The Class of Plausible L-Models . . . . . 365
  12.6 Nesting of Quantifiers . . . . . 368
  12.7 Application to the Modelling of Branching Quantification . . . . . 371
  12.8 Chapter Summary . . . . . 372

13 Discussion . . . . . 375
  13.1 A Novel Theory of Fuzzy Quantification . . . . . 375
  13.2 Comparison to Earlier Work on Fuzzy Quantification . . . . . 383
  13.3 Future Perspectives . . . . . 387

A An Evaluation Framework for Approaches to Fuzzy Quantification . . . . . 391
  A.1 Motivation and Chapter Overview . . . . . 391
  A.2 The Evaluation Framework . . . . . 392
  A.3 The Sigma-Count Approach . . . . . 395
  A.4 The OWA Approach . . . . . 400
  A.5 The FG-Count Approach . . . . . 406
  A.6 The FE-Count Approach . . . . . 412
  A.7 Chapter Summary . . . . . 414

B Theorem Reference Chart . . . . . 417
References . . . . . 425
Index . . . . . 437
List of Symbols . . . . . 449
List of Figures . . . . . 457
List of Tables . . . . . 459
1 An Introduction to Fuzzy Quantification: Origins and Basic Concepts
1.1 Motivation and Chapter Overview

Quantifiers are ubiquitous in everyday language, and examples of quantification pervade all areas of daily life (see Table 1.1). However, quantifiers are not only frequently used in spoken and written language: they also have a considerable share in what is being said. Whether there are "many clouds over Italy" or "very few clouds", say, can make quite a difference for prospective tourists planning their summer vacation. Due to the crucial importance of NL quantifiers to the meaning and expressiveness of language, but also to the admissible inferences (i.e., the valid logical conclusions), it is not surprising that quantifying constructions have aroused interest since the very beginning of scientific research. In fact, the analysis of quantification, which was originally considered the subject of logic and thus philosophy, can be traced back to the Ancient Greeks, notably to the 'first formal logician', Aristotle [17, p. 40]. In his Topics, Aristotle describes the subject of logic as follows. "Reasoning is an argument in which, certain things being laid down, something other than these necessarily comes about through them."
(English translation in Bocheński [17, (10.05), p. 44]) This focus on valid rules and inferential patterns explains the early interest in universal quantification, because universal propositions are necessary to describe general laws which hold for all possible instantiations. For example, the infamous "All men are mortals" makes an assertion about arbitrary men. Similar considerations apply to existential quantification. Existential propositions let us talk about an individual without explicitly naming it, and even without knowing its identity, cf. "Peter screamed" vs. "A man screamed". In addition, existential quantification results from universal quantification and negation. These dependencies become visible in the well-known 'square of opposition' or 'Aristotelian square', see Sect. 3.5 below. Hence the universal and existential modes of quantification are of special importance to reasoning. Moreover, they represent the simplest and least peculiar examples of NL quantification.
Table 1.1. Examples of NL quantification in various areas of everyday life. Source: The Economist 25-31/05/2002

Finance and Economics:
  Many firms have stopped making markets (p. 75)
Business:
  Most bosses assume they can change prices often and with little effort (p. 63)
  Few business schools teach pricing as a discipline (p. 63)
Politics:
  Many Indians admit that they have misgoverned their only Muslim-majority state . . . (p. 25)
  Several separatist leaders . . . seem even more willing to co-operate with India (p. 26)
Science and Technology:
  Some philosophers see free will as an illusion that helps people to interact with one another (p. 85)
  Most courts, for example, accept a claim of insanity as a defence in certain criminal cases (p. 85)
Literature and Arts:
  Few living novelists write better than Mr Winton about the sea (p. 89)
Studies of Feminism:
  Some radical women preached free love while most emphasised sexual purity (p. 89)
It is therefore not surprising that logicians tentatively restricted attention to these regular cases only. Still, the understanding of quantification advanced at a slow pace, and it took more than 2,000 years until the puzzles of universal and existential quantification were finally solved. In part, this slow progress exemplifies the intellectual difficulties associated with a second-order abstraction like quantification. Its chief reason, however, was the focus on syllogistic reasoning. According to Bocheński, a syllogism is "a λόγος" [form of speech] "in which if something is posited, something else necessarily follows. Moreover such λόγοι are there treated as formulas which exhibit variables in place of words with constant meaning" [17, p. 2]. In practice, the syllogisms used for reasoning are composed of three parts: a major premiss, a minor premiss, and a conclusion. An example is [144, p. 206]:

All men are mortal (Major premiss)
Socrates is a man (Minor premiss)
Therefore: Socrates is mortal (Conclusion).
The belief of Aristotle and his followers that “all deductive inference, when strictly stated, is syllogistic” [144, p. 207] obstructed progress in the development of quantificational logic. This is because Aristotelian logic is concerned with universal and existential quantification, but it lacks the notions of universal and existential quantifiers. In other words, the quantifiers “all” and “some” did not appear as logical abstractions in their own right, but only as
structural elements of the syllogism. This hindered the development of a convincing semantical analysis for universal and existential quantification. It was G. Frege who developed the modern doctrine of quantifiers which “in contrast to the Aristotelian tradition, . . . are conceived as separate from the quantified function and its copula, and are so symbolized” [17, p. 347]. Thus the existential and universal quantifiers – ∀ (“for all”) and ∃ (“exists”) in our notation – became first-class citizens of logic. The subsequent adoption and refinement of Freges doctrine marked a revolution in the historical development of logic which resulted in today’s systems of first-order predicate logic, higher-order logic, and type theory. The new analysis in terms of function and argument(s) and the flexible use of quantifiers marked the end of syllogistic reasoning. It was replaced with modern calculi which unlike syllogisms, are applicable to arbitrary formulas of predicate logic. In this way, formal logic was finally equipped with a model of quantification which suits its purposes. Apart from the apparent application to mathematics, there have also been attempts to utilize the logical quantifiers ∀ and ∃ for the semantic description of natural language. One of the pioneering works is Russell’s analysis of definite descriptions like “the author of Waverley”. As Russell argues, these descriptions are reducible to combinations of the logical quantifiers. The proposition “Scott was the author of Waverley”, for example, which involves the definite singular quantifier “the”, is rephrased thus [144, p. 785]: There is an entity c such that the statement “x wrote Waverley” is true if x is c and false otherwise; moreover c is Scott. In terms of the logical quantifiers, this analysis becomes ∃c(aw(c) ∧ ∀x(aw(x) → x = c) ∧ c = Scott) , where ‘aw’ abbreviates “author of Waverley”. The analysis of a special kind of NL quantifier in terms of the logical quantifiers may look promising. However, there is no apparent generalization of Russell’s approach to a broader class of NL quantifiers. In addition, Russell’s theory of descriptions has been faced with substantial criticism due to its poor account of the involved presuppositions.1 In this chapter, we will see that comparable difficulties must be expected if one tries to express other linguistic quantifiers in terms of ∀ and ∃. Specifically, the logical quantifiers fail to account for the wealth of NL quantifiers because they are too weak, i.e. unable to capture the meaning of important examples. Moreover, “the familiar ∀ and ∃ are extremely atypical quantifiers. They have special properties which are entirely misleading when one is concerned with quantifiers in general” [7, p. 160]. An example of these structural differences 1
1 For example, "the author of Waverley" presupposes that such an author exists. See van der Sandt [146] and Sag/Prince [145] for a survey of work on presuppositions and pointers into the literature.
is the dependency of many NL quantifiers on a pattern of several arguments. And certain assumptions legitimated by ∀ and ∃ must be given up in the general case. In order to develop a practical notion of quantifiers, we must then take a look at that property of NL known as 'vagueness' or 'fuzziness'. Basically, the vagueness of NL makes a limited repository of NL concepts applicable to a potentially unbounded variety of phenomena, by allowing a partial mismatch between the descriptive means and the observed state of affairs. Thus the vagueness of language introduces imprecision or 'imperfection', but it makes language a practical communication medium. The chapter provides some background information on linguistic vagueness and the attempts at its modelling by presenting the fundamentals of vagueness theory. Only one of these methods has achieved practical significance, viz. Zadeh's fuzzy set theory, which interprets vague (or 'fuzzy') predicates in terms of mathematical models known as 'fuzzy sets'. The method is not only applicable to NL predicates, but also potentially useful for modelling NL quantifiers. Following a discussion of the origins of fuzzy quantification theory and its main directions of research, the central problem of this monograph will be introduced, i.e. the modelling problem of establishing a system of meaningful interpretations for linguistic quantifiers in a suitable framework for fuzzy quantification. We will review Zadeh's traditional solution to the modelling problem and the specific interpretation methods that evolved from it. The subsequent evaluation of these approaches, based on examples reported in the literature and some new observations, will have negative results concerning the coverage and plausibility of these methods. The conclusion of the chapter relates these difficulties to Zadeh's superordinate framework and outlines the necessary changes to overcome these problems in an improved framework.
1.2 Logical Quantifiers vs. Linguistic Quantifiers

In this comparison of logical quantifiers and NL quantifiers, we will roughly follow the classic arguments of Barwise and Cooper [7] that the logical quantifiers are insufficient for modelling NL semantics, thus substantiating the need for a generalization. Before we can discuss this matter, however, some terminological subtleties must first be clarified. In the linguistic theory of quantification, quantifying elements like "all", "most" etc. are usually referred to as 'determiners'. The term 'quantifier', then, is reserved for the (interpretation of) nominal phrases only, i.e. for the class 'NP' of expressions such as proper names, descriptions, and quantified terms [50, p. 226].2 From a linguist's point of view, making a sharp distinction between determiners and quantifiers can be necessary, especially at the crossroads of quantifiers and syntactic analysis. In other cases, however, the distinction only adds complexity. For the purposes of this monograph, it is favourable to adopt a 'flat' model of quantifiers,
2 Hence "all men" is a quantifier, but "all" is a determiner in the linguistic terminology.
which offers a uniform representation for both determiners and quantifiers corresponding to NPs (see Chap. 2 for details). Such a flat model is often used in the literature when emphasis is placed on the quantifying elements 'per se' and their semantic description [10, p. 444]. The model makes it possible to drop the awkward distinction and treat determiners as a special case of quantifiers in this monograph. The chosen terminology is also in better conformance with existing research on fuzzy quantifiers (to be discussed below), where expressions like "all" or "most" are generally called 'quantifiers' rather than 'determiners'. The fuzzy approaches deviate from the linguistic notion of quantifiers in yet another point, which is concerned with the possible ranges of quantification. When interpreting a quantifying proposition, one always needs a base set associated with the quantifier, i.e. a non-empty set E (for 'entities') which supplies the individuals over which the quantification ranges. For the semantic interpretation of a formula like ∀x expensive(x), say, one uses a 'model' or 'structure' which, among other things, specifies the collection of 'all things' against which expensive(x) will be tested. Similar considerations apply to the interpretation of quantifying propositions in natural language, like the corresponding "Everything is expensive". In the usual case of quantification based on determiners and NPs, the base set of the quantifier coincides with the given universe of discourse, i.e. the collection of individuals we can talk about. In the above example of the logical quantifiers in predicate logic, the base set used for interpreting ∀ and ∃ is always the set of entities provided by the model. In fact, this is the only type of quantification considered by most linguists, who usually focus on determiners and NPs. This narrow notion of quantifiers rests on the hypothesis that NPs are the only sources of NL quantifiers over the domain of discourse, see Barwise and Cooper [7, p. 177]. However, there are other constructions in NL which arguably involve quantification, although they do not fit into the NP picture. As opposed to the 'explicit' quantifications considered so far, which are always connected to determiners like "most", these cases make 'implicit' use of quantification, which is needed for their semantical interpretation. Typical examples are temporal adverbs like "always", "never", "often", "a few times"; and spatial adverbs like "everywhere", "somewhere" etc. In this case, the base set of the corresponding quantifiers differs from the universe of discourse, and quantification now ranges over such things as points in time or space (the precise nature of the base sets obviously depends on the chosen temporal and spatial modelling). Another source of examples are dispositions, i.e. propositions like "Slimness is attractive" which are preponderantly, but not necessarily always, true [200, p. 713]. In this case no quantifier is visible at the NL surface at all. However, the above disposition can be given the pragmatic interpretation "Usually slimness is attractive", which depends on some kind of quantification over situations or circumstances. This approach was pioneered in Zadeh's work on dispositional reasoning based on fuzzy set theory [199, 200]. Similar ideas emerged in non-monotonic logic
Table 1.2. Quantifying propositions and their translations into predicate logic: Some elementary examples

NL proposition              Logical paraphrase
Everything is expensive     ∀x expensive(x)
All bachelors are men       ∀x(bachelor(x) → man(x))
Some men are married        ∃x(man(x) ∧ married(x))
No man is his own father    ¬∃x(man(x) ∧ father(x, x))
where the modelling of default reasoning in terms of generalized quantifiers was proposed [147]. In fuzzy set theory, it is customary to adopt a broad notion of quantifiers which also covers such 'implicit' cases. We will follow this practice here and admit both the explicit and implicit types of quantification. Now that the terminological commitments have been explained, we are ready to contrast the logical account of quantifiers with some facts about 'natural' quantifiers actually found in human languages. Let us begin with the issue of expressive power, i.e. is it possible to define the meaning of NL quantifiers in terms of the logical quantifiers ∀ and ∃ of first-order predicate logic? It should not come as a surprise that this cannot be done in general; consequently, we are interested in identifying the precise class of those NL quantifiers which are first-order definable. The linguistic equivalents of the logical quantifiers, to begin with – i.e. "all", "some" as well as the derivations "no" and "not all" – have obvious renderings in first-order logic, see Table 1.2 for some examples. Noticing that "some" means "at least one", we can try and extend the analysis to quantifying propositions of the general form "There are at least k X's", for some cardinal k ∈ N. The following definitions of corresponding quantifiers [≥k] should be straightforward:

[≥1]xϕ(x) ↔ ∃xϕ(x)
[≥2]xϕ(x) ↔ ∃x1∃x2(ϕ(x1) ∧ ϕ(x2) ∧ x1 ≠ x2)
[≥3]xϕ(x) ↔ ∃x1∃x2∃x3(ϕ(x1) ∧ ϕ(x2) ∧ ϕ(x3) ∧ x1 ≠ x2 ∧ x1 ≠ x3 ∧ x2 ≠ x3)
...
[≥k]xϕ(x) ↔ ∃x1 · · · ∃xk (⋀_{i=1}^{k} ϕ(xi) ∧ ⋀_{i≠j} xi ≠ xj).
Of course, it is also possible to express these quantifiers in terms of cardinalities. Hence "There are at least k X's" asserts that the cardinality of the collection of X's is larger than or equal to k. The example is representative of the general class of quantifiers defined in terms of absolute counts. These will be called absolute quantifiers in this work, cf. [53, p. 82], [108, p. 2] and [98, p. 208]; however, the terms 'cardinal determiners' [91, p. 98] and 'quantifiers of the first kind' [197, p. 149] are also common. Further examples of this class comprise apparent derivations like "more than k" and "at most k", as well
as the 'bounding' type of quantifiers like "exactly ten" or "between five and ten", see Keenan and Moss [91, p. 123]. The corresponding quantifiers, denoted [>k], [≤k], [=k] and [≥k; ≤u], respectively, are all first-order definable:

[>k]xϕ(x) ↔ [≥k+1]xϕ(x)
[≤k]xϕ(x) ↔ ¬[>k]xϕ(x)
[=k]xϕ(x) ↔ [≥k]xϕ(x) ∧ [≤k]xϕ(x)
[≥k; ≤u]xϕ(x) ↔ [≥k]xϕ(x) ∧ [≤u]xϕ(x).

Thus typical absolute quantifiers appear to be first-order definable. The issue was clarified by van Benthem [10, p. 462, Th-6.2], who has a general result on logical definability: "All first-order definable quantifiers are logically equivalent to Boolean compounds of the types at most k (non-) A are (not) B and there are at most k (non-) A." Apart from its apparent assertion about absolute quantifiers, what this actually means is that, as a rule, all other classes of NL quantifiers are not first-order definable. Consider the quantifier "most", for example. The truth of a proposition like "Most men are married" obviously depends on the relative share of those men who are married. The example is representative of the general class of quantifiers which depend on a ratio of cardinalities, so-called proportional quantifiers. These quantifiers are also known as 'proportional determiners' [91, p. 123], 'relative quantifiers' [108, p. 2], [98, p. 209] or 'quantifiers of the second kind' [197, p. 149]. Further examples of the proportional type comprise "half of the", "every third", "five percent", "more than ten percent", "less than twenty percent", "between ten and twenty percent" etc. Proportional quantifiers are frequently used in natural language (in particular the vague examples "many", "few", "almost all" etc. which will be discussed later) and are also of obvious relevance to applications. However, these quantifiers are not first-order definable, except for a few cases like "100 percent" and "more than 0 percent", which degenerate to the logical quantifiers "all" and "some", respectively. An exemplary proof that "more than half" is not first-order definable was presented by Barwise and Cooper [7, Th-C12, p. 213]. Hence predicate logic lacks the expressive power to model common types of quantifiers, including the proportional examples: "It is not just that we do not see how to express them in terms of ∀ and ∃; it simply cannot be done" [7, p. 160]. The significance of these findings mainly stems from the fact that first-order predicate logic is the strongest system of logic for which the usual proof theory can be developed [107]. Consequently, there are inherent limits to the development of logical calculi for NL quantifiers. This somewhat pessimistic prospect for reasoning with linguistic quantifiers should not discourage the modelling of such quantifiers, though. For example, it is perfectly possible that many NL quantifiers can be expressed in a stronger system of higher-order logic.
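The contrast between counting-based absolute quantifiers and ratio-based proportional quantifiers can be stated very compactly once the arguments are crisp finite sets. The following Python sketch is my own illustration, with "most" read simply as "more than half"; it is not a definition taken from the book.

```python
def at_least(k, xs):
    # Absolute quantifier "there are at least k X's": depends only on a count.
    return len(xs) >= k

def most(restriction, scope):
    # Proportional quantifier "most Y1's are Y2's", read here as "more than half".
    return bool(restriction) and len(restriction & scope) > len(restriction) / 2

men = {"bob", "jim", "tom", "ray"}
married = {"bob", "jim", "ann", "sue"}
print(at_least(3, men))     # True: |men| = 4 >= 3
print(most(men, married))   # False: only 2 of the 4 men are married
```

The point of the definability results above is that the second kind of condition, the cardinality ratio, is precisely what first-order logic cannot express in general.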
In any case, a successful modelling must account for the peculiarities of NL quantifiers, and we shall now consider some of those characteristics which set them apart from the logical quantifiers. The most conspicuous difference is concerned with the quantificational structure of natural language, i.e. linguistic quantifiers typically show more complex patterns of arguments compared to the logical ones. Consider the quantifying proposition "Some men are married", for example. The proposition can be viewed as an instance of the general pattern "Some Y1's are Y2's", which involves two arguments Y1, Y2. The first argument, Y1, serves to restrict the quantification to the individuals of interest (the set of men in our case). It is therefore called the restriction of the quantifier. The second argument, Y2, asserts something about the individuals specified by Y1 (in this case that one of these individuals is married). This second argument will be called the scope of the quantifier. The example thus demonstrates the restricted use of an NL quantifier, where the range of quantification is controlled by a restriction argument. Occasionally, one also finds the unrestricted use of an NL quantifier. A proposition like "Something is wrong", for example, which instantiates the general pattern "Something is Y", does not constrain the objects of interest in any way. There is only one argument, Y, which functions as the scope of the quantifier. Because no restriction is specified, the quantification ranges over the full base set, E. It is this usage of "some" after which existential quantification in predicate logic has been modelled. In formulas of the type ∃xϕ(x), there is only a scope argument ϕ(x). We thus have unrestricted quantification, because x ranges over all individuals of the given domain. As it happens, the restricted use of "some", as in "Some men are married", can be reduced to the unrestricted case, i.e. to the equivalent "Something is a man and married". It is this principle which underlies the usual translation of existential propositions into predicate logic, where "Some men are married" would typically map to the unrestricted statement ∃x(men(x) ∧ married(x)). Universally quantified propositions permit a similar analysis. Consider the assertion "All bachelors are men", for example, which is an instance of the general pattern "All Y1's are Y2's". We thus have restricted quantification, based on the restriction "bachelor" and the scope "men". Again, the restricted case can be reduced to simple unrestricted quantification. For example, "All bachelors are men" admits the unrestricted paraphrase "All things are either not bachelors or men". This analysis corresponds to the usual translation into predicate logic, ∀x(bachelor(x) → men(x)). Such reductions notwithstanding, the examples demonstrate that it is the restricted type of quantification which prevails in natural language. Hence NL quantifiers are usually two-place and controlled by a restriction argument, rather than totally unconstrained, ranging over the full domain. The principle of restricted quantification and the general preference for this type of quantification in NL accounts for the somewhat arbitrary nature of the base set E as a whole. The base set must be large enough to supply all individuals we can talk about, and consequently there will be a lot of individuals
in every given situation which are totally unrelated to the task at hand. It is therefore necessary for successful communication that one's assertions be constrained to the objects of actual interest, and restricted quantification is the linguistic device to achieve this. In fact, there are only very limited ways of expressing unrestricted quantification in NL, because the restricted form is so closely tied to NL syntax and the general pattern "Q Y1's are Y2's". Apart from a few special cases like "everything", "something" and "nothing", which can be considered genuinely unrestricted, and the construction "There are Q X's", which is possible for absolute quantifiers, it appears that we can usually only paraphrase the unrestricted form of quantification, by using formulations like "Q things are X", or "Q of all things are X". These facts about natural language are in apparent contrast with the situation in predicate logic, where quantifiers are always unrestricted. Of course restricted existential and universal quantification can be simulated in predicate logic, and the limitation of the logical language to the simple unrestricted case is certainly motivated by the availability of this reduction. Turning to linguistic quantifiers in general, we must therefore examine whether this reduction is also possible for arbitrary NL quantifiers. In fact, the issue has already been decided by linguistic research, and the answer came out negative: NL quantifiers which cannot be expressed in terms of their unrestricted counterparts are rather common. The prime cases are again proportional quantifiers like "most", "more than 60 percent" etc. An exemplary proof that "more than half" is irreducible to the unrestricted case was given by Barwise and Cooper, who have shown that there is no way to define "More than half of the V's" in terms of "More than half of all things" and the operations of first-order logic, even if one restricts attention to finite models, see [7, p. 214]. The irreducibility of many NL quantifiers to unrestricted quantification demonstrates that these quantifiers are genuine binary quantifiers. In fact, multi-place quantifiers of arity n > 2 are also quite common. A typical example is "More men than women are snorers", which is drawn from the general pattern "More Y1's than Y2's are Y3's". Hence "more than" is a three-place quantifier. It is representative of a class of NL quantifiers called cardinal comparatives [92, p. 305] or comparative determiners [91, p. 123], which effect a comparison of cardinalities. For another example, consider "Most dogs and cats are either coddled or uncared-for". It is possible to model such cases by complex, constructed quantifiers. The above proposition, for example, can be analysed in terms of the pattern "Most Y1's and Y2's are either Y3's or Y4's", which corresponds to a four-place quantifier. The analysis of Boolean constructions like "no student or teacher" in terms of complex quantifiers has also been advocated by Keenan and Moss [91, p. 78-93], who make the linguistic point that this treatment be necessary to preserve the validity of Frege's compositionality principle. To sum up, the logical quantifiers are generally unrestricted and confined to a single argument, while in natural language, we usually have restricted quantification and two or more arguments. Due to this
structural difference, a more general model is needed for linguistic quantifiers, e.g. some kind of many-place function. In the following, we shall consider some further properties of the logical quantifiers which do not generalize to linguistic quantifiers. The first is concerned with quantitativity. The logical quantifiers do not refer to specific elements of the domain, and can thus be defined in terms of the cardinality of their arguments or Boolean combinations. Hence ∃xϕ(x) asserts that the set of all things which satisfy ϕ(x) has positive cardinality, while ∀xϕ(x) asserts that the complement of this set has zero cardinality. In general, such quantifiers which depend on cardinalities will be called quantitative. Obviously, absolute quantifiers like “more than ten” also conform to this scheme. The same can be said about proportional quantifiers like “most”, which are defined in terms of a ratio of cardinalities (assuming that the base set E be finite). Indeed, many NL quantifiers depend on cardinalities, and are thus quantitative. However, the preponderance of quantitative examples does not mean that linguistic quantifiers are restricted to the quantitative type. Some counter-examples comprise “Hans” and other proper names, “all married” and other composites restricted by an adjective, and finally quantifiers on infinite base sets – as shown by van Benthem [9, p.474], most quantifiers of interest become non-quantitative in this case.3 A comprehensive model of NL quantification should cover all of these cases regardless of quantitativity. The usual treatment of universal and existential quantification in predicate logic suggests that the meaning of quantifiers be ‘cast in stone’ and totally independent of factors like the model chosen for interpretation. This conception is appropriate for quantifiers like ∀ and ∃, which can indeed be defined without reference to the model. In the following, we shall denote all quantifiers with this invariance property as logical quantifiers, i.e. quantifiers which can be treated as logical symbols. It is important to notice that many NL quantifiers do not fit into this category. Examples comprise the quantifier “every man” in the pattern “Every man is Y ” and other noun phrases; the quantifier “all married” in the pattern “All married Y1 ’s are Y2 ’s” and other quantifiers restricted by adjectives; and finally “Peter” in the pattern “Peter is Y ”, as well as other proper names. In all of these cases, it is only the chosen model (i.e. ‘structure’, ‘interpretation’) which supplies an interpretation for the non-logical symbols “men” and “married” (predicates) and “Peter” (treated as a constant), thus fixing the meaning of the above quantifiers. For example, we cannot interpret propositions of the form “Every man is X” unless we know the denotation of “man”, i.e. the collection of individuals it refers to. Consequently, “every” is a logical symbol, but the derived quantifier “every man” is a non-logical symbol.
³ We shall see below on p. 12 how the criterion for quantitativity can be formalized such that it also works for infinite base sets.
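As a rough illustration of the cardinality-based reading just described (restricted to finite base sets, and assuming the common 'more than half' interpretation of "most"), quantitative quantifiers can be written as functions that depend only on the argument set and its complement, never on particular individuals:

    def exists(E, Y):            # |Y| > 0
        return len(Y) > 0

    def forall(E, Y):            # |E \ Y| = 0
        return len(E - Y) == 0

    def more_than_ten(E, Y):     # absolute: depends only on the cardinality |Y|
        return len(Y) > 10

    def most(E, Y):              # proportional: depends only on the ratio |Y| / |E|
        return len(Y) > len(E) / 2

    E = set(range(10))
    Y = {0, 1, 2, 3, 4, 5}
    print(exists(E, Y), forall(E, Y), most(E, Y))   # True False True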
In the examples discussed so far, knowing the model was sufficient to decide on the meaning of the non-logical quantifiers. There are quantifiers, however, the meaning of which can vary even when the model is fixed, and which thus show some kind of context dependence. In fact, such quantifiers can even be found in mathematics. Consider the quantifier Qxϕ(x) of Sgro [149], for example, which asserts that the set of all things which satisfy ϕ(x) contains a non-empty open set. As pointed out by Barwise and Cooper, the meaning of Q is "determined not by logic, but by some underlying notion of distance, or, more precisely, by an underlying 'topology'." [7, p. 162]. Hence a richer notion of model is necessary to fix the meaning of the quantifier. In linguistics, similar methods must be employed for assigning an interpretation to proportional quantifiers like "more than 50 percent" when the base sets are infinite, see e.g. Barwise/Cooper [7, p. 163] and van Benthem [9, p. 474-477]. Furthermore, the meaning of such quantifiers as "many" or "few" is context-dependent even when the base set is finite. To be specific, these quantifiers involve some kind of comparison. It is this 'implied' comparison which brings about their context-dependence because "in simple uses at least, the standard of comparison is usually not given" [92, p. 258]. Keenan and Stavi therefore conclude that quantifying propositions involving "many" or "most" cannot be interpreted at all and thus assume no determinate truth value [92, p. 258]. However, these quantifiers are very common in ordinary language and people normally understand them without difficulty. This indicates that we need not join the pessimism of Keenan and Stavi. By contrast, we will postulate a richer source of information which fixes the meaning of these quantifiers as well. In order to remain agnostic about the precise nature of these 'enriched' models and the particular kinds of information needed for disambiguation, it is useful to delegate this type of problem to some external means, thus leaving the basic notion of a logical model intact. Following Barwise and Cooper [7, p. 163], we shall therefore assume that there is a rich context which fixes the meaning of all quantifiers of interest, and leave it to later specialized research to formalize a suitable notion of contexts. The examples presented above demonstrate that a generalized notion of quantifiers is needed to capture the meaning of linguistic quantifiers, simply because the familiar predicate calculus covers only a small fragment of NL quantifications. We have already identified some informal requirements on such a generalized notion (e.g. concerning argument structure). There are also proposals for a generalization of quantifiers in mathematical logic; the pioneering work in this area was done in the 1950's by Mostowski [116]. The Mostowskian notion of a quantifier is apparent from the following consideration. Let us denote by {x : ϕ(x)} the set of all things which satisfy ϕ(x). It is then apparent that the logical quantifiers can be expressed in terms of {x : ϕ(x)}, i.e. ∀xϕ(x) asserts that {x : ϕ(x)} = E, and ∃xϕ(x) asserts that {x : ϕ(x)} ≠ ∅. This means that both ∀ and ∃ are definable in terms of a suitable mapping Q : P(E) −→ {0, 1}, where P(E) denotes the powerset (set of subsets) of E, i.e. Qxϕ(x) is satisfied in the given model if and only
if Q(Y) = 1, where Y = {x : ϕ(x)}. This suggests an apparent abstraction to the class of generalized quantifiers defined in terms of all such mappings Q : P(E) −→ {0, 1}. Essentially, we have now arrived at Mostowski's notion of a quantifier restricted to E, where E is the given base set (to be precise, Mostowski uses a somewhat different notation). However, Mostowski further constrains the admissible choices of Q by admitting quantitative examples only. Mostowski was the first to formalize the intuitive criterion of quantitativity in terms of automorphism invariance. This means that every quantifier restricted to E in Mostowski's sense has Q(β(Y)) = Q(Y), for all bijections β : E −→ E and all Y ∈ P(E), i.e. Q cannot refer to any specific individuals in E. Mostowski's proposal has become the standard formalization of quantitativity, because it offers a straightforward definition for both finite and infinite base sets. The 'quantifiers restricted to E' so defined, i.e. automorphism-invariant mappings Q : P(E) −→ {0, 1}, still depend on the base set E ≠ ∅. A (full) quantifier Q in the sense of Mostowski, then, assigns to each base set E ≠ ∅ a corresponding quantifier Q_E restricted to E. This definition eliminates the dependency of Mostowskian quantifiers on any specific base set, a feature which is probably more important for mathematical applications than for linguistic description. Summing up, Mostowski has proposed a straightforward generalization of the logical quantifiers, which increases expressiveness and flexibility. His notion of generalized quantifiers is tailored to mathematics, though. Mostowski's own examples are limited to 'numerical' or 'cardinality' quantifiers like "at most finitely many" and "at most denumerably many" [116, p. 14/15]. Subsequent work on mathematical generalized quantifiers comprises Keisler's discussion of cardinality quantifiers [94], as well as the topological quantifiers studied by Sgro [149]. The Mostowskian picture is still too narrow for a thorough treatment of quantification in NL, however. In particular, it neither accounts for restricted or multi-place quantification, nor for the non-quantitativity of certain NL examples. Moreover, Mostowski's assumption that every quantifier be defined on every possible base set E is likely not appropriate for NL quantifiers, which might be tied to certain choices of domains. Lindström (1966) [106] introduces an even more powerful notion of quantifier, which takes the generalization begun by Mostowski to an extreme. The quantifiers of Lindström are able to express multi-place quantification, i.e. they accept several arguments. In addition, Lindström quantifiers are capable of binding more than one variable at a time (the simultaneous binding of several variables by a quantifier was first described by Rosser and Turquette [141]). A Lindström quantifier of type ⟨1, 0, 2⟩, say, lets us construct logical formulas like

Qx, ,yz(ϕ(x), ψ, χ(y, z)) ,

where the three-place quantifier Q binds one variable x in the first argument ϕ(x), no variables in ψ, and two variables y and z in the third argument χ(y, z). Thus, Lindström quantifiers are indeed very expressive. However, they
too are not suited as a general model of linguistic quantifiers because Lindström quantifiers are invariant under isomorphisms and thus quantitative. Hence both Mostowski's and Lindström's generalizations of quantifiers were not developed with linguistic applications in mind. Nevertheless, it should be obvious that similar modelling devices are necessary for the thorough description of quantification in NL. A generalization of quantifiers suited for linguistics should support all kinds of multi-place quantification, quantitative as well as non-quantitative examples, and it should not force NL quantifiers to be defined for arbitrary base sets. In any case, what we need is a practical model of quantifiers the application of which is not confined to the realm of mathematics. In order to cope with actual language use, the model must give up some idealizations which are legitimate in mathematical logic. The logical quantifiers are always supplied with precisely defined inputs and always determine a clear-cut result in response to such inputs; in this sense, they are idealized quantifiers. But natural language is different.
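As an aside, the notion of a quantifier restricted to E and the automorphism-invariance criterion can be made concrete in a few lines (finite base sets only; the brute-force check below is merely illustrative and all names are chosen here for convenience). A quantifier restricted to E is represented as a map from subsets of E to {0, 1}, and quantitativity is tested by trying all bijections of E:

    from itertools import permutations

    def is_quantitative(E, Q):
        """Check Q(beta(Y)) == Q(Y) for every bijection beta : E -> E and every Y in P(E)."""
        E = list(E)
        subsets = [set()]
        for e in E:                              # enumerate the powerset of E
            subsets += [s | {e} for s in subsets]
        for perm in permutations(E):
            beta = dict(zip(E, perm))
            if any(Q({beta[y] for y in Y}) != Q(Y) for Y in subsets):
                return False
        return True

    E = {0, 1, 2, 3}
    at_least_two = lambda Y: 1 if len(Y) >= 2 else 0    # quantitative
    contains_zero = lambda Y: 1 if 0 in Y else 0        # refers to a specific individual
    print(is_quantitative(E, at_least_two))    # True
    print(is_quantitative(E, contains_zero))   # False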
1.3 The Vagueness of Language

Classical logic rests on the assumption of sharply defined, unambiguous concepts, which are totally independent of context. This assumption is certainly justified with regard to an application in mathematics. In fact, Abelian groups, vector fields and the like are the prime cases of such artificial, idealized concepts. It is remarkable that this precision of artificial, logical languages is totally absent in natural languages. In contrast, the meanings of NL terms are typically non-idealized or 'imperfect'. There are several ways in which this imperfection shows up in NL [90, Chap. 1]:
1. vagueness or 'fuzziness'
2. underspecificity
3. ambiguity
4. context-dependence
Let us start by discussing the notion of vagueness; the remaining factors will be considered later on. Vagueness is a typical characteristic of NL concepts. It shows up in adjectives like "young", "bald", "tall", "expensive", "smart", "important", but also in nouns like "heap", "child" or "beauty". In all of these cases, we experience vagueness. For example, it is hard to tell the admissible ages of persons who qualify as "young", and it is equally difficult to relate the baldness of a person to the number of hairs on the person's head or similar factors. It is notoriously difficult, but ethically relevant, to decide on the precise age an embryo must reach to become a "child". According to Keefe and Smith [90, Chap. 1], vague predicates like the above can be recognized from the following characteristics. As opposed to so-called precise predicates, vague predicates have borderline cases, they have fuzzy boundaries, and they are susceptible to Sorites paradoxes. We will consider
these criteria in turn. Borderline cases, to begin with, are cases in which we find ourselves unable or 'unwilling' to decide whether or not the property of interest applies. Thus a borderline bald person is neither clearly bald nor clearly not bald. It is important to understand that the unclarity associated with borderline cases is not due to a lack of information, because a borderline bald person will remain a borderline case even if we gather further information, e.g. by inspecting his scalp with a magnifying-glass. Similarly, deciding if a borderline tall person is tall will not be simplified if we measure the person's height with the utmost precision. For example, 1.84 metres vs. 1.84000001 metres is not likely to make a difference. Consequently, assertions like "Bill is bald" or "Tim is tall" (assuming they are borderline cases) must be considered neither true nor false, but rather 'indeterminate' or 'in-between'. This means that vague predicates violate the law of 'tertium non datur', which suggests an extension of the two-valued system of truth values. Vague predicates further lack sharp boundaries. For example, there is no exact point on a scale of ages which separates "young" persons from those no longer young. Rather we have a fuzzy boundary, i.e. a blurred area of gradual transition. In general, then, this means that a vague predicate F not only has boundary cases; there is not even a clear separation between the clear cases and the borderline cases. It rather appears that in the boundary area, there is a gradual shift of F-ness which runs the gamut from clear positives to clear negatives. Finally, vague predicates are susceptible to Sorites paradoxes. According to Keefe and Smith [90, p. 9/10],

A paradigm Sorites set-up for the predicate F is a sequence of objects xᵢ, such that the two premises
(1) F x₁
(2) For all i, if F xᵢ, then F xᵢ₊₁
both appear true, but, for some suitable large n, the putative conclusion
(3) F xₙ
seems false.

For example, (1) a person of age ten is young; (2) a young person remains young when aging by 1 millisecond; (3) a ninety-year-old is young. By instantiating the general scheme, similar examples can be constructed for arbitrary vague predicates. In the examples given so far, we preferred 'simple' predicates which mainly depend on a single dimension – like "tall" (height), "young" (age) or "bald" (which was supposed to depend on the number of hairs for simplicity). Obviously, multidimensional predicates like "big", or predicates like "beautiful" or "important", where the relevant dimensions are not even clear, also exhibit vagueness. In addition, the vagueness of natural language is not restricted to common nouns like "heap" or "importance" and adjectives like "tall" or "young". In contrast, vagueness is ubiquitous in NL and can be found right across all syntactic categories. For example, verbs like "hurry", adverbs like
"slowly", modifiers like "very" and quantifiers like "many" or "almost all" are all vague. At this point, let us elucidate the classificatory difference between vagueness and the other sources of imperfection in NL.
• Underspecificity, to begin with, is "a matter of being less than adequately informative for the purpose in hand", examples being "Someone said something" and "an integer greater than thirty" [90, p. 5]. These examples demonstrate that underspecificity per se has nothing to do with fuzzy boundaries, borderline cases, and the Sorites paradox.
• Ambiguity refers to one-to-many relationships between words and word senses. "Liver", for example, can denote someone who lives, as in "loose liver", but it can also refer to the part of the body, as in "liver complaint". The individual word senses of an ambiguous term can either be vague or not, just like ordinary word senses. Hence, ambiguity as such has nothing to do with vagueness.
• Context dependence describes the variability in the meaning of NL terms which can be attributed to a change in external factors. (We have already met with this phenomenon when discussing quantifiers like "many".) For example, a small basketball player may well exceed the height of a tall equestrian. Obviously, many vague predicates are context-dependent. Relational adjectives like "tall", "young", "expensive" in particular depend on a standard of comparison which is usually not stated explicitly, and must therefore be resolved from the context. However, as pointed out by Keefe and Smith [90, p. 6]: "Fix on a context which can be made as definite as you like (in particular, choose a specific comparison class): "tall" will remain vague, with borderline cases and fuzzy boundaries, and the sorites paradox will retain its force. This indicates that we are unlikely to understand vagueness or solve the paradox by concentrating on context-dependence."
In communication, the imperfection of NL results in some uncertainty regarding the information conveyed. It is therefore instructive to relate the above notions of underspecificity, ambiguity, context-dependence and vagueness to the 'classical' model of uncertainty, i.e. probability theory. It appears that underspecificity, ambiguity and context-dependence all introduce a number of alternatives regarding the intended interpretation. In the case of underspecificity, these alternatives result from the use of overly general terms; in the case of ambiguity, the alternatives express different word senses; and in the case of context-dependence, we have the option of choosing different comparison classes. In order to decide between these alternatives, we cannot do better than rely on experience and thus expectations. This indicates that probability theory, which is concerned with the formalization of expectations, might provide a suitable framework for discussing these phenomena. Vagueness, however, is not primarily an epistemic question, i.e. it is not caused by a lack of information. For example, if someone is borderline tall, then "no amount of further information about his exact height (and the heights of others) could help us decide whether
he is tall" [90, p. 2]. Consequently, a probabilistic model which fills in expectations of the missing pieces of information will not help here. Moreover, the probabilistic model assumes a clear distinction between positive and negative outcomes of an event. Vagueness, however, means a lack of such a definite or sharp distinction, thus undermining the very preconditions of probabilistic modelling. In addition, vagueness does not seem to involve any kind of "events". Compared to the precision achieved in logic, the phenomenon of vagueness appears as a deficiency or 'imperfection' at first sight. Nevertheless, vagueness appears to be a universal characteristic of all natural languages. As opposed to artificial languages designed on the logician's desk, natural languages had to stand the test of actual language use; these languages were never meant to be 'ideal', and they had to be practical, robust and flexible in the first place. While artificial systems of logic essentially depend on everything being made 100% precise, this requirement is unknown to NL, which is tolerant of vague and imprecise information. The demand for accuracy intrinsic to classical logic might create a burden of 'precisification' and formalization which makes its application impractical or even infeasible in some cases. Moreover, the vagueness of NL accounts for the imprecision of our senses. Not surprisingly, then, perceptual predicates like "red" which directly refer to perceptual categories are typically vague. Wright [169] considers such predicates 'tolerant', because there is "a notion of degree of change too small to make any difference" to their applicability. Natural languages profit from this tolerance, which lets them absorb a certain degree of the imprecision of our senses. It should be of high interest to add such features to logical languages as well in order to achieve similar flexibility and robustness. The models of vagueness described in the literature can be classified into epistemic approaches, supervaluationist approaches, and finally models based on many-valued logics, usually either three-valued or continuous-valued. The latter proposals are also known as 'degree theories of vagueness'. From the epistemic point of view [24, 25, 151, 168], borderline propositions are either true or false, but they are unknowingly so. Thus, vague predicates are not inherently different from exact predicates; we are only ignorant of their precise boundaries. Obviously, this epistemic view, which denies the very phenomenon of vagueness, does not contribute much to the goal of utilizing vagueness for robust computer programs. When trying to model vague predicates in a two-valued logic, we will be forced to introduce an artificial precise boundary, which gives a wrong impression of total accuracy, and also results in a substantial loss of information. The supervaluationist approach attempts to avoid the bias made by committing to such an artificial boundary: "Replacing vagueness by precision would involve fixing a sharp boundary between the positive and negative extensions and thereby deciding which way to classify each of the borderline cases. However, adopting any one of these ways of making a vague predicate precise – any one of "precisification" or
“sharpening” – would be arbitrary. For there are many, equally good, sharpenings. According to supervaluationism, our treatment of vague predicates should take account of all of them” Keefe and Smith [90, p. 24].
Based on the resulting alternatives, the truth of a proposition is then determined by the following principle of supervaluation: “A sentence is true iff it is true on all precisifications, false iff false on all precisifications, and neither true nor false otherwise” Keefe and Smith
[90, p. 24]. This 'third case' admitted in supervaluationism is often not considered a genuine truth value on a par with "true" and "false", but rather a truth value gap [47], i.e. no truth value at all, or a truth value 'glut' [14], i.e. both true and false. However, if the third case is declared a genuine third truth value, say ½, this naturally takes us to the modelling of vagueness in terms of a three-valued logic, as proposed by Tye [164]. In a theoretical set-up which only supports three cases ("true"; "false"; and "gap", "glut", or third truth value), we are forced to introduce a sharp boundary between the clear positives and the borderline cases (similarly between borderline cases and clear negatives). This appears unnatural and contradicts the principle of 'tolerance' mentioned above. Considering the vague predicate "tall", for example, there is a strong intuition that changing the height of a person by 0.1 mm cannot decide if someone is clearly tall or not clearly tall; 1 millisecond of elapsed time will not decide if someone is clearly young or not clearly young, etc. This observation suggests the use of a continuous-valued model, in which the various shades of F-ness of a vague predicate F can be represented by real numbers ranging from 0 (completely false) to 1 (fully true). The resulting degree theories of vagueness obviously account for borderline cases (which result in intermediate truth values) and fuzzy boundaries (which can be modelled as a smooth transition between the extreme cases of 0 and 1). The Sorites paradox can now be resolved as follows: The inductive premise (2), "If F xᵢ then F xᵢ₊₁", is not completely true, but only very nearly true. Although there is never a substantial drop in degrees of truth between consecutive xᵢ's, repeated application of (2) will accumulate these effects. Consequently, the truth of F xᵢ will decrease as i becomes larger, until it approaches complete falsity, as witnessed by the falsity of the putative conclusion F xₙ. Thus, the Sorites paradox resolves nicely, and the continuous-valued model gives a satisfactory account of all three characteristics of vagueness. In the literature, there are different proposals for degree theories, and defences of using a continuous-valued model [14, 46, 68, 105, 111, 188]. However, only one of these approaches has acquired practical significance outside philosophical debates, and is highly visible both in scientific research and commercial applications.
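The degree-theoretic resolution of the Sorites paradox just described can also be illustrated numerically. In the following sketch, the piecewise-linear membership function for "young" and its breakpoints are invented purely for illustration:

    def young(age_years):
        """Toy membership function: fully young up to 25, clearly not young from 45 on."""
        if age_years <= 25:
            return 1.0
        if age_years >= 45:
            return 0.0
        return (45 - age_years) / 20.0           # linear transition in between

    # A ten-year-old aging in one-day steps: no single step changes the degree
    # noticeably, yet the accumulated drift runs from complete truth to complete falsity.
    ages = [10 + i / 365 for i in range(365 * 80)]
    degrees = [young(a) for a in ages]
    max_step = max(abs(a - b) for a, b in zip(degrees, degrees[1:]))
    print(round(max_step, 6))              # ~0.000137: each premise (2) is very nearly true
    print(degrees[0], degrees[-1])         # 1.0 0.0: the conclusion (3) is false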
1.4 Fuzzy Set Theory: A Model of Linguistic Vagueness

In the 1930's, the philosopher M. Black made a first suggestion that vague predicates be modelled by continuous degrees of F-ness [14]. However, Black's speculations went almost unnoticed for some decades and he did not found a degree-based school of vagueness modelling. Thus the origins of the degree-based model of vagueness are commonly seen in L.A. Zadeh's independent work on the subject. In 1965, Zadeh published a seminal paper [188] in which he introduces the basic notions of a fuzzy set (i.e., mathematical representation of a vague predicate). He further proposes generalizations of the classical set operations to such fuzzy sets, thus unfolding the rudiments of a set theory which incorporates 'fuzziness'. The fundamental concept of fuzzy set theory is easily explained: A fuzzy subset X of some base set E assigns to each individual e ∈ E a membership degree µX(e) in the continuous range I = [0, 1]. The mapping µX : E −→ [0, 1] so defined is called the membership function of X. For simplicity, we shall call X a fuzzy set (rather than fuzzy subset of E) when the choice of E is clear from the context or inessential. Two-valued sets can be viewed as a special case of fuzzy sets which only assume membership degrees in the set 2 = {0, 1}. In this case, µX coincides with the characteristic function χX, or indicator function, of X. Two-valued sets are often called crisp in fuzzy set theory. The term 'fuzzy' is usually taken to include both the crisp and the genuinely fuzzy cases; and only when directly contrasting crisp and fuzzy sets, we shall assume a dichotomy of (genuinely) fuzzy vs. crisp/precise. Obviously, a fuzzy set X is uniquely determined by the function µX; and membership functions, which are nothing but ordinary mappings f : E −→ [0, 1], are a possible representation of fuzzy sets. Many authors therefore identify fuzzy sets and their membership functions. For reasons to be explained in Chap. 2, p. 66, this identification will not be enforced here, i.e. we will generally use the µX-based notation. The collection of all fuzzy subsets of a given set E, called the fuzzy powerset of E, will be symbolized by P̃(E). The fuzzy counterparts of the classical set-theoretical operations intersection, union and complementation are mappings ∩, ∪ : P̃(E)² −→ P̃(E) and ¬ : P̃(E) −→ P̃(E). They are defined element-wise in terms of their membership degrees as follows:

µX1∩X2(e) = min(µX1(e), µX2(e))
µX1∪X2(e) = max(µX1(e), µX2(e))
µ¬X(e) = 1 − µX(e) ,

for all X1, X2, X ∈ P̃(E). The notion of a fuzzy relation can be developed in total analogy to the definition for ordinary sets. Thus, an n-ary fuzzy relation R, for some n ∈ ℕ, is a fuzzy subset of Eⁿ. In other words, R ∈ P̃(Eⁿ) assigns a membership grade µR(e1, . . . , en) ∈ [0, 1] to each n-tuple of elements e1, . . . , en ∈ E.
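The pointwise definitions above translate directly into a few lines of code. The following sketch simply represents membership functions as dictionaries over a small finite base set (a convenience for illustration; as just noted, the book itself will not identify fuzzy sets with their membership functions, and all degrees below are invented):

    def f_intersection(mu_X1, mu_X2):
        """Pointwise minimum; both fuzzy sets are given as dicts e -> degree over the same E."""
        return {e: min(d, mu_X2[e]) for e, d in mu_X1.items()}

    def f_union(mu_X1, mu_X2):
        """Pointwise maximum."""
        return {e: max(d, mu_X2[e]) for e, d in mu_X1.items()}

    def f_complement(mu_X):
        """Pointwise 1 - x."""
        return {e: 1.0 - d for e, d in mu_X.items()}

    mu_A = {'a': 1.0, 'b': 0.6, 'c': 0.0}
    mu_B = {'a': 0.3, 'b': 0.8, 'c': 0.5}
    print(f_intersection(mu_A, mu_B))   # {'a': 0.3, 'b': 0.6, 'c': 0.0}
    print(f_union(mu_A, mu_B))          # {'a': 1.0, 'b': 0.8, 'c': 0.5}
    print(f_complement(mu_A))           # {'a': 0.0, 'b': 0.4, 'c': 1.0}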
In two-valued logic, it is customary to discern first-order and second-order predicates. First-order predicates apply to n-tuples of individuals; thus, the semantical value of an n-ary predicate symbol is an n-ary relation. Second-order predicates, on the other hand, apply to properties of individuals, which, in an extensional setting, correspond to subsets of the domain. Consequently, the denotation of a second-order predicate is an n-ary second-order relation, i.e. a subset R ⊆ P(E)ⁿ. The notion of a second-order relation is easily generalized to the situation in fuzzy set theory. In this case, a fuzzy second-order predicate applies to fuzzy properties, which signify fuzzy subsets of the base set E, and every such choice of arguments must map to a membership grade. Consequently, a fuzzy second-order relation of arity n, which is suited to model this case, will be defined as a fuzzy subset R ∈ P̃(P̃(E)ⁿ). In other words, R assigns a membership grade µR(X1, . . . , Xn) to each n-tuple of fuzzy sets X1, . . . , Xn ∈ P̃(E), which signifies the extent to which the relation applies to (X1, . . . , Xn). We shall return to these second-order concepts below when introducing fuzzy quantifiers. Before doing that, however, let us briefly relate the fuzzy sets model to the degree-based account of vagueness and sketch its major branches of research. From the perspective of vagueness theory, Zadeh's proposal of fuzzy sets constitutes a mathematical model of vague predicates. The membership grades express the degree of compatibility of a vague predicate F to the individual e. As in every degree theory, the clear positives can be identified with µF(e) = 1 and the clear negatives with µF(e) = 0. The intermediate cases where µF(e) is chosen from the open interval (0, 1) permit us to express all those shades of F-ness which are not adequately captured by the extreme cases µF(e) ∈ {0, 1}. For example, we can now use a fuzzy set tall ∈ P̃(E) to describe the tall people in some base set E, where µtall(e) describes the extent to which the individual e is tall. In practice, it is customary to define fuzzy predicates in terms of the relevant attributes, or dimensions, that they depend on. Hence consider a scale of heights [0, 300], measured in centimetres. We introduce a fuzzy set TALL ∈ P̃([0, 300]) such that µTALL(h) describes the extent of tallness for a person of height h ∈ [0, 300]. Based on the auxiliary fuzzy set, we then define µtall(e) = µTALL(height(e)) for all e ∈ E, where the attribute height : E −→ [0, 300] specifies the height of the individuals in centimetres. Although fuzzy set theory provides a continuous-valued model of vague predicates, it is not limited to the modelling of linguistic vagueness, and it has a very different agenda compared to that of many-valued logics, as pointed out by Zadeh [89, preface, p. xi]. The success of fuzzy sets both in research and applications is probably also due to Zadeh's clear conception of the purpose that fuzziness serves in human reasoning, and the potential of humanistic computing methods which incorporate fuzzy sets and linguistic modelling. Contrasting human intelligence and the 'machine intelligence' achieved by today's computers, Zadeh [89, preface, p. ix] remarks that
"The difference in question lies in the ability of the human brain – an ability which present-day digital computers do not possess – to think and reason in imprecise, nonquantitative, fuzzy terms. It is this ability that makes it possible for humans to decipher sloppy handwriting, understand distorted speech, and focus on that information which is relevant to a decision. And it is the lack of this ability that makes even the most sophisticated large scale computers incapable of communicating with humans in natural – rather than artificially constructed – languages."
The significance of Zadeh’s intuitions was demonstrated in 1975 when Mamdani and Assilian presented the first example of a working fuzzy controller [112]. Their ground-breaking experiment demonstrated that it is possible to control a complex system (in this case, a steam engine) by a fuzzy model which is derived from a linguistic, rather than mathematical, description of the control knowledge. The apparent benefits of such fuzzy controllers catapulted fuzzy modelling from academic debate into engineering and commercial products like anti-skid brakes, washing machines, auto-focus cameras etc. Apart from fuzzy control [39, 70, 81, 125], the application of fuzzy modelling techniques was also successful in: Approximate reasoning [12, 43, 119] and fuzzy expert systems [103, 120, 204], fuzzy decision making [26, 80, 205], fuzzy design and optimization [104,206], fuzzy pattern recognition and cluster analysis [11, 124], fuzzy data fusion [16, 44], fuzzy databases [20, 128, 203], fuzzy information retrieval [18, 19, 114] and finally linguistic data summarization [83,135,171,182,202]. Today, various textbooks on fuzzy set theory are available, e.g. [41, 99, 123]. In addition, there are two collections of the influential publications of L.A. Zadeh [100, 187].
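Before moving on, a small sketch may help to tie together two notions introduced in this section: the attribute-based definition of "tall" and the idea of a fuzzy second-order relation. The breakpoints, the attribute values and the averaging relation below are all invented for illustration only:

    def mu_TALL(h_cm):
        """Toy scale-level membership for tallness on [0, 300] cm (breakpoints 170/190 cm assumed)."""
        return max(0.0, min(1.0, (h_cm - 170) / 20.0))

    height = {'anna': 168, 'bert': 183, 'carl': 195}      # attribute height : E -> [0, 300]
    mu_tall = {e: mu_TALL(height[e]) for e in height}     # mu_tall(e) = mu_TALL(height(e))
    print(mu_tall)                                        # {'anna': 0.0, 'bert': 0.65, 'carl': 1.0}

    # A unary fuzzy second-order relation assigns a membership grade to every fuzzy
    # subset of E; "holds to a high degree on average" is one (toy) example.
    def high_on_average(mu_X):
        return sum(mu_X.values()) / len(mu_X)

    print(round(high_on_average(mu_tall), 2))             # 0.55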
1.5 The Case for Fuzzy Quantification

As mentioned in the section on vagueness theory, linguistic quantifiers like "many" can be vague (or fuzzy, as we now say). In order to clarify some points at the junction of fuzziness and NL quantification, we will make some classificatory distinctions. Consider the following examples of quantifying propositions:
I. Every student passed the exam.
II. Every sports car is expensive.
III. Many students passed the exam.
IV. Many sports cars are expensive.
In the first example, the quantifier "every" is applied to the crisp restriction "students" and the crisp scope "(person who) passed the exam". In this case, there is no imprecision whatsoever, and we can clearly decide between the two options "true" and "false". Thus, the quantification results in a two-valued truth value. A quantifier which, given crisp arguments, always results
in quantifications which are either clearly true or clearly false, will be called a precise quantifier; "every" is an obvious example. In II., the precise quantifier "every" is applied to fuzzy arguments. There is a fuzzy restriction, the fuzzy set of sports cars, and also a fuzzy scope argument, the fuzzy set of expensive things (resolved from the context as denoting the comparison class of "expensive cars"). In this kind of situation, i.e. when at least one argument of a quantifier is genuinely fuzzy, we shall say that there is fuzziness in the arguments. The example suggests that we must then expect borderline cases and a fuzzy boundary even when the quantifier itself is precise. In III., the quantifier "many" is applied to the same choice of crisp arguments already used in the first example. Here, we clearly have borderline situations, and the other criteria for vagueness also apply. In order to avoid the possible confusion with the technical term 'fuzzy quantifier' to be introduced later, quantifiers like "many", which exhibit vagueness even for crisp inputs, will be called approximate quantifiers (or 'vague' quantifiers). In order to capture the fuzziness of approximate quantifiers, we need a gradual modelling in terms of fuzzy sets. It is important to keep the cases exemplified by II. and III. clearly separated. In II., we obtain fuzzy quantification results because there is fuzziness in the arguments; the quantifier itself is precise. In III., by contrast, the arguments are crisp. This means that the quantification result is fuzzy only because of the vague quantifier. We hence say that there is fuzziness in quantifiers. The above examples II. and III. isolate these factors and thus demonstrate their independence. Of course these sources can also appear in combination. This is shown by example IV., where an approximate quantifier is applied to fuzzy arguments. Obviously this case also demands a gradual modelling in order to account for the possible borderline cases. Approximate quantifiers like "many" are very common in natural language. The following list presents some more examples of approximate quantifiers expressed by determiners. The corresponding classes of quantifiers are absolute quantifiers, proportional quantifiers, quantifiers of exception, cardinal comparatives and proportional comparatives, respectively.
a. "(absolutely) many", "(absolutely) few", "about ten", "several dozen", ... (approximate specification of the cardinality of a set)
b. "(relatively) many/few", "almost all", "about 40 percent", ... (approximate specification of a proportion of cardinalities)
c. "all except a few", "all except about ten", ... (approximate specification of the allowable number of exceptions)
d. "far more than", "some more than", ... (approximate comparison of cardinalities)
e. "a much larger proportion than", "about the same percentage", ... (approximate comparison of proportions)
When comparing logical and linguistic quantifiers we made the distinction between explicit and implicit quantifiers which are defined on base sets other
than the universe of discourse. Implicit quantifications, too, are often approximate by nature:
f. "often", "rarely", "recently", "mostly", "almost always", ... (approximate temporal specification)
g. "almost everywhere", "hardly anywhere", "partly", ... (approximate spatial specification)
h. "usually", "typically", ... (dispositional quantification)
i. "in almost every way", "in most respects", ... (quantification over properties, ways of doing things)
As mentioned above, there are examples of implicit quantification in dispositional or habitual propositions which do not involve a visible quantifier at all, like the earlier "Slimness is attractive". Zadeh suggests that these cases be treated as (covert) uses of vague quantifiers like "usually" [199, 200]. It is rather instructive, and also of great importance to the analysis of fuzzy quantifiers adopted in this book, to relate the above cases I.–IV. to the 'fields of quantification' of Liu and Kerre [108, p. 2], who classify the possible instances of fuzzy quantification into four basic categories. The classification is presented for unrestricted quantification only, i.e. for a unary quantifier Q and a single argument A, as in "There are Q A's", "Q things are A", or QxA(x) in a logical notation. Noticing that both the quantifier Q and predicate A can either be crisp or fuzzy, Liu and Kerre obtain the following table of possible combinations:

Table 1.3. The fields of quantification identified by Liu and Kerre [108, p. 2]

            Q crisp   Q fuzzy
A crisp        I        III
A fuzzy       II        IV
In this work, we are mainly concerned with multi-place quantifiers. For quantifiers which accept several arguments, the classification must be generalized as follows.
Type I: the quantifier is precise and all arguments are crisp;
Type II: the quantifier is precise, but fuzzy arguments are admitted;
Type III: the quantifier may be fuzzy, but all arguments are crisp;
Type IV: both the quantifier and its arguments are allowed to be fuzzy.
Let us now return to the examples introduced at the beginning of this section. It is clear that in the above example I., we have a Type I quantification, in case II., a Type II quantification, etc. However, the two classifications are not identical. In examples I.–IV., there is a dichotomy between precise and approximate quantifiers; in particular, approximate quantifiers cannot be precise. In
the classification of Liu and Kerre, by contrast, the term ‘fuzzy’, which refers to fuzzy sets, includes both the crisp and genuinely fuzzy cases. Consequently Type I quantifications are a special case of both Type II and Type III quantifications, which in turn are special cases of Type IV quantifications. Generally speaking, a two-valued modelling is only adequate for Type I quantifications. The most general form, i.e. Type IV quantification, not only includes the remaining cases; it is also most relevant from the perspective of applications. Consequently, achieving a proper treatment of Type IV quantifications is the main objective of every model of fuzzy quantification.
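To convey a first impression of what a Type IV quantification may amount to in practice, here is one naive evaluation scheme in the spirit of a relative sigma-count. It is a stand-in for illustration only, not the model developed in this book; the fuzzy number chosen for "many" and all membership degrees below are invented:

    def many(ratio):
        """Toy fuzzy number for "many" on the proportion scale: 0 below 20 %, 1 from 60 % on."""
        return max(0.0, min(1.0, (ratio - 0.2) / 0.4))

    def many_Y1_are_Y2(mu_restriction, mu_scope):
        """Truth degree of "Many Y1's are Y2's" from sigma-counts of the fuzzy arguments."""
        sigma_restr = sum(mu_restriction.values())
        sigma_both = sum(min(d, mu_scope[e]) for e, d in mu_restriction.items())
        return many(sigma_both / sigma_restr) if sigma_restr else 0.0

    mu_sports_car = {'c1': 1.0, 'c2': 0.8, 'c3': 0.3, 'c4': 0.0}    # fuzzy restriction
    mu_expensive  = {'c1': 0.9, 'c2': 0.7, 'c3': 0.9, 'c4': 0.2}    # fuzzy scope
    print(many_Y1_are_Y2(mu_sports_car, mu_expensive))               # 1.0
    # Crisp arguments (Types I and III) are simply the special case of 0/1 degrees.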
1.6 The Origins of Fuzzy Quantification

It was L.A. Zadeh who brought fuzzy NL quantifiers to scientific attention by writing the pioneering papers on the modelling of these quantifiers with methods from fuzzy set theory. In a series of papers starting in the mid-1970's [189–197], Zadeh developed the fundamentals of possibility theory⁴ [189, 191] which served as the basis for his proposal on natural language semantics, the meaning representation language PRUF [192]. These research efforts were aimed at the development of a theory of approximate reasoning [189, 190, 192], in which the knowledge about the variables is represented in terms of possibility distributions, from which the reasoning process constructs further distributions or linguistic truth values. In these publications Zadeh presents his first ideas about fuzzy quantifiers, however only in short passages, e.g. [193, p. 166-168], and usually in an exemplary way. In fact, there were no systematic inquiries into the subject before 1983, when Zadeh published the first treatise solely devoted to fuzzy quantifiers. In [197], he introduces the basic distinction of quantifiers of the first and the second kinds (i.e. absolute and proportional), and he develops a framework for modelling such quantifiers in terms of fuzzy numbers. In addition, several generalizations of the familiar notion of cardinality to fuzzy sets are considered, and it is shown how these cardinality measures can be utilized for implementing fuzzy quantification. Finally, Zadeh also makes a proposal for syllogistic reasoning with fuzzy quantifiers. It is this seminal publication which established fuzzy quantification as a special branch of fuzzy set theory. Some other publications of Zadeh on the subject, which are also influential, comprise: a theory of commonsense knowledge [198], which emphasizes the role of fuzzy NL quantifiers; a further refinement of fuzzy reasoning with fuzzy quantifiers [199]; Zadeh's quantificational model of dispositions [200]; and the discussion of fuzzy quantifiers in connection with uncertainty management in expert systems [201].
⁴ Roughly speaking, possibility theory is concerned with the representation and processing of imprecise information expressed by fuzzy propositions. For example, "Marcel is young" conveys some information about Marcel's age, which can be described by a possibility distribution.
Again, there are some precursors to Zadeh's work on fuzzy quantification. In many-valued logic, the research into quantifiers which accept many-valued arguments was launched in 1939 when J.B. Rosser [138] presented the first treatise on the subject. Other important contributions to quantification in many-valued logics, which antedate Zadeh's work, were made by Rosser/Turquette [139–141] and Rescher [136, 137]. The treatment of quantifiers in many-valued logic, however, is usually restricted to Type II quantifications only, i.e. generalizations of precise quantifiers applied to many-valued arguments. In principle, some of these methods, notably the proposal of Rescher [136], are also capable of expressing many-valued Type IV quantifications. In fact, Rescher [137, p. 201] remarks that he considers the generalization of orthodox ∀ and ∃ a 'moot question', giving us to understand that he has more interesting cases in mind. He envisions quantifications referring to the semantical status of many-valued propositions, which is made accessible from the object language. Rescher's examples (for a three-valued logic) include the following quantifiers:

Table 1.4. Rescher's quantifiers

Quantifier     Meaning
(∃ᴵx)Px        "P has borderline cases"
(∀ᵀx)Px        "Everything is a clear positive of P"
(Mᵀx)Px        "Most things are clear positives of P"
(Mᴵx)Px        "Most things are borderline cases of P"
The basic idea of accessing the degree of determinacy (i.e. vagueness or fuzziness) through quantification is indeed an interesting one, and we will return to Rescher's examples later. At this point, it is sufficient to observe that the quantifiers ∃ᴵ, ∀ᵀ, Mᵀ and Mᴵ, although applied to three-valued arguments, always result in a two-valued, precise quantification. Thus, Rescher's examples, too, are not an anticipation of Type IV quantifications, which were first considered by Zadeh. Obviously, natural language quantifiers are also of intrinsic interest to linguists, and there is indeed a linguistic school of 'generalized quantifiers' which emerged at roughly the same time when Zadeh presented his early ideas on fuzzy quantifiers. The linguistic theory, whose beginnings are Barwise [5, 6] and Barwise/Cooper [7], will be discussed at length later on. At this point, it is sufficient to remark that the linguistic model always starts from crisp arguments and precise quantifiers; approximate cases like "many" are either not considered at all [92] or forced into the precise framework and 'modelled' as precise quantifiers [7, 69]. Barwise and Cooper [7] are well aware of the vagueness inherent to these quantifiers, and suggest an extension of their basic analysis to a three-valued model; however, they do not elaborate this idea further. Zadeh, it appears, is familiar with the linguistic account of NL
quantification, which he mentions in the introduction to his 1983 publication [197, p. 149]. Nevertheless, there is no visible influence of the linguistic analysis on his proposal. To sum up, Zadeh was indeed the first to recognize that quantifying constructions in NL usually express Type IV quantifications, and the basic notion of a fuzzy quantifier he introduces incorporates both fuzziness in quantifiers and in their arguments. His 1983 publication marks the beginning of specialized research into fuzzy quantification and today, there are substantial contributions covering diverse aspects of the subject. Before turning to the technical details of Zadeh’s proposal, we shall identify the main directions of research in the evolving field. We will then discuss Zadeh’s basic concept of a fuzzy quantifier and the framework for fuzzy quantification involving such quantifiers that he proposes.
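To make the entries of Table 1.4 concrete before turning to the issues of the field, the following sketch represents a three-valued predicate P as a map into {0, 1/2, 1} and implements Rescher-style quantifiers that inspect this semantical status while returning a crisp answer; the 'more than half' reading of "most" and the sample extension are assumptions made here for illustration:

    def exists_I(P):    # "P has borderline cases"
        return any(v == 0.5 for v in P.values())

    def forall_T(P):    # "Everything is a clear positive of P"
        return all(v == 1.0 for v in P.values())

    def most_T(P):      # "Most things are clear positives of P"
        return sum(v == 1.0 for v in P.values()) > len(P) / 2

    def most_I(P):      # "Most things are borderline cases of P"
        return sum(v == 0.5 for v in P.values()) > len(P) / 2

    P = {'a': 1.0, 'b': 1.0, 'c': 0.5, 'd': 0.0, 'e': 1.0}
    print(exists_I(P), forall_T(P), most_T(P), most_I(P))   # True False True False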
1.7 Issues in Fuzzy Quantification

From a methodological perspective, we can discern three main issues in fuzzy quantification: interpretation, reasoning and summarization. These three areas differ both in their goals, and in the aspect of fuzzy quantification being investigated.
Interpretation. The most basic problem to be solved is that of precisely describing the meaning of fuzzy quantifiers, i.e. the modelling problem. Making available such descriptions is a prerequisite of implementing fuzzy quantifiers on computers. Obviously the chosen interpretations must match the expected meaning of the NL quantifiers of interest in order to avoid misunderstanding. The computer models should therefore mimic the use of these quantifiers in ordinary language. In order to develop such interpretations, we need a formalism which lets us express the semantics of fuzzy quantifiers with sufficient detail. In this framework, the relationship between linguistic target quantifiers and their mathematical interpretations must then be analysed. In other words, a methodology must be developed which lets us identify the mathematical model of an NL quantifier of interest by describing its expected behaviour. Given a fuzzy quantifying proposition like "Many spacious cars are expensive", we can then use the models of "many", "spacious", "cars" and "expensive" to construct the interpretation of the expression as a whole. As witnessed by the example, which is based on an approximate quantifier "many" applied to the fuzzy argument of "spacious cars", this methodology must be developed for general Type IV quantifications. Only then will it become possible to model the interesting cases, like the above example involving gradual evaluations. For example, Type IV quantifications are also necessary to rank a collection of cars according to the quality criterion "Few important parts are made from plastic". Similar criteria are very common in everyday reasoning and typically used when we have to decide among several options. A good deal of the
literature on fuzzy quantifiers is concerned with the issue of interpretation, although most of these publications also introduce some prototypical application. Examples comprise the works of Zadeh [197], Ralescu [132, 133] and Yager [179, 185]. See Liu and Kerre [108], Barro et al. and Delgado et al. [34] for overviews of the approaches described in the literature. A more detailed discussion of interpretations for fuzzy quantifiers and the various facets of the 'modelling problem' will be given in the next section.
Reasoning. Another class of papers is concerned with the manipulation of expressions involving fuzzy quantifiers. In this case, the goal is that of deducing further knowledge from a set of facts and rules involving fuzzy quantifiers. As opposed to interpretation, this reasoning process will usually not demand complete knowledge of the given situation, i.e. there is some uncertainty concerning the exact values of some of the variables. However, even partial or imperfect knowledge of a situation is often sufficient to draw valuable and reliable conclusions; this is evident in our everyday problem solving. In computer models of this form of 'approximate reasoning' or 'reasoning under uncertainty', such partial knowledge can be expressed, for example, in terms of facts and rules involving fuzzy predicates (or possibility distributions of linguistic variables) and fuzzy quantifiers. Ideally, the axiom schemes and inference rules used by the reasoning system should permit the derivation of new truths (valid formulas) from given ones, and the assumed calculus should also cover the full space of logical consequences of the base knowledge. The classical procedure to achieve this is to define a semantical notion of entailment and then make sure that the calculus parallels semantic entailment, i.e. the calculus should be both correct and complete. However, this route was not taken in Zadeh's pioneering work [197, 199, 200]. Important contributions of other authors comprise [40, 131, 152, 161, 162, 176, 177], to name just a few. These approaches are often more particular about the semantical justification of their reasoning schemes. Thöne [161], for example, shows that it is possible to draw precise conclusions in the presence of uncertainty. A survey of reasoning with fuzzy quantifiers is presented in [109].
Summarisation. The third main issue in fuzzy quantification is concerned with the generation of quantifying expressions which best describe a given situation. This problem is different from the generation of quantifying expressions in a fuzzy reasoning system, where the system's limited knowledge of the situation is further elaborated by applying the calculus rules. In contrast, a system for data summarisation is usually supposed to have complete knowledge of the situation of interest, typically stated in terms of elementary facts. The goal is to construct a succinct description of the situation which captures the important characteristics of the data. In order to express these descriptions, the
system is equipped with a repertoire of NL concepts (represented by fuzzy subsets) and of fuzzy quantifiers, which introduce the possible quantities in agreement like “few”, “many”, “almost all” etc. Typically, the situation to be described comprises a large number of individuals. This is why the extraction of the quantifying description can be viewed as a process of summary generation. The generated descriptions are particularly useful because they can be rephrased in ordinary language. A typical example of such a summary is “Much sales of components is with a high commission”, see [83, p. 31]. As witnessed by the example, the process results in a concise linguistic summary which is often more informative than a list of plain facts. Owing to these advantages, fuzzy quantifiers have become the preferred tool in linguistic data summarization [82, 83, 135, 186], the fundamentals of which were developed by Yager [171, 182]. In general there are many possible ways of summarizing a given situation. Research has therefore focussed on the development of heuristic criteria which will guide the summary generation to the most promising solutions. The fundamental requirement on a candidate summary is of course its truthfulness, i.e. it should properly describe the situation at hand. Truth or ‘validity’ is not sufficient to identify the optimal summary, though. Yager [171, 182] therefore adds a measure of informativeness. Subsequent research has identified further validity criteria, which decide on the quality of the generated summary. Kacprzyk and Strykowski [83, p. 30], for example, present a system which relies on the following indicators: The degree of imprecision/fuzziness of the summary, its degree of covering the data, the degree of ‘appropriateness’, and finally the length of the summary. The system has been used for the linguistic summarization of sales data at a computer retailer [82, 83].
1.8 The Modelling Problem

This book will focus exclusively on the issue of interpretation, or more precisely, the modelling problem of achieving a meaningful interpretation for NL quantifiers with methods from fuzzy set theory. There is good reason for concentrating on this problem of improving the quality of interpretations. First of all, the modelling problem is fundamental, i.e. none of the above research directions in fuzzy quantification is really independent of its solution. To demonstrate that the modelling problem is fundamental, let us first discuss its relevance to reasoning with fuzzy quantifiers; data summarization and other applications will be considered in turn. In order to carry out reasoning with fuzzy quantifiers, one needs a calculus which specifies the 'admissible moves'. This calculus should parallel the semantics of the logical language, i.e. it should be correct and complete in the ideal case. The semantics of the logical language, however, not only specifies an interpretation for predicates like "tall"; it is also concerned with the interpretation of the other non-logical symbols and fuzzy quantifiers in particular. This means that the chosen
interpretations of fuzzy quantifiers will affect the legal inference rules of the calculus. An improved understanding of fuzzy quantification will also result in improved calculi. For example, the semantical analysis of NL quantifiers might reveal further structural properties, which can then be cast into new patterns of reasoning. Now let us investigate how quantifier interpretations, and hence a solution to the modelling problem, affect the results of linguistic data summarization. This research area has focussed on the development of sophisticated search algorithms and ranking criteria which capture the most relevant quality dimensions for linguistic summaries. Furthermore, system prototypes have been implemented which demonstrate the new technology. The exact point where quantifier interpretations enter the scene is, of course, the determination of the validity scores (degree of truth) of the candidate summaries, which is one of the relevant quality dimensions. This means that a given system for linguistic data summarization can directly profit from improvements on the interpretation side by exchanging the model of quantification. Plugging in a superior model will improve the 'validity' or 'truth' scores and result in a better overall ranking of summary candidates. To sum up, advances in the formal analysis and interpretation of fuzzy quantifiers are crucial for improved reasoning with fuzzy quantifiers and linguistic data summarization. In addition, the support for fuzzy quantifiers is important for applications at the crossroads of natural language processing (NLP) and user interfaces.
1. Querying of databases and information retrieval (IR) systems. An improved querying of databases and IR systems can be achieved by integrating certain expressions of natural language into the formal query language. For example, imagine a powerful database interface which supports queries involving fuzzy quantifiers and other elements of ordinary language. Prototypes of such interfaces, which permit more natural and convenient ways of querying, have already been developed for SQL databases [23] and for Microsoft ACCESS [84, 202]. In perspective, this type of database interface might prove useful for interactive data discovery [135]. Similar techniques can also be developed for the querying of unstructured data in information retrieval. Experimental retrieval systems with enhanced querying facilities (including fuzzy quantifiers and other techniques of fuzzy set theory) are described in [19, 64, 78, 110].
2. Natural language interfaces. Beyond the enhancement of artificial query languages, fuzzy quantifiers might also prove useful in future natural language interfaces (NLIs), which permit users to issue commands in unrestricted natural language. Depending on the application, the implementation of such a system should also comprise a model of fuzzy quantification in order to make sure that quantifiers in the NL queries are interpreted properly. An experimental retrieval system combining NLI technology and query processing with fuzzy quantifiers is described in [62, 63].
3. Decisionmaking and data fusion. In a broader context, fuzzy quantifiers can be viewed as a class of linguistic operators for information aggregation and data fusion [16, 67]. Due to their potential for combining evaluations of individual criteria, the use of these operators has already become popular in fuzzy decision support systems, where fuzzy quantifiers serve to implement multi-criteria decisionmaking [32, 75, 85, 179]. An application of aggregation based on fuzzy quantifiers to process fuzzy temporal rules for mobile robot guidance is described in [28, 117].
These applications bear witness to the need for solving the modelling problem in order to make a coherent interpretation of fuzzy NL quantifiers possible. (For a survey of applications, see also Liu and Kerre [109].) From a more technical point of view, we can divide the overall problem of interpreting fuzzy NL quantifiers into the following stages: (a) a class of mathematical models or 'modelling devices' must be introduced which establishes a repository of candidate interpretations for NL quantifiers; (b) in a second step, the correspondence between NL quantifiers and the available modelling devices must be established. By a framework for fuzzy quantification, we mean a proposal for solving the modelling problem of NL quantification for a certain class of quantifiers of interest. Thus, in principle, a framework for fuzzy quantification must specify the range of quantificational phenomena it attempts to cover; it must introduce the repository of candidate interpretations; and finally it must explain in formal terms how the natural language expressions of interest find their matching modelling constructs. In order to precisely describe the NL quantifiers of interest, the framework must further supply specifications of the target quantifiers. An interpretation mechanism will associate these specifications (descriptions of NL quantifiers) with matching operational quantifiers (mathematical models of the NL quantifier of interest). The specification medium introduced by the framework should make it easy to describe the quantifier of interest. However, it must also be sufficiently expressive in order to catch all quantifiers of interest. Within such a framework, the interpretation mechanism, or model of fuzzy quantification, then becomes a mapping from specifications to the corresponding operational quantifiers. Assuming a plausible choice of interpretation mechanism, this general procedure solves the modelling problem because we can now compute quantification results from the descriptions of NL quantifiers combined with the information about their arguments.
1.9 The Traditional Modelling Framework

In his pioneering publication [197], Zadeh develops all the components necessary for establishing a framework for fuzzy quantification, i.e. a solution
skeleton to the modelling problem for a certain class of NL quantifiers which must then be instantiated by concrete interpretation mechanisms. The majority of later approaches to fuzzy quantification described in the literature have adopted Zadeh's ideas. Thus Zadeh's proposal can be said to constitute the traditional framework for fuzzy quantification. In presenting his framework, Zadeh [197, 199] demarcates a class of quantifiers to be treated; he proposes a mathematical model for these quantifiers as well as a system of specifications for such quantifiers; he develops a general strategy for determining the target operators starting from these descriptions, and he also makes two proposals for concrete models of fuzzy quantification (interpretation mechanisms). We shall consider these components in turn. The subject of study, to begin with, comprises absolute and proportional quantifiers, or quantifiers of the 'first kind' and 'second kind' in Zadeh's terminology [197, p. 149]: "We shall employ the class labels 'fuzzy quantifiers of the first kind' and 'fuzzy quantifiers of the second kind' to refer to absolute and relative counts, respectively, with the understanding that a particular quantifier, e.g. many, may be employed in either sense, depending on the context."
Turning to concrete instances of such quantifiers, Zadeh notes that [197, p. 149]: “Common examples of quantifiers of the first kind are: several, few, many, not very many, approximately five, close to ten, much larger than ten, a large number, etc. while those of the second kind are: most, many, a large fraction, often, once in a while, much of, etc.”
It should be apparent from these examples that Zadeh adopts a broad notion of quantifiers here, i.e. he considers both quantifiers on the universe of discourse (like “several” or “most”) as well as quantifiers defined on other base sets (like “often” and “once in a while”). However, Zadeh does not distinguish quantifiers on the universe of discourse from quantifiers on other base sets. To be sure, he uses the labels ‘explicit’ vs. ‘implicit’ quantification, which we introduced to differentiate between these cases, but they bear a different meaning in Zadeh’s work. Following Zadeh, a quantifier is only considered implicit if there is no visible quantifying element at all, as is the case for dispositional propositions. By contrast, we include all linguistic constructions which need quantification for their interpretation, but do not involve a quantifier proper on the NL surface. In any case, we can assert that Zadeh’s approach is concerned with absolute and proportional quantifiers in a wide sense which also embraces examples like “often”, “usually” etc. In a subsequent paper, however, Zadeh identifies quantifiers of the first and second kinds with one-place and two-place quantifiers in general [199, p. 757]: “It is useful to classify fuzzy quantifiers into quantifiers of the first kind, second kind, third kind, etc., depending on the arity of the second-order fuzzy predicate which the quantifier represents.”
In order to eliminate this potential source of confusion, we will prefer the class labels 'absolute' and 'proportional' here, rather than quantifiers of the 'first' and 'second' kinds. We will use cardinals n = 1 or n = 2 to denote the arity of the quantifiers and thus to discern the unrestricted and restricted uses. It is apparent from Zadeh's modelling examples in [197] that he, too, is concerned with both cases. The unrestricted use then corresponds to a quantifier restricted by a crisp predicate, as in "Few men are wise". Here, "men" can be viewed as supplying the domain in which the unrestricted statement "There are few wise" can then be interpreted. To sum up, Zadeh's framework aims at the modelling of the unrestricted and restricted use of absolute and proportional quantifiers. Zadeh also mentions quantifiers of the third kind, exemplified by "many more than" [199, p. 757] or by likelihood ratios and the certainty factors used in expert systems [197, p. 149], but the framework is only developed for the first two kinds. Now that the scope of Zadeh's approach to fuzzy quantification has been explained, we will discuss Zadeh's solution to the modelling problem for the chosen quantifiers. Let us first consider the repository of modelling devices, i.e. the target interpretations for NL quantifiers of the indicated types that Zadeh proposes. It is apparent from the above characterization of Type IV quantifications that a fuzzy quantifier must accept one or more fuzzy arguments and result in a membership degree which signifies the desired outcome of quantification. Thus, a fuzzy quantifier is essentially a fuzzy second-order relation, see [199, p. 756/757]. In his examples, Zadeh only considers unary quantifiers and quantifiers involving two arguments, although he mentions the possibility of more general cases [198, p. 149]. We will therefore focus on quantifiers of arities n = 1 or n = 2 in the following. Hence let E ≠ ∅ be some base set. A unary fuzzy quantifier Q̃ on E, then, is a mapping which associates a gradual quantification result Q̃(X) ∈ [0, 1] with each choice of the fuzzy argument X ∈ P̃(E). Some examples are:

Q̃₁(X) = µX(e) for some fixed e ∈ E
Q̃₂(X) = sup{µX(e) : e ∈ E}
Q̃₃(X) = µ[j](X), where µ[j](X) is the j-th largest membership grade of X, E finite
Q̃₄(X) = (Σ_{e∈E} µX(e)) / |E|, E finite.

The extension to fuzzy two-place quantifiers should be obvious: a binary fuzzy quantifier on E is a mapping which associates a gradual quantification result Q̃(X₁, X₂) in the unit range [0, 1] with each choice of the fuzzy arguments X₁, X₂ ∈ P̃(E). For some first examples, consider the two-place fuzzy quantifiers defined as follows:
Q̃₅(X₁, X₂) = Q̃₂(X₁ ∩ X₂)
Q̃₆(X₁, X₂) = Q̃₃(X₁ ∩ X₂), E finite
Q̃₇(X₁, X₂) = Q̃₃(X₁ \ X₂), E finite
Q̃₈(X₁, X₂) = Σ_{e∈E} min(µX₁(e), µX₂(e)) / Σ_{e∈E} µX₁(e), E finite.
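To make these definitions concrete, the following Python sketch shows how such example quantifiers could be computed when fuzzy subsets of a finite base set E are represented as dictionaries mapping each element of E to its membership grade (the representation and function names are our own illustration, not part of Zadeh's proposal):

# Minimal sketch: a fuzzy subset of a finite base set E is a dict that maps
# every element of E to a membership grade in [0, 1].

def q2(X):
    # Q2(X) = sup of the membership grades (an existential-style quantifier)
    return max(X.values(), default=0.0)

def q3(X, j):
    # Q3(X) = j-th largest membership grade of X (degree of having >= j elements)
    grades = sorted(X.values(), reverse=True)
    return grades[j - 1] if 1 <= j <= len(grades) else 0.0

def q4(X):
    # Q4(X) = sum of the membership grades divided by |E| (average membership)
    return sum(X.values()) / len(X)

def fuzzy_intersection(X1, X2):
    # standard min-intersection of two fuzzy subsets over the same base set
    return {e: min(X1[e], X2[e]) for e in X1}

def q5(X1, X2):
    # Q5(X1, X2) = Q2(X1 intersected with X2)
    return q2(fuzzy_intersection(X1, X2))

rainy = {"mon": 0.2, "tue": 0.9, "wed": 0.5}
cold  = {"mon": 0.8, "tue": 0.7, "wed": 0.1}
print(q4(rainy))        # 0.533...
print(q5(rainy, cold))  # 0.7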
The above definition of unary and binary fuzzy quantifiers obviously accounts for fuzzy quantification of the intended kinds. We have one or two arguments, which are allowed to be fuzzy, and the quantifier determines a gradual result. Thus, the proposed concept of fuzzy quantifiers is capable of expressing the desired Type IV quantifications for the unrestricted and restricted uses of absolute and proportional quantifiers. Let us now consider the relationship of these modelling devices and the natural language quantifiers to be modelled. For example, are the quantifiers Q̃₁ to Q̃₈ introduced above realized in natural language and if so, which linguistic quantifiers do they express? And conversely (this is the more important form of the modelling problem): Is there a systematic way of interpreting NL quantifiers in terms of such fuzzy quantifiers, and how can a particular choice of interpretation be justified? Thiele [158] has successfully analyzed this relationship for the 'classical' examples of universal and existential quantifiers. However, there are no attempts yet to directly describe this relationship for a broader class of quantifiers. It is more promising to devise a way for describing NL quantifiers and to introduce interpretation mechanisms (the particular 'models', or 'approaches' to fuzzy quantification) which decide on the interpretation of these descriptions. Hence let us consider the specifications of quantifiers and the mechanisms for interpretation that Zadeh envisions. Zadeh's approach rests on two fundamental ideas, one of which relates to the proposed specifications, while the other clarifies the role of the interpretation mechanisms. Zadeh's first idea is concerned with the representation of fuzzy quantifiers. As pointed out in [199, p. 756], ". . . the concept of a fuzzy quantifier is related in an essential way to the concept of cardinality – or, more generally, the concept of measure – of fuzzy sets. More specifically, a fuzzy quantifier may be viewed as a fuzzy characterization of the absolute or relative cardinality of a collection of fuzzy sets."
Hence Zadeh exploits the fact that the considered simple quantifiers can be expressed in terms of the cardinality of their argument or of the relative share of two cardinalities: absolute quantifiers like "About fifty Y₁'s are Y₂'s" depend on |Y₁ ∩ Y₂|, while proportional quantifiers like "Most Y₁'s are Y₂'s" depend on |Y₁ ∩ Y₂|/|Y₁|. In this way, it is possible to replace the second-order notion of a fuzzy quantifier with a first-order representation as a fuzzy subset of the real line (for absolute quantifiers) or of the unit interval (for proportional quantifiers):
Q̃(Y) = µQ(|Y|)   for absolute quantifiers
Q̃(Y₁, Y₂) = µQ(|Y₁ ∩ Y₂| / |Y₁|)   for proportional quantifiers
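In the crisp case this reduction is immediate; the following illustration (our own example, with a hypothetical ramp-shaped choice of µmost) evaluates "Most Y₁'s are Y₂'s" for ordinary sets:

def mu_most(p):
    # hypothetical first-order description of "most": proportions of 0.8 or more
    # fully qualify, proportions of 0.5 or less do not qualify at all
    return min(1.0, max(0.0, (p - 0.5) / 0.3))

def most(Y1, Y2):
    # "Most Y1's are Y2's" for crisp sets, following the proportional reduction above
    return mu_most(len(Y1 & Y2) / len(Y1))

students = {"a", "b", "c", "d", "e"}
passed   = {"a", "b", "c", "d"}
print(most(students, passed))   # 1.0, since 4 out of 5 is a proportion of 0.8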
Notes 1.1.
• This reduction is admissible in the crisp case, when the cardinalities |Y₁ ∩ Y₂| and |Y₁| are well-defined. We shall see later that Zadeh envisions a similar, cardinality-based evaluation in the fuzzy case as well. However, it is not necessarily so that fuzzy quantification must always reduce to a fuzzy cardinality measure.
• We have followed Zadeh in considering the unrestricted use of absolute quantifiers and the two-place use of proportional quantifiers as fundamental; it should be obvious how to adapt these equations for restricted absolute and unrestricted proportional quantification, i.e. Q̃(Y₁, Y₂) = µQ(|Y₁ ∩ Y₂|) and Q̃(Y) = µQ(|Y|/|E|), resp.
• In principle, a membership function µQ : N → I should be sufficient for specifying absolute quantifiers, which refer to cardinal numbers, after all. Some of the later models of fuzzy quantification, however, will demand a specification for arbitrary real numbers.
• In the above equation for proportional quantifiers, the case that Y₁ = ∅ is silently ignored in the literature. The use of an additional constant v₀ ∈ I would be necessary in order to fix a unique quantification result Q̃(∅, Y₂) = v₀ in this case as well.
To sum up, Zadeh proposes a reduction of second-order fuzzy quantifiers to first-order specifications. In terms of these simplified descriptions, then, an absolute quantifier can be described by a membership function µQ : R⁺ → I, while a proportional quantifier can be represented by a membership function µQ : I → I. A possible choice of µQ for the proportional quantifier "almost all" is shown in Fig. 1.1. In this case, we have a membership function µalmost all defined by

µalmost all(x) = S(x, 0.7, 0.9)   (1.1)

for all x ∈ I, where S is Zadeh's S-function defined by

S(x, α, γ) =  0                          if x ≤ α
              2 · ((x − α)/(γ − α))²      if α < x ≤ (α + γ)/2      (1.2)
              1 − 2 · ((x − γ)/(γ − α))²  if (α + γ)/2 < x ≤ γ
              1                          if x > γ

for all x, α, γ ∈ I.
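For concreteness, (1.1) and (1.2) translate directly into a small computation; the following sketch, with the parameter values 0.7 and 0.9 used above, is meant only as an illustration:

def zadeh_s(x, alpha, gamma):
    # Zadeh's S-function, cf. (1.2)
    if x <= alpha:
        return 0.0
    if x <= (alpha + gamma) / 2:
        return 2 * ((x - alpha) / (gamma - alpha)) ** 2
    if x <= gamma:
        return 1 - 2 * ((x - gamma) / (gamma - alpha)) ** 2
    return 1.0

def mu_almost_all(x):
    # first-order description of "almost all", cf. (1.1) and Fig. 1.1
    return zadeh_s(x, 0.7, 0.9)

print(mu_almost_all(0.8))  # 0.5: a proportion of 80% counts as "almost all" to degree 0.5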
Fig. 1.1. A possible definition of µalmost all

Zadeh [197, p. 150] further proposes to view these first-order representations as fuzzy numbers: "More generally, we shall view a fuzzy quantifier as a fuzzy number which provides a fuzzy characterization of the absolute or relative cardinality of one or more fuzzy or nonfuzzy sets."
However, there might be some convexity implications of fuzzy numbers (depending on the chosen notion of fuzzy number) which are violated by artificial quantifiers like "an even number of". Therefore we will avoid the term "fuzzy number" in connection with Zadeh's first-order representations. In the book, the following unambiguous notation will be used to discern the second-order notion of fuzzy quantifiers and their first-order descriptions. The second-order fuzzy quantifiers are symbolized as mappings Q̃ : P̃(E) → I or Q̃ : P̃(E)² → I, i.e. the µQ-notation will not be used for them, and they are further marked by the tilde in order to signify that they refer to Type IV quantifications and thus accept fuzzy arguments. The first-order descriptions, on the other hand, will always be written in the µQ-notation, which then denotes a mapping µQ : R⁺ → I or µQ : I → I, depending on the type of the quantifier of interest (i.e. absolute or proportional). Hence the second-order fuzzy quantifier for "almost all" is a mapping P̃(E)² → I, while µalmost all refers to the membership function of its first-order description, µalmost all : I → I. In order to eliminate a possible source of confusion, we shall introduce a different terminology for Q̃ (the second-order fuzzy quantifier) and µQ (the membership function of the first-order representation). For the sake of making a classificatory distinction, the term 'fuzzy quantifier' will be reserved for the second-order construct Q̃. The first-order descriptions µQ : R⁺ → I or µQ : I → I, by contrast, will be called fuzzy linguistic quantifiers. It should be pointed out that Zadeh and his followers identify the above cases and generally use 'fuzzy quantifiers' and 'fuzzy linguistic quantifiers' interchangeably. In this book, however, fuzzy quantifiers and fuzzy linguistic quantifiers will be technical terms with different meanings as explained above.
In addition, when talking about approaches based on fuzzy linguistic quantifiers, we will refer to those approaches which operate on these µQ's in order to determine the interpretation of NL quantifiers. In this way, the terminology of the traditional account of fuzzy quantification (which rests on first-order µQ's or 'fuzzy linguistic quantifiers') is clearly separated from the novel approach exposed here, which introduces a very different framework. At this point, the repository of modelling devices as well as suitable specifications of such quantifiers have been introduced (i.e. unary or binary fuzzy quantifiers as the operations, and fuzzy linguistic quantifiers as representations). It should now be pretty obvious how an interpretation mechanism Z must look which closes the gap between the first-order representations µQ and corresponding operational quantifiers Q̃ = Z(µQ). In principle, then, the interpretation mechanism is expected to map specifications of absolute quantifiers µQ : R⁺ → I to unary or binary quantifiers Zabs⁽¹⁾(µQ) : P̃(E) → I and Zabs⁽²⁾(µQ) : P̃(E)² → I, in order to express the unrestricted and restricted use of the quantifier for a given base set E ≠ ∅. A similar definition is also needed for specifications of proportional quantifiers µQ : I → I, which should be mapped to unary quantifiers Zprp⁽¹⁾(µQ) : P̃(E) → I, for modelling the unrestricted use, and to binary quantifiers Zprp⁽²⁾(µQ) : P̃(E)² → I to express restricted quantification. By introducing this Zabs⁽ⁱ⁾ and Zprp⁽ⁱ⁾ notation, we will be able to describe the approaches in Zadeh's tradition in a uniform way. (See Chap. A in the appendix for details.) Zadeh not only proposes the use of first-order representations and fuzzy cardinality measures for modelling fuzzy quantification, though. He also explains how to design such interpretation mechanisms. His second fundamental idea on fuzzy quantification, then, is concerned with the functioning of a model of fuzzy quantification. Specifically, Zadeh [197, p. 159] proposes to determine the quantification results from the fuzzy linguistic quantifier µQ and the given arguments through a computational metaphor which reduces fuzzy quantification to a comparison of fuzzy cardinalities or ratios of cardinalities. He thus suggests viewing the following propositions as semantically equivalent:

There are Q A's  ⇔  Count(A) is Q
Q A's are B's    ⇔  Prop(B|A) is Q.

Hence in order to evaluate an absolute quantifying statement like "There are about eighty X's", one needs a scalar or fuzzy measure of the cardinality of fuzzy sets, which determines a quantity Count(X). The resulting quantity must then be compared to the fuzzy linguistic quantifier, i.e. to the given µQ, and this comparison will determine the numerical score of the final quantification result, symbolized Zabs⁽¹⁾(µQ)(X) in the above notation. Turning to proportional quantifiers, a proposition like "Almost all X₁'s are X₂'s" is believed to depend on some quantity Prop(X₂|X₁) denoting the proportion, or relative share, of X₁'s which are X₂'s. Again, this quantity can either be a scalar
number or a fuzzy subset of the unit interval I = [0, 1]. By comparing the resulting quantity to the given µQ in some way, one then obtains the quantification result, symbolized Zprp⁽²⁾(µQ)(X₁, X₂) in the proposed notation. It should be apparent from this description that Zadeh's ideas leave two degrees of freedom for developing approaches to fuzzy quantification: (a) the choice of the measure of fuzzy cardinality and relative cardinality, i.e. the definitions to substitute for Count(X) and Prop(X₂|X₁); and (b), the way in which the comparison of these (absolute or relative) cardinalities is accomplished. Before turning to the individual approaches which instantiate this framework, let us clarify a few more differences between Zadeh's terminology and the classifications used in this work. Zadeh's usage of the terms 'universe of discourse' and 'base set' differs from the way they are used in this book. What we will call the 'universe of discourse', i.e. the dedicated base set which supplies the individuals that we can talk about, has no special name in Zadeh's system, and is identified with other base sets, i.e. collections over which a quantification can possibly range (points in time or in space etc.). Zadeh uses the term 'universe of discourse', rather than 'base set', to denote all of these collections. In Zadeh's writings [197, p. 159], the term 'base set' is reserved for the qualifying first argument A in a pattern like "Most A's are B's", and it therefore corresponds to the 'restriction' of a quantifier in the terminology of this book. Zadeh has no special term for denoting the second argument B, which we call the 'scope' of the quantifier. To be sure, Zadeh also uses the word 'scope'. In his terminology, however, the scope of a (binary) quantifier is the argument tuple (B, A), where A, B ∈ P̃(E) are given choices of the variables in the pattern "Q A's are B's", see [197, p. 47].

1.9.1 A Survey of Existing Approaches

The abstract framework for fuzzy quantification described above introduces specifications and target operations for absolute and proportional quantifiers, and it clarifies the role of interpretation methods. In order to be useful in practice, however, the framework must be populated with concrete examples of possible interpretation mechanisms, and Zadeh [197] makes two instructive proposals. The first method described in [197], which we will call the Σ-count approach in this monograph, is based on a scalar measure of the cardinality of fuzzy sets and of fractions of such cardinalities. This scalar measure, known as the Σ-count or 'power' of a fuzzy set [36], serves to implement the cardinality comparisons in "Count(X) is Q" and "Prop(Y|X) is Q" to which Zadeh attempts to reduce all Type IV quantifications. Zadeh's second proposal, called the FG-count approach in this monograph, no longer supposes that the cardinality of a fuzzy set can be represented by a single scalar number. By contrast, the cardinality of a fuzzy set will now be represented by a fuzzy subset of the cardinal numbers, the so-called FG-count [197]. Other authors have usually adopted Zadeh's classification of absolute and relative quantifiers, and his basic framework for representing and interpreting
quantifiers of these types. Most authors also share his assumption that Type IV quantifications are reducible to a comparison of (absolute or relative) cardinalities. Thus, the approaches described in the literature mainly differ in the measure of fuzzy cardinality used and in the way that the required comparison of fuzzy cardinalities is accomplished. Some of these new proposals are directly motivated from negative evidence against an earlier technique; the modified method then aims at an improvement for the critical cases. An example is Ralescu's model of fuzzy quantification [132], which is presented as an improvement compared to Yager's FG-count-based method [174]. Other proposals attempt a generalization of existing methods, e.g. by replacing min and max with general fuzzy conjunctions and disjunctions [33, 35] or by making use of general implications [3, p. 15, eq. (3)]. Finally, there are also some approaches based on conceptually different techniques [179, 181, 183, 185]. Let us now consider the main approaches. Yager [174] presented a model of fuzzy quantification suited for nondecreasing quantifiers, which is essentially based on the FG-count. In [175], Yager proposes a generalization of his method to arbitrary fuzzy conjunctions and disjunctions. In addition, Yager extends the method towards binary proportional quantifiers, thus incorporating importances [180, p. 72]. His method, although not introduced in this way, is a generalization of the basic FG-count model (see Sect. A.5). Ralescu [132] attempts to improve upon Yager's model with an alternative proposal for interpreting fuzzy quantifiers motivated by a possibilistic argument. The method not only considers the degree to which a given fuzzy set has at least k elements, but also the degree to which the remaining elements are not contained in it. We will call Ralescu's proposal the FE-count approach here because the method can be reconstructed in terms of the FE-count, a measure of fuzzy cardinality described by Zadeh [197]. Each of the three basic methods mentioned so far – i.e. the Σ-count, FG-count and FE-count approaches – adopts its particular notion of fuzzy cardinality, which serves as the basis for performing cardinality comparisons. Thus, these approaches exemplify Zadeh's basic idea of reducing Type IV quantifications to a gradual comparison "Count(X) is Q" or "Prop(Y|X) is Q". A different approach was proposed by Yager [179], who models fuzzy quantifiers as Ordered Weighted Averaging (OWA) operators. Thus, fuzzy quantification is treated as a special case of an aggregation problem. The basic OWA approach, which is restricted to unary proportional quantifiers based on nondecreasing choices of µQ, has later been extended towards nonincreasing and unimodal quantifiers [184]. In addition, Yager has developed several methods for incorporating importances [179, 181, 183]. It should be remarked that the OWA approach, although originally not defined in terms of a cardinality measure, also fits into the general framework because it can be reformulated as a function of a fuzzy cardinality, see note on p. 47. Another reduction to a fuzzy cardinality measure is described in [35]. The three approaches mentioned earlier, as well as the OWA approach, have been most important to the
theoretical development of fuzzy quantification and they are most frequently found in applications (in particular the Σ-count and OWA methods are very popular). Consequently, these four methods will be considered the main approaches to fuzzy quantification, and they will be discussed at some more length in the remainder of this chapter. Apart from these well-known methods, there are several minor variants and generalizations which also belong in Zadeh's general framework and share his basic assumptions. To begin with, there are several variants of the OWA approach which differ in their formalization of two-place quantification, or 'importance qualification' in Yager's terminology. Here we will focus on Yager's original proposal [179]; two alternative techniques to include weights are described in [181, 183]. A recent proposal of Yager, which is also concerned with importance qualification in an OWA setting, is mentioned in [186]. Yager's models of two-place quantification make essential use of a coefficient derived from the given quantifier, the so-called 'degree of orness' (see Sect. A.4 in the appendix for details). Others have tried to develop improved methods for restricted proportional quantification which are also parametrized by the degree of orness. Vila et al. [165], for example, propose a convex combination (i.e. linear interpolation) of restricted universal and existential quantification, where the degree of orness is used as the interpolation coefficient. Delgado, Sánchez and Vila [33] propose an integral-based method which incorporates both the FG-count and OWA approaches, depending on the choice of fuzzy conjunction and disjunction (the method is also described in English in Barro et al. [3, pp. 19–21]). The method is extended towards restricted proportional quantification, again, by employing an aggregation operator parametrized by the degree of orness. The same authors later presented a general cardinality-based method which also encompasses the FG-count and OWA approaches as special cases [35]. This proposal certainly marks an important theoretical advance because it explains these techniques from a unifying, cardinality-based perspective. In addition, the 'core' approaches are extended to arbitrary quantifiers (without special assumptions on monotonicity), and a general analysis of relative cardinalities and binary proportional quantification is also provided. Finally, Barro et al. [3] consider a generalization of Yager's [180] inclusion approach towards general fuzzy implications (but they do not recommend their method due to some intrinsic problems which it shares with Yager's proposal). Due to the conceptual vicinity of fuzzy quantification and cardinality assessments, another point of departure is the improvement or generalization of the underlying cardinality measures, from which novel methods for fuzzy quantification can then be derived. The work of Delgado et al. [35] mentioned above falls into this category. Historically, the discussion of cardinality measures for fuzzy sets starts with Zadeh's proposal of a fuzzy cardinality in [193], and Blanchard's [15] subsequent research into a suitable notion of fuzzy cardinality. Dubois and Prade [42] improve on Zadeh's original proposal in [193] (which may generate non-convex fuzzy cardinalities) and on the FE-
count measure (which discerns less structure than possible) by presenting an alternative measure of fuzzy cardinality with improved formal properties. In addition, Dubois and Prade develop a methodology for cardinality comparisons of fuzzy sets, which establishes a meaningful interpretation to statements like “A has more elements than B” when A, B are fuzzy sets. However, they seem to view this as an application of cardinality measures, not as a special kind of fuzzy quantification. Some further publications on the cardinality of fuzzy sets have been contributed by Ralescu [134] and Wygralak [170]. Finally let us consider those approaches to fuzzy quantification which depart from the general picture framed by Zadeh. Prade [127], to begin with, presents a fuzzy pattern-matching approach to the evaluation of fuzzy quantifiers which can be applied to arbitrary absolute quantifiers µQ : R+ −→ I without special monotonicity requirements. The approach does not generate a unique quantification result Z(µQ )(X) ∈ I, though, but rather determines interval-valued interpretations [N, Π], where N, Π ∈ I. As shown by Bosc and Lietard [21, p. 11], the result interval contracts into a unique scalar for monotonic quantifiers; the approach then coincides with the FG-count approach. Bosc and Lietard [21, 22] further show that the basic FG-count approach can be reduced to a Sugeno integral, whereas the basic OWA approach can be expressed as a Choquet integral. This means that these ‘core’ approaches can be explained by a comprehensive theory of fuzzy measures and integrals. Moreover, the integral-based reformulation is no longer restricted to quantifiers expressed in terms of cardinalities; obviously, it can handle arbitrary fuzzy measures (i.e. nondecreasing unary quantifiers). However, Bosc and Lietard abstract from the problem of supporting non-monotonic and multi-place quantifiers. Yager [173, p. 196] presents a general method which does not adopt Zadeh’s proposal of reducing Type IV quantifications to a comparison of fuzzy cardinalities. He explains the method as follows: “One commonly used approach of accomplishing this is called the substitution approach . . . In this approach, we try to represent a quantified statement by an equivalent logical sentence involving atoms which are instances of the predicate evaluated at the elements in D.”
(Here D is the domain of discourse, i.e. the base set.) Yager attributes this method to Suppes 1957 [155]. But Rescher [137, p. 203] also mentions the method in his standard reference on many-valued logic, and he believes it was introduced earlier by Wittgenstein. In fact, as noted by Bocheński [17, p. 15, p. 349], the method has been rediscovered over the centuries, and its beginnings can be attributed to Albert of Saxony (1316-1390), who first proposed a reduction of existential and universal propositions to disjunctions and conjunctions, cf. [17, (34.07), p. 234]. From a practical perspective, the substitution approach is not yet a concrete method but rather a framework for methods because a number of choices must be made when applying the method to
continuous-valued quantifiers. (For a concrete example of a conforming model, see Th. 7.87 on p. 208 below.) Another line of research independent of Zadeh's framework was pursued by H. Thiele [156–158]. Starting from a definition of 'general fuzzy quantifiers', i.e. mappings Q̃ : P̃(E) → I, Thiele develops various concepts useful for classifying such quantifiers. These methods permit him to characterize the general classes of fuzzy universal and existential quantifiers, called T- and S-quantifiers, respectively. Later Thiele also discussed 'median quantifiers' [159]. These are likely not realized in language, however, and they will be of no importance in this monograph. Another specialized class of quantifiers, so-called 'implicational' quantifiers, has been analysed by Hájek and Kohout [77] within their checklist-paradigm for defining fuzzy truth functions. To sum up, a variety of interpretation mechanisms, or more generally methods for fuzzy quantification, has been proposed which all aim at the proper modelling of these quantifiers in conformance with their meaning in ordinary language; a survey of approaches is also given in [4, 34, 108]. However, there is no consensus about the proper choice, i.e. no clear favourite which best answers the semantical expectations. Thus all models exist in parallel, and the Σ-count, OWA and FG-count approaches are often met in theoretical treatises and in applications (only the FE-count approach is hardly ever used, with good cause, as we shall see). In other words, no single model has emerged which clearly outperforms the other approaches. Quite the opposite: it appears that each method comes with its own internal deficiencies, and notes on the counter-intuitive behaviour of these approaches are scattered across the literature [2, 4, 35, 42, 54, 132, 134, 184]. Zadeh's first proposal, the Σ-count approach, was criticized by Yager [184], who presents a counter-example which documents the unwanted 'accumulative' behaviour of the method when there are a lot of small (but non-zero) membership grades. Barro et al. [4, p. 93] and Glöckner [54, pp. 12-15] offer further criticism. And Zadeh himself warns that his approach is incompatible with the formation of antonyms [197, p. 167]. Ralescu [132] criticizes Yager [174] and hence the basic FG-count approach, demonstrating that the method yields an implausible semantics when the quantifier is non-monotonic or unimodal, like the example "a few". Although Yager excludes these cases from consideration [174, p. 639], Ralescu's example is instructive in illustrating the limits of the FG-count model. Arnould and Ralescu [2] also argue against Yager [174], whose model is essentially the FG-count approach. However, the probabilistic model underlying their criticism makes an unusual methodological commitment. Yager's inclusion approach [180], which extends the basic FG-count model towards importance qualification, was criticized by Glöckner [54, pp. 21-25], see Sect. A.5 below. He also presents a counter-example against Ralescu's proposal [132], which reveals that the model assigns an implausible interpretation to the existential quantifier [54, p. 25/26]. Dubois and Prade [42] provide further evidence against the FG-count and FE-count approaches by criticizing the underlying cardinality measures for fuzzy sets. Glöckner [54, pp. 16-20],
Delgado, Sánchez and Vila [35, p. 32] and Barro, Bugarín, Cariñena and Díaz-Hermida [4, p. 95] present counter-examples against the three variants of the OWA approach for restricted proportional quantification described in [179], [181] and [183], respectively. Among other points of criticism, these extensions of the OWA method to importance qualification fail to be monotonic (i.e. inequalities between quantifiers are not preserved), and the approach cannot faithfully represent its target class of operators even for crisp arguments. The quantifier interpolation method of Vila et al. [165] fails for similar reasons, as shown by Barro et al. [4, p. 96] and Delgado et al. [35, p. 32]. Turning to the proposals of Delgado, Sánchez and Vila, their integral-based method [33] shows non-monotonic behaviour and other incorrect results even for crisp arguments, as reported by Barro et al. [4, p. 96] with regard to restricted proportional quantification. The cardinality-based method of Delgado et al. [35], which offers a unifying perspective on both the FG-count and OWA approaches, has lost much of its original appeal since Barro et al. [4, p. 97] subjected it to a critical inspection. Specifically, the resulting interpretations can show a discontinuous dependency on the membership grades of the involved fuzzy sets, and the approach is compatible neither with negation, nor with antonyms, nor with dualisation of quantifiers. Finally, Barro et al. [3, p. 15, eq. (3)] discourage the use of their own extension of Yager's inclusion approach; apparently, it only serves to inspect a possible direction of research and demonstrate that it should not be pursued further. In sum, then, we have a scattered picture of rivaling approaches, all of which appear to struggle with their own problems. We shall now discuss the four main approaches at some more length (further technical details and supplementary information can be found in Appendix A). In order to provide a common point of reference for the various counter-examples that will be presented, a fixed scenario has been chosen. Thus, all examples will originate from a domain of meteorological images which describe 'cloudiness situations'. These images form part of the document base of an experimental retrieval system for multimedia weather documents [62, 63]. This system had to be capable of ranking satellite images according to accumulative criteria like "Almost all of Southern Germany is cloudy". Consequently, we will now be concerned with an image ranking task, where the images of interest are first evaluated for their compatibility with a given quantifying criterion, and the computed numerical scores are then used to rearrange the images into a meaningful order, which should parallel the perceived quality of match. The fuzzy sets (or images, in this case) involved in this process are depicted in Fig. 1.2. Here, the base set E comprises the pixel coordinates which form the shape of Germany. There are two images to be considered. X₁ is a fuzzy subset of E which represents Southern Germany. The region of white pixels fully belongs to Southern Germany, and the grey pixels are those which belong to Southern Germany to some degree. X₂ is a fuzzy subset which represents a cloudiness situation. White pixels are fully cloudy, grey pixels are cloudy to some degree. (In the right image, the contours of Germany have been added in order to facilitate interpretation.
Fig. 1.2. Running example of a base set E, the pixels forming Germany. Left. Fuzzy subset X1 : Southern Germany. Right. Fuzzy subset X2 : Cloudy
The lower part fully belongs to Southern Germany, the upper part belongs to Germany, but not to Southern Germany, and the middle part contains the intermediate cases.) The images X₁ (Southern Germany) and X₂ (representing a cloudiness situation) constitute the fuzzy arguments to which a binary proportional quantifier Q̃ will be applied, which determines a quantification result Q̃(X₁, X₂) ∈ I. Given a suitable choice of the quantifier, Q̃(X₁, X₂) will express the degree to which "Almost all of Southern Germany is cloudy" holds in the given situation. The simple scenario so defined provides a testbed for approaches to fuzzy quantification, because each method Z will determine its specific choice of quantifier Q̃ = Zprp⁽²⁾(µQ). The specific properties of Z will express themselves in the numerical scores τ = Zprp⁽²⁾(µQ)(X₁, X₂) which it assigns to the example images. In many cases, an implausible model will not only show up in the numerical scores, but also in the computed ranking of the images, which differs from the expected order by relevance.

1.9.2 The Sigma-Count Approach

Let us first review the Σ-count approach proposed by Zadeh [197], a simple model of fuzzy quantification derived from a scalar cardinality measure for fuzzy sets. The model rests on the computation of Σ-counts, also known as the power of a fuzzy set, which are defined as the sum of all membership grades, thus
Σ-Count(X) = Σ_{e∈E} µX(e).
Zadeh’s basic idea is that Σ-Count(X) “may be used as a numerical summary of the fuzzy cardinality of a fuzzy set” [193, p. 167], and can thus be substituted for the ordinary cardinality of crisp sets in order to evaluate quantifying expressions. For unrestricted absolute statements “There are Q X’s”, then, we get the associated score µQ (Σ-Count(X)), based on the given µQ : R+ −→ I
and X ∈ P̃(E). The so-called relative Σ-count [189] is used for modelling proportional quantification, i.e. the interpretation is based on the relative share of those X₁ that are X₂. For a statement "Q X₁'s are X₂'s" involving a proportional quantifier, we then obtain the score

τ = µQ( Σ-Count(X₁ ∩ X₂) / Σ-Count(X₁) ),

where µQ : I → I and X₁, X₂ ∈ P̃(E). It is apparent from these formulas that the Σ-count approach depends only on very basic arithmetics, which makes it easy to implement and thus attractive for applications. However, there are serious concerns regarding its ability to catch the actual meaning of NL quantifiers. The first line of criticism is concerned with the known deficiency that the Σ-count approach accumulates membership grades, see Ralescu [132, 134] and Yager [184]. Hence suppose E = {John, Lucas} is a set of persons and bald the fuzzy subset defined by µbald(John) = 0.5,
µbald (Lucas) = 0.5 .
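The resulting judgement can be reproduced with a few lines of code; the sketch below is our own illustration, with the fuzzy set again represented as a Python dictionary and a crisp, illustrative choice of µQ for "exactly one":

def sigma_count(X):
    # Sigma-count ('power') of a fuzzy set: the sum of all membership grades
    return sum(X.values())

def mu_exactly_one(n):
    # crisp first-order description of "exactly one" (an illustrative choice)
    return 1.0 if n == 1 else 0.0

bald = {"John": 0.5, "Lucas": 0.5}

# unrestricted absolute quantification: "Exactly one person is bald"
print(mu_exactly_one(sigma_count(bald)))  # 1.0: two half memberships add up to one full element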
The Σ-count approach then judges the statement "Exactly one person is bald" as being fully true, which is clearly unacceptable. In general terms, the problem is that a number of smaller membership grades can possibly become indiscernible from a single large membership grade. The images depicted in Fig. 1.3 further elucidate this mismatch with the expected semantics of quantifiers in ordinary language. In the center result image, there is about ten percent cloud coverage of Southern Germany, and the condition "About ten percent of Southern Germany are cloudy" correctly evaluates true in the Σ-count approach. In the right image, however, all of Southern Germany is almost not cloudy (i.e. cloudy to the low degree of 10%), which clearly does not mean that ten percent of Southern Germany are cloudy.⁶ In fact, Zadeh anticipated such problems, and therefore remarked that "for some applications, it is necessary to eliminate from the count those elements of F whose grade of membership falls below a specified threshold" [193, p. 167]. However, these thresholds must be considered an ad-hoc device which does not offer explanatory value. The treatment of Type II quantifications in the Σ-count approach is also counterintuitive. To see this, consider an NL quantifier of the precise type, e.g. "more than 30 percent". Such quantifiers require a membership function µQ which is also two-valued.⁷ But if µQ is two-valued, then the Σ-count approach produces a 'degenerate' model in terms of a fuzzy quantifier which only
⁶ This is apparent because everyone who claims that 10 percent of Southern Germany is cloudy in the right image must also be able to tell which 10% of Southern Germany are cloudy. However, there is no reasonable choice for that because all pixels which belong to Southern Germany are almost not cloudy.
⁷ As we shall see below in Sect. A.3, which discusses the issue in some more depth, it is not permissible to replace the two-valued mapping µQ with a smooth choice of quantifier in order to avoid these effects, because the intended semantics of the quantifier would be lost.
Fig. 1.3. The Σ-count approach accumulates ‘small’ membership grades in an undesirable way. Results of “About 10 percent of Southern Germany are cloudy”. Left: Southern Germany. Result for center image: 1 (plausible). Result for right image: 1 (inadequate)
returns the crisp results 0 or 1 (i.e., fully false or fully true). In particular, the resulting interpretations are discontinuous mappings and very sensitive to slight changes in the membership degrees of the fuzzy sets supplied to the quantifier. The example in Fig. 1.4 illustrates this extreme brittleness of the Σ-count approach. Here, the center image and the right image depict very similar cloudiness situations. However, the results obtained from the Σ-count approach are radically different (0 vs. 1). Obviously, this sensitivity to minor variations in the membership degrees is unacceptable for many real-world applications which depend on sensory data and are thus faced with noise, limited accuracy of sensors, quantization errors etc.⁸ To sum up, the Σ-count approach is appealing at first sight due to its simple definition, which makes it easy to implement. However, the known deficiencies of the approach do not encourage its use for applications. Its accumulating effects and brittleness can already be observed in the simple case of one-place quantification. This means that the Σ-count approach does not possess a reasonable 'core' which might provide a starting point for a plausible extension to a broader class of
Fig. 1.4. Results of computing "At least 60 percent of Southern Germany are cloudy" with the Σ-count approach. Left: Southern Germany. Result for center image: 0, result for right image: 1
Zadeh’s proposal of adding thresholds will produce discontinuous behaviour even for genuine fuzzy quantifiers with smooth membership functions.
quantifiers. For these reasons, the Σ-count approach will not be considered further in this monograph.

1.9.3 The OWA Approach

Next we shall review Yager's OWA approach to fuzzy quantification [179, 180], which is based on ordered weighted averaging (OWA) operators. The basic OWA approach is only concerned with proportional quantifiers, which are further required to be 'regular nondecreasing', i.e. µQ : I → I has µQ(0) = 0, µQ(1) = 1 and µQ(x) ≤ µQ(y) whenever x ≤ y. Now let m = |E| be the number of elements in the given domain. Then µQ(j/m) − µQ((j − 1)/m) is the quantifier's increment if the number of elements is increased from j − 1 to j, or proportionally from (j − 1)/m to j/m. By µ[j](X) we denote the j-th greatest membership grade of X (including duplicates), i.e. the j-th element in the ordered sequence of membership values. Intuitively, µ[j](X) expresses the degree to which X has at least j elements. Using this notation, Yager's rule for unrestricted proportional quantification then assigns the following score to "Q things are X", viz
τ = Σ_{j=1}^{m} (µQ(j/m) − µQ((j − 1)/m)) · µ[j](X),   (1.3)

where µQ : I → I must be regular nondecreasing and X ∈ P̃(E).
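Computationally, (1.3) is just a weighted sum of the ordered membership grades; a minimal sketch (our own formulation, with µQ passed as a Python function) could look as follows:

def owa_score(mu_q, X):
    # Basic OWA evaluation of "Q things are X" for a regular nondecreasing
    # proportional quantifier mu_q : [0,1] -> [0,1], cf. (1.3).
    # X is a dict covering the whole finite base set E.
    m = len(X)
    grades = sorted(X.values(), reverse=True)   # mu_[1](X) >= ... >= mu_[m](X)
    weights = [mu_q(j / m) - mu_q((j - 1) / m) for j in range(1, m + 1)]
    return sum(w * g for w, g in zip(weights, grades))

at_least_60_percent = lambda x: 1.0 if x >= 0.6 else 0.0   # regular nondecreasing
X = {"a": 1.0, "b": 0.9, "c": 0.2, "d": 0.1, "e": 0.0}
print(owa_score(at_least_60_percent, X))   # 0.2, the third-largest grade (3/5 = 60%)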
This 'core' OWA approach is defined for unrestricted (one-place) quantification only. Recognizing the need to incorporate importances and thus support two-place quantification as well, Yager [179, p. 190], [180] suggests transforming the fuzzy arguments X₁, X₂ supplied to a two-place quantifier into a single fuzzy set Z to which the original rule for unary quantification is then applied. A restricted quantifying statement "Q X₁'s are X₂'s" is then assigned the score of the unrestricted statement "There are Q Z's", which can already be handled by (1.3). The fuzzy set Z is defined by

µZ(e) = max(µX₁(e), 1 − orness(µQ)) · µX₂(e) / max(µX₁(e), orness(µQ)),
see equation (A.2) in the appendix for the definition of orness(µQ), the so-called 'degree of orness' of µQ. This extension to restricted quantification makes it possible to treat cases like "Almost all young are poor", which require two arguments young, poor ∈ P̃(E) to be taken into consideration. There is negative evidence regarding its semantical plausibility, however, because the proposed formula assigns incorrect models to any 'genuine' proportional quantifiers (see pp. 402+ below). In fact, the only two cases which are handled successfully are the logical quantifiers "all" and "exists". Here we shall confine ourselves to a suggestive example which demonstrates that the theoretical defects are also potentially detrimental to applications. Hence consider the situation depicted in Fig. 1.5. The two results in the figure were obtained from
OWA operators with importance qualification. In this case, the task was to assess the truth of the proposition “At least 60 percent of Southern Germany are cloudy”, which is obviously the case in the center image, but not in the right image. The OWA approach, though, ranks the right image higher than the center image. As witnessed by Fig. 1.5, the results of the OWA approach show an undesirable dependency on cloudiness grades in regions III and IV, which do not belong to Southern Germany at all. This kind of behaviour runs counter to the intuitive meaning of the NL quantifier “at least 60 percent”.
Fig. 1.5. Application of the OWA-approach. Results of “At least 60 percent of Southern Germany are cloudy”. Left: Southern Germany. Result for center image: OWA: 0.1, desirable outcome: 1. Result for right image: OWA: 0.6, desirable: 0
Additional difficulties arise when the core OWA approach is extended beyond regular nondecreasing quantifiers. To see this, consider the quantifier "less than 60 percent", which does not belong to the regular nondecreasing type. In order to make the OWA approach applicable to such quantifiers, we must resort to equivalent NL paraphrases. For example, we can use "at least 60 percent" and evaluate "Less than 60 percent of the X₁'s are X₂'s" by negating the result of "At least 60 percent of the X₁'s are X₂'s", which means a reduction to the regular nondecreasing type. Alternatively, we can use "More than 40 percent of the X₁'s are X₂'s" and evaluate "Less than 60 percent of the X₁'s are X₂'s" by computing "More than 40 percent of the X₁'s are not X₂'s". Intuitively, we should expect that the final results of these computations will coincide because they stem from equivalent paraphrases. Yager's proposal for binary proportional quantification treats both cases differently, though. A counter-example is shown in Table 1.5. Similar remarks apply to the modelling of unimodal and other more complex quantifiers, see pp. 405+ for a thorough discussion. To sum up, the basic OWA approach for regular nondecreasing one-place quantifiers appears to be rather well-behaved; at least no counter-examples are known. Thus the 'core' approach (which is too limited for important applications of quantifiers) will be considered a potential starting point for future extensions. These must aim at covering non-monotonic and two-place quantifiers (importance qualification) as well. As mentioned above, earlier proposals to effect this extension do not achieve the interpretation quality of the basic method. As was shown
Table 1.5. "Less than 60 percent of Southern Germany are cloudy". Results of possible renderings when applying the OWA approach to the images in Fig. 1.5

Quantifier                    center image   right image
not at least 60 percent       0.9            0.4
more than 40 percent not      0.4            0
desired result                0              1
by Barro et al. [3] and Delgado et al. [35, p. 32], this negative evidence also generalizes to Yager's alternative methods for extending the OWA approach to two-place quantification.

1.9.4 The FG-Count Approach

Apart from the Σ-count approach, Zadeh [197] proposes another model of fuzzy quantification; see also Yager [174], who uses the same basic formulas in the context of rule-based expert systems. The FG-count approach tries to improve upon the use of Σ-counts by replacing the original scalar measure with a fuzzy measure of the cardinality of fuzzy sets. Hence let FG-count(X) denote the fuzzy set of cardinals defined by

µFG-count(X)(j) = µ[j](X)   (1.4)
for all j ∈ N. (Here and in the following it is convenient to stipulate that µ[0] (X) = 1 and µ[j] (X) = 0 for j > |E|). Intuitively, FG-count(X) captures the degree to which X has at least j elements (for all possible j). The FGcount incorporates a richer base of information compared to the Σ-count, which now becomes a summary of the FG-count defined by Σ-Count(X) =
∞
µFG-count(X) (j) − 1 ,
j=0
see Zadeh [197, p. 157].⁹ The new measure of fuzzy cardinality can be used to derive a corresponding model of fuzzy quantification. In the 'basic' FG-count approach, the score of an absolute quantifying statement "There are Q X's" becomes
τ = max{min(µQ(j), µFG-count(X)(j)) : j = 0, . . . , |E|} ,

where µQ : R+ −→ I must be nondecreasing and X ∈ P̃(E). The basic FG-count approach so defined only covers nondecreasing quantifiers, and it does not incorporate importances.

⁹ It should be apparent from (1.3) and (1.4) that the OWA approach, too, can be expressed in terms of the fuzzy cardinality measure FG-count(X). The OWA score then becomes
τ = Σ_{j=1}^{m} (µQ(j/m) − µQ((j−1)/m)) · µFG-count(X)(j) .
This demonstrates that the OWA approach, although not originally introduced in this way, is in fact a cardinality-based model of fuzzy quantification.
A possible extension towards the 'hard case' of two-place, weighted quantification has been described by Yager [180, p. 72].¹⁰ A proportional quantifying statement "Q X1 's are X2 's" is then assigned the score

τ = max{ min( µQ( Σ_{v∈S} µX1(v) / Σ_{v∈E} µX1(v) ), HS ) : S ∈ P(E) } ,
HS = min{ max(1 − µX1(v), µX2(v)) : v ∈ S } ,

where µQ : I −→ I is again assumed to be nondecreasing and X1 , X2 ∈ P̃(E). In order to get some impression of the resulting interpretations, we consider the situation depicted in Fig. 1.6. There are no clouds at all in (the support of) Southern Germany#1, so we should expect "At least five percent of Southern Germany are cloudy" to be completely false. The actual result, 0.55, thus reveals a defect of the proposed model. The example further demonstrates that the resulting operators can be discontinuous: If we replace Southern Germany#1 with the slightly different Southern Germany#2, then the result jumps to 0.95, although there are still no clouds in the region of interest. Hence some interpretations obtained from the two-place formula are very sensitive to noise, which discourages the use of the model in practical applications. Implausible results will also be obtained when quantifications like "Less than 30 percent of Southern Germany are cloudy", which cannot be interpreted directly due to their monotonicity pattern, are reduced to NL paraphrases involving nondecreasing quantifiers: (a) "It is not the case that at least 30 percent of Southern Germany are cloudy"; and (b) "More than 70 percent of Southern Germany are not cloudy". These NL paraphrases are equivalent and should therefore be interchangeable. The proposed extension of the FG-count approach to two-place quantification, though, produces incoherent results in both cases. To sum up, the 'core' FG-count approach appears to be well-behaved and does not have any overt flaws. But to be useful in practice, it must be extended beyond nondecreasing quantifiers, and support for restricted quantification must be added. As witnessed by the above examples, existing attempts at such an extension have not been successful.
Zadeh’s original proposal for two-place proportional quantification in the FGcount setting had to be excluded from consideration. The relationship between the above method for two-place quantification and the basic FG-count approach is explained in Sect. A.5 in the appendix.
Fig. 1.6. Application of the FG-count approach. Results of "At least 5 percent of Southern Germany are cloudy". Panels: (a) Southern Germany#1, (b) Southern Germany#2, (c) cloudy. Result for Southern Germany#1: 0.55, result for Southern Germany#2: 0.95. The desired result: 0
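To make the cardinality-based mechanism tangible, the following minimal Python sketch (an illustration added here, not part of the original exposition) computes FG-counts and the basic FG-count score for a nondecreasing absolute quantifier; the quantifier "at least 2" and the membership grades are assumed toy data.

def fg_count(memberships):
    # FG-count of a fuzzy set, given as its list of membership grades:
    # mu_FG(j) is the j-th largest membership grade, i.e. the degree to
    # which the set contains at least j elements (with mu_FG(0) = 1).
    return [1.0] + sorted(memberships, reverse=True)

def fg_score(mu_q, memberships):
    # Basic FG-count evaluation of "There are Q X's" for nondecreasing mu_q:
    # tau = max_j min(mu_q(j), mu_FG(j)).
    fg = fg_count(memberships)
    return max(min(mu_q(j), fg[j]) for j in range(len(fg)))

at_least_2 = lambda j: 1.0 if j >= 2 else 0.0       # assumed example quantifier
print(fg_score(at_least_2, [1.0, 0.6, 0.3]))        # 0.6, the second-largest grade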
1.9.5 The FE-Count Approach

Finally we review the method proposed by Ralescu [132]. This method is rather similar to Yager's proposal [174], i.e. to the basic FG-count approach. However, Ralescu attempts to overcome the restriction to nondecreasing quantifiers by replacing the µ[j](X) terms with the expression µ[j](X) ∧ ¬µ[j+1](X), which not only considers the degree to which X contains at least j elements, i.e. µ[j](X), but also the degree to which X has less than j + 1 elements, i.e. 1 − µ[j+1](X). Thus a sentence of the form "There are Q X's" which involves unrestricted absolute quantification will now be assigned the interpretation

τ = max{min{µQ(j), µ[j](X), 1 − µ[j+1](X)} : j = 0, . . . , |E|} ,

where µQ : R+ −→ I and X ∈ P̃(E). The method will be called 'FE-count approach' here because it makes implicit use of the FE-count, a measure of fuzzy cardinality introduced by Zadeh [197]. The FE-count of a fuzzy set X is the fuzzy set FE-count(X) ∈ P̃(N) with membership function µFE-count(X)(j) = min(µ[j](X), 1 − µ[j+1](X)). This means that Ralescu's model can also be written as

τ = max{min(µQ(j), µFE-count(X)(j)) : j = 0, . . . , |E|} .

This reformulation demonstrates that Ralescu's method is a fuzzy-cardinality based approach to fuzzy quantification which rests on the FE-count as the cardinality measure. A generalization of the basic FE-count method to the interpretation of two-place and proportional quantifiers has not been described in the literature. Let us now consider the example shown in Fig. 1.7. The crisp image region shown in the left image is clearly nonempty, because it contains several 'white' pixels with full degree of membership. The fuzzy image region displayed in the right image is certainly nonempty to the same degree of 1.0, because it embraces the crisp nonempty image region shown on the left. The FE-count approach, however, only rates the left image as nonempty to the desired degree of 1.0. Although the right image displays a larger region, the result drops to 0.5, which is clearly unacceptable. This means that the FE-count model shows unexpected results even in the simplest case of one-place quantification based on a nondecreasing absolute quantifier (in this case, the quantifier ∃ defined by µ∃(0) = 0, µ∃(x) = 1 for all x > 0).
Fig. 1.7. Application of the FE-count approach. Results of "The image region X is nonempty". Left: desired result: 1, FE-count approach: 1. Right: desired result: 1, FE-count approach: 0.5
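A minimal Python sketch (illustrative only; the membership vectors are assumed stand-ins for the image regions of Fig. 1.7) reproduces this behaviour of the FE-count model:

def fe_count(memberships):
    # FE-count: mu_FE(j) = min(mu_[j](X), 1 - mu_[j+1](X)), where mu_[j](X) is
    # the j-th largest membership grade (mu_[0] = 1, mu_[j] = 0 for j > |E|).
    b = [1.0] + sorted(memberships, reverse=True) + [0.0]
    return [min(b[j], 1.0 - b[j + 1]) for j in range(len(b) - 1)]

def fe_score(mu_q, memberships):
    # Ralescu-style evaluation of "There are Q X's": tau = max_j min(mu_q(j), mu_FE(j)).
    fe = fe_count(memberships)
    return max(min(mu_q(j), fe[j]) for j in range(len(fe)))

nonempty = lambda j: 0.0 if j == 0 else 1.0          # the quantifier "exists"
print(fe_score(nonempty, [1.0, 1.0, 0.0, 0.0]))      # crisp region: 1.0
print(fe_score(nonempty, [1.0, 1.0, 0.5, 0.5]))      # larger, fuzzier region: 0.5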
To sum up, there is no well-behaved 'core approach' which might be generalized to a broader class of NL quantifiers.
1.10 Chapter Summary: The Need for a New Framework

This chapter has been concerned with the characteristics of NL quantifiers, and the phenomenon of fuzziness – which enters into quantifying propositions through approximate quantifiers and vague arguments – was shown to be an essential component of linguistic quantification. An evaluation of the main approaches to fuzzy quantification as to their linguistic adequacy came out negative, however, and in the literature, one finds serious criticism of all methods that have been proposed. The typical problems of these approaches can be summarized as follows: First of all, non-monotonic quantifiers like "about half", "around twenty" or "an even number of" appear to be notoriously difficult, and none of the traditional approaches will assign a plausible semantics in this case (this is apparent because only two approaches have a well-behaved 'core', i.e. the FG-count and OWA methods, and the core method is restricted to nondecreasing quantifiers in both cases). This problem has implications for restricted proportional quantification, as in "Most X1 's are X2 's". This is because proportional quantifiers usually lack special monotonicity properties in their first argument (e.g. "most" is neither nondecreasing nor nonincreasing in X1 ). Thus, we expect that traditional approaches will also fail to assign a meaningful interpretation to binary proportional quantifiers. And indeed, the formulas proposed for restricted proportional quantification, as in "Many tall people are lucky", where both the restriction "tall" and the scope "lucky" are fuzzy, fall short of the linguistic expectations (as witnessed by the above examples of the main approaches and the pointers to the literature for the remaining methods). This must be regarded as a serious defect because for applications, proportional quantifiers with importance qualification are usually the most relevant case.
In principle it would be possible to experiment further with Zadeh's working scheme and formulate new proposals within the given set-up. However, the framework proposed by Zadeh is far too narrow from a linguistic point of view. Zadeh artificially restricts attention to absolute and proportional quantifiers, thus disregarding many other types of similar importance to natural language. For example, quantifiers of exception like "all except ten", cardinal comparatives like "many more than" or "twice as many", and comparatives on relative cardinalities like "a larger proportion" should all be covered by a comprehensive theory. In particular, two-place quantification is not the end of the scale concerning the number of arguments that must be handled. Cardinal comparatives, for example, call for three-place quantification ("Many more X1 's than X2 's are X3 's"), and comparatives on relative cardinalities even demand four-place quantification ("The proportion of X1 's which are X2 's is much greater than the proportion of X3 's which are X4 's"). In other words, what we need is a theory of fuzzy multi-place quantification, i.e. a generic solution for arbitrary n-place quantifiers. The problem of restricted proportional quantification or 'importance qualification' will then be clarified in passing. It is important to understand that the coverage problem of the traditional framework cannot be solved by simply adding further types of quantifiers. What we need is an analysis which accounts for arbitrary NL quantifiers, and from the representational perspective, this means that a universal specification format is required. Zadeh's framework, however, does not offer a uniform representation that suits different kinds of quantifiers. Instead, each type of quantifier comes with its own specification format, viz µQ : R+ −→ I for absolute quantifiers and µQ : I −→ I for proportional quantifiers. Moreover, each type of quantifier needs its own interpretation method (rule for computing quantification results). For example, Σ-counts are used for interpreting absolute quantifiers while the interpretation of proportional quantifiers will be based on a fraction of Σ-counts. This piecewise definition of the interpretation method makes it tedious to integrate new types of quantifiers and bears a high risk of incoherence. Because there is no formal link between the separate interpretations, it becomes hard to guarantee the expected systematicity. To sum up, the necessary extension to further types of quantifiers would result in a multiplicity of representations and interpretation rules, which is difficult to control and would likely bring about unpredictable interpretations. A generic solution based on a universal interpretation method for arbitrary quantifiers would certainly be preferable. The traditional framework, however, cannot be extended towards such a solution on formal grounds. The problem can be traced to Zadeh's idea of equating "There are Q X's" with "Count(X) is Q" (for absolute quantifiers), and of proposing a similar rendering of "Q X1 's are X2 's" as "Prop(X2 |X1 ) is Q" in the proportional case. This conception of fuzzy quantification suggests that an interpretation method stipulate definitions of Count(X) and Prop(X2 |X1 ) and specify a comparison method for the resulting quantities. It then appears that fuzzy quantification is
essentially a matter of discovering the proper cardinality measures and comparison operators, see Barro et al. [3, p. 2]. However, as pointed out in the comparison of logical and linguistic quantifiers on p. 10, not all linguistic quantifiers can be reduced to cardinality assessments, because not all of these quantifiers treat every element in the domain alike and are thus 'quantitative'. In order to avoid this a priori limitation, a comprehensive solution to fuzzy quantification must be formulated independently of any cardinality measure. It is likely due to the internal problems of existing approaches that applications of fuzzy quantifiers, like fuzzy database querying, tend to support more than one interpretation method. Kacprzyk and Zadrożny's fuzzy querying interface to Microsoft ACCESS [84], for example, supports both the Σ-count and OWA models. And Bosc and Pivert [23], when discussing an extension of the classical SQL language with fuzzy retrieval features, suggest the use of as many as three methods, i.e. Σ-counts, OWA operators/Choquet integral, and finally FG-counts/Sugeno integral. Thus, the burden of selecting the model which works best in the given situation is shifted to the user of these systems. However, it must be doubted that the users of the database can decide on the proper interpretation when even the system designers feel unable to do so. When viewing the development of fuzzy quantification in retrospect, one recognizes that research in this area has lost much of its original impetus: In the beginning, i.e. starting with Zadeh's pioneering 1983 publication, there was rapid progress; by 1988, the main approaches had already been proposed. Following that came the current period, which is marked by advances in applications like reasoning with fuzzy quantifiers, multi-criteria decision making, fuzzy querying of databases or linguistic data summarization. However, there was a slow-down on the methodical side and no substantial progress in the modelling of fuzzy quantifiers. The seeming stability of this area, and the spread of fuzzy quantifiers into experimental applications, does not imply that the research efforts have resulted in a theoretically satisfactory analysis. The popular methods still show the same problematic behaviour and the few recent proposals for interpretation methods [33–35, 165] are also linguistically inconsistent [3, 35]. This general slow-down of progress in the theoretical analysis of fuzzy quantification hints at a wrong way of thinking about fuzzy quantification. A novel framework which targets at a reliable semantics for fuzzy quantifiers will need to reconcile fuzzy set theory and linguistics. Zadeh himself concedes that his approach is 'different' from (i.e. incompatible with) the linguistic model [197, p. 149]. However, it appears that few people have reflected on the consequences of this departure from the competent scientific discipline. In fact, it is not even possible to view the two-valued quantifiers usually considered by linguists [7, 9, 92] as a special case of Zadeh's fuzzy linguistic quantifiers. In order to cover a substantial class of linguistic quantifiers, the novel framework must give up the use of first-order representations and the reduction to cardinality comparisons. The known approaches to fuzzy quantification are motivated by concepts of fuzzy set theory; they have not been
designed and systematically evaluated from a linguistic perspective. What the numerous counterexamples really demonstrate is that linguistic plausibility may not simply be assumed in good faith. Quite the opposite: Linguistic adequacy must be central to the very design of the interpretation methods, and it must be explicitly guaranteed by appropriate means. In this monograph, then, we will be concerned with the development of a new framework for fuzzy quantification which is much closer to linguistics. Special attention will be paid to the formalization of criteria which capture the intuitive expectations on meaningful interpretations of quantifiers. Based on this analysis of the essential plausibility criteria, we can then formulate a theory of natural language quantification which rests on an axiomatic foundation.
2 A Framework for Fuzzy Quantification
2.1 Motivation and Chapter Overview

The main objective of this work is that of developing a solution to the modelling problem which can then be elaborated to a comprehensive theory of fuzzy quantification. In this chapter, we will be concerned with the most fundamental aspects of the modelling problem, i.e. how can linguistic quantifiers be specified in a straightforward way, and how can a matching fuzzy quantifier be determined starting from such a description? In devising a systematic solution to these problems, semantical plausibility must be the central concern. As shown in the introduction, the traditional framework for fuzzy quantification fails in this respect, and this can be attributed in part to the basic representations µQ which are too restrictive and inhomogeneous. Thus, linguistic concerns will affect all components of a solution to the modelling problem. The novel framework for fuzzy quantification to be presented in this chapter should be attractive both from the linguistic and the fuzzy sets perspective. The framework should be broad enough to cover all phenomena routinely treated by the linguistic theory of quantification [7, 9, 92], and it should also be structurally compatible with the linguistic analysis. The framework must then extend the established linguistic notions towards approximate quantifiers and fuzzy arguments. It is hoped that the structural affinity to the linguistic analysis will allow a transfer of linguistic concepts for characterizing precise quantifiers which might prove useful for describing fuzzy quantifiers as well. Recalling the components of such a framework identified in the introduction, the envisioned skeleton for a theory of fuzzy quantification must fix the range of quantificational phenomena to be modelled, propose a system of specifications (descriptions) and operations (computational quantifiers) for the considered NL quantifiers, and finally establish the correspondence between NL quantifiers (as expressed by the proposed specifications) and matching computational quantifiers. In the following, we will explain the linguistic model of NL quantification that the novel framework is supposed to embed. To this end, we will first
review some milestones of research into NL quantification from the linguist's point of view. We will then introduce the concept of a (two-valued) generalized quantifier that has evolved from this research. We have already mentioned Russell's theory of descriptions which attempts an analysis of the singular definite quantifier "thesg " (as in "the author of Waverley") in terms of the logical quantifiers ∀ and ∃. However, the existence of more general linguistic quantifiers was neglected for a long time and a clear conception of the semantical structure of NL quantification was also missing. Thus, R. Montague's discoveries on the proper analysis of noun phrases in (a fragment of) English [115], which constitute the natural locus of quantification, were nothing less than a linguistic sensation. Based on a powerful logical machinery, his intensional logic IL,¹ Montague demonstrated that a compositional treatment of natural language quantifiers in conformance with Frege's principle is indeed possible (i.e. the meaning of a complex expression can be constructed from the meaning of its subexpressions). Montague was mainly interested in the syntactico-semantical structure of noun phrases; considering a small number of classical quantifiers was sufficient to demonstrate the points he wanted to make. A comprehensive treatment of NL quantifiers has to account for a larger repository, however. Montague's ground-breaking work encouraged further research into various directions which also covered the analysis of a broader class of quantifiers beyond the simple examples "all" and "some". The linguistic discourse about quantifiers was also influenced by the development of logical systems that admit non-linear, 'branching' modes of quantification, in which the quantifiers operate independently of each other.² Hintikka [76] initiated a debate among linguists and philosophers of language claiming that branching quantification is required for the semantic description of natural languages. The following example due to Hintikka [76, p. 342] is intended to paraphrase a Henkin-like quantificational structure: "Every writer likes a book of his almost as much as every critic dislikes some book he has reviewed."
Barwise [6] presented a more conclusive example involving only two quantifiers: “Most philosophers and most linguists agree with each other about branching quantification” [6, p. 60].
For that purpose, Barwise needed generalized quantifiers like "most" because the example collapses to linear quantification for the regular choices ∃ and ∀.

¹ A variant of intensional type theory which supports the λ-operator, operators for forming the extension and intension of given expressions, and modal tense operators.
² See e.g. Henkin [74] and Walkoe [166] for systems of (two-valued) logic that support partially ordered structures of quantifiers. We shall return to the issue of branching quantification on p. 82 below and in Chap. 12, which is exclusively devoted to the issue.
Together with R. Cooper, Barwise then prepared a seminal work on such generalized quantifiers, which established the modern linguistic analysis of quantification, i.e. the Theory of Generalized Quantifiers (TGQ). Important contributions to TGQ were also made by J. van Benthem [9, 10], E. Keenan, J. Stavi and L. Moss [91, 92] and many others. The theory of generalized quantifiers evolved rapidly, mainly during the 1980's, and its body of results on linguistic quantification still represents the linguistic consensus on the matter. TGQ rests on a simple but expressive model of (generalized) quantifiers, which establishes a uniform representation for all kinds of precise quantifiers including unrestricted and restricted quantification, multi-place quantifiers, composite quantifiers, as well as quantitative and non-quantitative examples. There are several (equivalent) ways in which the basic concept of a two-valued generalized quantifier can be introduced in formal terms. The following definition fits best into the later framework; it will be related to alternative options of defining generalized quantifiers in the notes.

Definition 2.1. A two-valued (generalized) quantifier on a base set E ≠ ∅ is a mapping Q : P(E)^n −→ 2, where n ∈ N is the arity (number of arguments) of Q, and 2 = {0, 1} denotes the set of crisp truth values.

Notes 2.2.
• Consequently, a two-valued quantifier Q assigns a crisp quantification result Q(Y1 , . . . , Yn ) ∈ 2 to each choice of crisp arguments Y1 , . . . , Yn ∈ P(E). The universe E can be any nonempty set of objects; unless otherwise stated, base sets of infinite cardinality will be permitted as well. In particular, the domains of quantification need not range over concrete objects. In typical applications of fuzzy quantifiers like fuzzy information retrieval [65] or multi-criteria decision making [179], for example, E becomes a set of search expressions, or the set of premises of a fuzzy IF-THEN rule, respectively. Let us now consider the definitional alternatives.
• As a matter of fact, it is not very common to define generalized quantifiers the way they were introduced here, i.e. as two-valued mappings. Many authors assume a different but equivalent definition of these generalized quantifiers, which are modelled as n-ary second-order relations. The two-valued quantifier Q : P(E)^n −→ 2 according to the above definition, and the corresponding generalized quantifier Q ∈ P(P(E)^n ) can obviously be transformed into each other, by utilizing the relationship (Y1 , . . . , Yn ) ∈ Q ⇔ Q(Y1 , . . . , Yn ) = 1 for all Y1 , . . . , Yn ∈ P(E). The decision to use the characteristic function rather than the relation itself as a representation of the quantifier anticipates the later use of membership functions in the fuzzy case.
• Other authors prefer a nested representation Q′ : P(E)^{n−1} −→ P(P(E)). This notation highlights the 'scope' argument of the quantifier, which accepts the interpretation of the verbal phrase (see note below on p. 59). Again, two-valued quantifiers Q : P(E)^n −→ 2 as defined by Def. 2.1, and mappings Q′ : P(E)^{n−1} −→ P(P(E)) are interdefinable, according to the relationship Yn ∈ Q′(Y1 , . . . , Yn−1 ) ⇔ Q(Y1 , . . . , Yn ) = 1 for all Y1 , . . . , Yn ∈ P(E). This style of expressing generalized quantifiers is usually found in work concerned with the relationship between syntactic analysis and semantical interpretation. In particular, the nested representation is adopted in Barwise and Cooper's foundational work on TGQ [7]. See [50, p. 228+] for an extensive discussion of the alternative perspectives on the basic notion of a generalized quantifier.
• The above definition of two-valued quantifiers includes the case of nullary quantifiers (n = 0). These can be identified with the constants 0 and 1. The decision to permit nullary quantifiers proved to be rather convenient in some cases. TGQ, however, rests on a narrower definition and usually restricts attention to quantifiers of arities n ≥ 1. This is justified from the perspective of natural language description because every NL quantifier must at least offer one slot, which accepts the denotation of the verb phrase (see Sect. 2.7 below for an extended discussion of nullary quantifiers).
• Finally it is common practice in TGQ to permit quantifiers to be undefined in certain cases. Specifically, definite quantifiers like "the" call for undefined interpretations or other formal devices, in order to account for the possible failure of presuppositions (for example, "the man" only denotes if there is exactly one man in the current domain or discourse context). For our present purposes, though, it is not necessary to incorporate undefined interpretations into the basic concept of a two-valued quantifier, because the issue will be resolved anyway once we proceed to a richer set of truth values. See p. 62 for alternative definitions of singular and plural "the", and Sect. 2.8 for more details on the issue of undefined interpretation.

The quantifiers that prevail in natural language are of the binary variety, and thus fit the pattern "Q Y1 's are Y2 's". We shall now introduce some basic examples of such two-place quantifiers. As usual, the symbol |•| will denote cardinality. The quantifiers will be written with a subscript E here which stands for the base set. However, we will usually drop this subscript when the base set E is understood.
Definition 2.3. Let E ≠ ∅ be some base set. We define
allE (Y1 , Y2 ) = 1 ⇔ Y1 ⊆ Y2
someE (Y1 , Y2 ) = 1 ⇔ Y1 ∩ Y2 ≠ ∅
noE (Y1 , Y2 ) = 1 ⇔ Y1 ∩ Y2 = ∅
at least kE (Y1 , Y2 ) = 1 ⇔ |Y1 ∩ Y2 | ≥ k
more than kE (Y1 , Y2 ) = 1 ⇔ |Y1 ∩ Y2 | > k
at most kE (Y1 , Y2 ) = 1 ⇔ |Y1 ∩ Y2 | ≤ k
less than kE (Y1 , Y2 ) = 1 ⇔ |Y1 ∩ Y2 | < k
exactly kE (Y1 , Y2 ) = 1 ⇔ |Y1 ∩ Y2 | = k
all except kE (Y1 , Y2 ) = 1 ⇔ |Y1 \ Y2 | = k
between r and sE (Y1 , Y2 ) = 1 ⇔ r ≤ |Y1 ∩ Y2 | ≤ s
for all Y1 , Y2 ∈ P(E), k, r, s ∈ N.
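Read computationally, the quantifiers of Definition 2.3 are simply Boolean predicates on pairs of crisp sets. The following minimal Python sketch (an added illustration, not part of the original text) implements a few of them and anticipates the small example worked out in Notes 2.4 below:

def all_(y1, y2):           return int(y1 <= y2)              # Y1 is a subset of Y2
def some(y1, y2):           return int(len(y1 & y2) > 0)
def at_least(k, y1, y2):    return int(len(y1 & y2) >= k)
def all_except(k, y1, y2):  return int(len(y1 - y2) == k)

# assumed toy arguments (the same ones used in Notes 2.4)
men, married = {"John", "Lucas"}, {"Mary", "Lucas"}
print(some(men, married), all_(men, married))   # 1 0
print(at_least(2, men, married))                # 0, since |men ∩ married| = 1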
Notes 2.4.
• To give an example of how expressions involving these two-valued quantifiers can be evaluated, suppose that E = {John, Lucas, Mary} is a set of persons, men = {John, Lucas} ∈ P(E) is the set of men in E, and married = {Mary, Lucas} is the set of those persons in E who are married. Then by the above definitions,
some(men, married) = some({John, Lucas}, {Mary, Lucas}) = 1 ,
but
all(men, married) = all({John, Lucas}, {Mary, Lucas}) = 0 .
• Natural language quantifiers have a dedicated argument slot occupied by the interpretation of the verbal phrase, e.g. Y2 in “All Y1 ’s are Y2 ’s”, or “sleep” in “All men sleep”. As remarked in the introduction, this particular argument will be called the scope of the quantifier. The scope is the argument exposed in the nested representation of quantifiers presented on p. 58. In the “flat” representation adopted here, the scope will be identified with the n-th (last) argument of an n-place quantifier by convention. For example, in all(Y1 , Y2 ), the second argument is the scope of the quantifier. The first argument of a two-place quantifier will again be called its restriction. Of course, it is also possible to fit the usual logical quantifiers ∀ and ∃ into the TGQ framework, and express these as generalized quantifiers.
We then obtain the apparent definitions,
∀(Y ) = 1 ⇔ Y = E    and    ∃(Y ) = 1 ⇔ Y ≠ ∅    (2.1)
for all Y ∈ P(E). It is instructive to consider the relationship between unary ∀ and the two-place quantifier "all", in order to clarify the distinction between the restricted and unrestricted use of NL quantifiers that was already mentioned in the introduction:
• In the case that the NL quantifier, which is implicitly two-place, is employed in a quantifying expression which involves two overt arguments, this corresponds to the restricted use of the quantifier. "All bachelors are men", for example, finds an interpretation in terms of a two-place quantification all(bachelors, men). The restricted use of the quantifier corresponds to the natural way in which quantifiers are applied in NL.
• In some cases, it is also possible to model quantifying NL statements by one-place quantification, although the involved NL quantifier regularly takes two arguments. This type of quantification is called unrestricted because the full domain E is substituted for the unspecified first argument, thus filling in the restriction of the quantifier. The logical quantifier ∀ : P(E) −→ 2 for example, which models the unrestricted use of the NL quantifier "all", can be expressed in terms of two-place all by ∀(Y ) = all(E, Y ) for all Y ∈ P(E).
The example illustrates how the full domain E provides the missing restriction (first argument) required by the two-place base quantifier. In other words, the unrestricted quantification in ∀(Y ) means that "All elements of the domain are Y " or simply, "All things are Y ". Clearly the unrestricted use, which is ubiquitous in predicate logic, is of minor importance to the description of natural language. We will see later in Sect. 6.7, p. 171 that two-place, restricted quantification can be reduced to the unrestricted use of quantifiers in certain cases. However, such reduction is only possible for certain types of quantifiers, and it does not generalize to the fuzzy case. Apart from the above examples of absolute quantifiers which depend on absolute counts, and quantifiers of exception like "all except k", there are many other types of NL quantifiers that deserve interest.³ The most prominent class is certainly that of proportional quantifiers, which depend on the relative share (or ratio) of Y1 's that are Y2 's. Assuming for convenience that the base set E be finite, we can introduce some generic examples from which a broad range of NL quantifiers can be derived.
³ For a more extensive discussion of absolute quantifiers and quantifiers of exception also covering implementation issues, see Sect. 11.8.
Definition 2.5. Let E ≠ ∅ be some base set. We define
[rate ≥ r](Y1 , Y2 ) = 1 ⇔ |Y1 ∩ Y2 | ≥ r |Y1 |
[rate > r](Y1 , Y2 ) = 1 ⇔ |Y1 ∩ Y2 | > r |Y1 |
[rate ≤ r](Y1 , Y2 ) = 1 ⇔ |Y1 ∩ Y2 | ≤ r |Y1 |
[rate < r](Y1 , Y2 ) = 1 ⇔ |Y1 ∩ Y2 | < r |Y1 |
[rate = r](Y1 , Y2 ) = 1 ⇔ |Y1 ∩ Y2 | = r |Y1 |
[r1 ≤ rate ≤ r2 ](Y1 , Y2 ) = 1 ⇔ r1 |Y1 | ≤ |Y1 ∩ Y2 | ≤ r2 |Y1 |
for r, r1 , r2 ∈ I, Y1 , Y2 ∈ P(E). In terms of these quantifiers, a statement like "At least 70 percent of the Y1 's are Y2 's" can now be modelled by
at least 70 percent(Y1 , Y2 ) = [rate ≥ 0.7](Y1 , Y2 ) ,
and "More than 60 percent of the Y1 's are Y2 's" can be modelled by
more than 60 percent(Y1 , Y2 ) = [rate > 0.6](Y1 , Y2 ) .
In particular, the quantifier "most" becomes
most = [rate > 0.5] .    (2.2)
This definition of “most” covers its technical sense, as in “Most Americans voted for Bush”, where one vote can result in an all-or-nothing decision. In other contexts, a gradual modelling would be more appropriate. By instantiating the generic quantifiers [rate ≤ r] and [rate < r] in the apparent way, we can also cover natural language quantifiers of the type “at most r percent” and “less than r percent”. Similarly, a statement like “Exactly 10 percent of the Y1 ’s are Y2 ’s” can now be interpreted as exactly 10 percent(Y1 , Y2 ) = [rate = 0.1](Y1 , Y2 ) . Finally, the parametric quantifier [r1 ≤ rate ≤ r2 ] is useful for modelling a statement like “Between 10 and 30 percent of the Y1 ’s are Y2 ’s”, which then becomes between 10 and 30 percent(Y1 , Y2 ) = [0.1 ≤ rate ≤ 0.3](Y1 , Y2 ) . A more detailed analysis of proportional quantifiers including approximate examples is given below in Sect. 11.9. Next we shall discuss a couple of further quantifiers taken from different linguistic classes: quantifiers of comparison, definite quantifiers, proper names (which can naturally be modelled through quantification), composite quantifiers, and others. As to quantifiers of comparison, consider a statement like
“More Y1 ’s than Y2 ’s are Y3 ’s”. The involved quantifier “more than” can be formalized as more than(Y1 , Y2 , Y2 ) = 1 ⇔ |Y1 ∩ Y3 | > |Y2 ∩ Y3 | . The quantifier “more than” is representative of a class of three-place comparison quantifiers known as cardinal comparatives [92, p. 305]. Despite their obvious practical utility, these quantifiers have attracted few interest in existing work on fuzzy quantification which focussed on the absolute and proportional types.4 In this work we will pay some more attention to these quantifiers. An extensive discussion of cardinal comparatives including implementation details is presented in Sect. 11.10. Now turning to the definite type of quantifiers, we must discern the singular and plural usage, and thus introduce different quantifiers thesg and thepl in order to model “The Y1 is Y2 ” and “The Y1 ’s are Y2 ’s”, respectively. We can try a Russellian analysis and force these quantifiers into the two-valued framework, thesg (Y1 , Y2 ) = 1 ⇔ |Y1 | = 1 ∧ Y1 ⊆ Y2 thepl (Y1 , Y2 ) = 1 ⇔ |Y1 | > 1 ∧ Y1 ⊆ Y2 , for all Y1 , Y2 ∈ P(E). As remarked above, it is this type of quantifiers which motivates the incorporation of undefined interpretations, in order to better express the presupposition that Y1 be a singleton (singular “the”) or contain at least two elements (plural “the”). Denoting ‘undefined’ by ↑, the quantifiers would typically be written as 1 : |Y1 | = 1 ∧ Y1 ⊆ Y2 thesg (Y1 , Y2 ) = 0 : |Y1 | = 1 ∧ Y2 ⊆ Y2 ↑ : |Y1 | = 1 1 : |Y1 | > 1 ∧ Y1 ⊆ Y2 thepl (Y1 , Y2 ) = 0 : |Y1 | > 1 ∧ Y1 ⊆ Y2 ↑ : |Y1 | ≤ 1 This kind of modelling is easily fitted to the framework for fuzzy quantification developed below, by representing ‘↑’ as a third truth value 12 , which then means ‘undefined’ or ‘undecided’. See Barwise and Cooper [7, p. 171] for motivation of a three-valued model in the framework of TGQ. Now advancing to proper names, let us first consider an example which illustrates why proper names can be conceived as quantifiers in the first place. Hence suppose that the given domain E contains some person, say “George”, that we are interested in. We can then model NL statements of the type “George is Y ” as a special case of unary quantification, by resorting to the proper name quantifier 4
⁴ With the notable exception of Díaz-Hermida et al., whose recent classification of semi-fuzzy quantifiers lists a similar type [30].
A more detailed discussion of this construction and its generalization to the fuzzy case is presented in Sect. 3.3. Finally, it is often useful to assume that more complex types of NL expressions, which stem from certain syntactic constructions, are quantifiers as well. (Generally, this strategy will aid a compositional interpretation in the sense of Frege's principle.) Examples which involve such compound quantifiers are "All married Y1 's are Y2 's", which results from a construction known as 'adjectival restriction', and "Some Y1 's are Y2 's or Y3 's", which semantically expresses a union of argument sets. The interpretations of the corresponding quantifiers Q1 , Q2 are readily stated, viz
Q1 (Y1 , Y2 ) = all(Y1 ∩ married, Y2 )
Q2 (Y1 , Y2 , Y3 ) = some(Y1 , Y2 ∪ Y3 )
for all Y1 , Y2 , Y3 ∈ P(E), where married ∈ P(E) denotes the set of all married people. As witnessed by the above examples of two-valued quantifiers of various kinds, TGQ has indeed managed to devise a uniform representation, which is capable of expressing a broad range of extensional NL quantifiers. Within this formal framework, linguists and philosophers of language have developed concepts which describe the semantical characteristics of NL quantifiers and apparent relationships between these quantifiers, e.g. negation, antonyms, duality, monotonicity properties, symmetry, having extension, conservativity, adjectival restriction and others (see Gamut [50, pp. 223–256] for explanation; all of these concepts will also be introduced in Chap. 4 and Chap. 6 below). Due to its straightforward analysis of two-valued quantifiers, research in TGQ advanced at a rather fast pace and helped to clarify such questions as the classification of linguistic quantifiers in terms of their formal properties, the search for semantic universals that govern the possible meanings of linguistic terms (NPs), the investigation of algebraic properties like symmetry, circularity etc., and the issue of expressive power; see Gamut [50, pp. 223–236] for a survey. The development of TGQ has become an integral part of modern theories of NL semantics like DRT [87, 153] or DQL [38] and others, and can thus be said to represent some kind of linguistic consensus.⁵ Thus TGQ has had its successes and it was stimulating to linguistic research in many regards. But there are also some drawbacks. Obviously, TGQ is suited to statically represent the meaning of quantifying propositions. However, knowledge processing involving linguistic quantifiers is difficult on formal grounds because most quantifiers are not first-order definable. This means that an axiomatization of reasoning schemes for these quantifiers will necessarily be imperfect. In addition, the analysis of linguistic quantifiers expressed by noun phrases that was achieved by TGQ still needs refinement, because it only deals with non-referring terms.
⁵ Although there are rivaling views like the analysis of quantifiers in MultiNet [73].
In order to interpret referential terms and to resolve anaphora (e.g. anaphoric pronouns like "he"), an extension to some model of discourse is inevitable. The restriction to isolated NPs or sentences must then be given up in favour of larger units of discourse, or texts. And indeed, there was enormous interest in discourse representation theory [72, 87, 153] once the basic issues of non-referential NPs (extensional generalized quantifiers) had been clarified. Another peculiarity, which hinders the practical application of TGQ in certain cases, is its lack of vagueness modelling: neither is vagueness permitted in the arguments of a quantifier, nor is the quantifier itself allowed to be vague. This kind of idealization was certainly useful because it permitted research in TGQ to focus on the core issues and achieve rapid progress in these areas. The basic need for a treatment of vagueness is acknowledged by the proponents of TGQ, although they refrain from handling vagueness in their own work. For example, Barwise and Cooper have the following remark on the treatment of vagueness in their seminal publication on TGQ [7, p. 163]: "The fixed context assumption is our way of finessing the vagueness of nonlogical determiners. We think that a theory of vagueness like that given by Kamp" [88] "for other kinds of basic expressions could be superimposed on our theory. We do not do this here, to keep things manageable".
In other words, Barwise and Cooper concede the necessity of modelling vagueness, but they consider the vagueness of language a phenomenon that can be isolated from the core problem of analyzing linguistic quantification. The proposed separation of quantificational analysis and vagueness modelling is not admissible on logical grounds, though. This is because the vagueness of quantifiers and their arguments will necessarily manifest itself in the representation of linguistic quantifiers and in the concepts used for their classification. A generalized quantifier of TGQ, say, only accepts two-valued arguments and produces two-valued results in response to such arguments. Hence generalized quantifiers are neither suited for processing vague terms which form the quantifiers' arguments, nor are they capable of expressing vague or 'approximate' quantification. If the incorporation of vagueness calls for a different notion of generalized quantifiers, however, then the concepts building on this modelling construct, like antonym, dual, adjectival restriction, quantitativity, conservativity etc. must all be adapted as well and redefined in such a way that they make sense in the more general situation (i.e. in the presence of vagueness). Barwise and Cooper are not aware of this need because they apparently sympathize with the epistemic view of vagueness. On this view, the vagueness of language is essentially a matter of context dependence or lack of knowledge needed for interpretation (e.g. standards of comparison assumed by a speaker). If this view is correct, then the machinery used for semantical interpretation can be kept two-valued, provided that we have some hypothetical "rich context" which fixes the meaning of all context-dependent expressions including vague terms and quantifiers (i.e. under the 'fixed context assumption' of
Barwise and Cooper [7, p. 163] mentioned above). Other authors in TGQ, specifically Keenan and Stavi, deny the possibility of modelling such contexts. Because the standard of comparison required for interpreting "many" or "few" is not given, they conclude that such "value judgment dets" cannot be assigned a semantical value at all [92, p. 258]. Barwise and Cooper, by contrast, remain agnostic about the issue of actual context modelling and simply delegate the problem to specialized research. It has been mentioned that Barwise and Cooper also encourage an extension of their basic framework to a three-valued model in order to account for undefined interpretations [7, p. 171]. Thus, an incorporation of the supervaluationist or three-valued view of vagueness also seems to fit into their methodological picture. However, as shown in the introduction, the epistemic and supervaluationist or three-valued models do not really give a convincing account of Sorites paradoxes and thus, of vagueness. They appear like attempts to lay the 'demon' of vagueness in formal chains, rather than trying to profit from its power in the same way that natural languages do. The only model of vagueness which accomplishes this is the continuous-valued model adopted by the degree theories of vagueness. This is why linguistic vagueness will now be discussed in the fuzzy sets framework. It is hence assumed that fuzzy set theory, which is the best-known degree theory, will provide suitable methods for modelling vague quantification. It must be pointed out that the explicit modelling of vagueness will not eliminate the issue of context dependence. To avoid getting stuck in a treatment of context dependence when our goal is that of developing a theory of fuzzy quantification, we will join Barwise and Cooper in assuming a rich context which uniquely determines the meaning of all basic expressions like "tall", "young" (predicates) and "many", "few" (quantifiers), without analysing these contexts any further. In a degree-based setting, this means that "many", "tall" etc. now find a unique interpretation in terms of continuous membership grades. The resulting gradual model of a quantifier like "many" can no longer be represented by a two-valued mapping Q : P(E)^2 −→ 2. And generalized quantifiers are not capable of interpreting propositions involving vague predicates. For example, we cannot yet interpret "All Swedes are blonde", because the fuzzy set blonde ∈ P̃(E) is not an admissible argument of the generalized quantifier all : P(E)^2 −→ 2. We therefore need an extension of two-valued generalized quantifiers to fuzzy generalized quantifiers which are capable of expressing Type IV quantifications.
2.2 Fuzzy Quantifiers Let us now introduce the apparent extension of generalized quantifiers which gives a natural account of approximate quantifiers like “about ten” and makes the considered quantifiers applicable to fuzzy arguments like “tall” and “rich”.
The definition of generalized quantifiers suitable for Type IV quantifications which we will now state gives a more realistic picture of linguistic quantifiers compared to the idealizations of TGQ.

Definition 2.6. An n-ary fuzzy quantifier Q̃ on a base set E ≠ ∅ is a mapping Q̃ : P̃(E)^n −→ I.

Notes 2.7.
• A fuzzy quantifier Q̃ : P̃(E)^n −→ I thus assigns to each n-tuple of fuzzy argument sets X1 , . . . , Xn ∈ P̃(E) an interpretation Q̃(X1 , . . . , Xn ) ∈ I, which is allowed to be gradual.
• The above definition corresponds to Zadeh's [199, pp. 756] alternative view of fuzzy quantifiers as fuzzy second-order predicates. These are modelled as mappings in order to simplify notation. It should be pointed out, though, that unlike Zadeh, we explicitly permit arbitrary arities n ∈ N. The proposal also generalizes Thiele's [158] notion of a (unary) 'general fuzzy quantifier' Q : P̃(E) −→ I, which it extends beyond the simplest case n = 1.
• The notion of fuzzy quantifiers assumed here was first introduced in [53, p. 6]. As mentioned in the introduction, the former term fuzzy determiner has now been changed to 'fuzzy quantifier', which probably sounds more familiar to non-linguists.
• Given crisp arguments Y1 , . . . , Yn ∈ P(E) and a fuzzy quantifier Q̃ : P̃(E)^n −→ I, the quantifying expression Q̃(Y1 , . . . , Yn ) should be well-defined as well. Therefore ordinary subsets will be viewed as a special case of fuzzy subsets, i.e. it is understood that P(E) ⊆ P̃(E), where P̃(E) is again the set of fuzzy subsets of E. Note that this subsumption relationship does not hold if one identifies fuzzy subsets and their membership functions, i.e. if one stipulates that P̃(E) = I^E , where I^E denotes the set of mappings f : E −→ I. In this case, one needs transformations from a crisp subset A ⊆ E to its characteristic function χA ∈ 2^E ⊆ I^E . In order to avoid such transformations, this monograph will not identify fuzzy sets and their membership functions.

As opposed to two-valued quantifiers, fuzzy quantifiers accept fuzzy input and therefore render it possible to evaluate quantifying statements like "Q̃ young people are rich". This amounts to the computation of the quantification result Q̃(young, rich) ∈ I, where young, rich ∈ P̃(E) are the fuzzy subsets of "young" or "rich" people, respectively. In addition, the outputs of fuzzy quantifiers may assume intermediate results in the unit range and are no longer tied to the crisp choices {0, 1}. Because fuzzy quantifiers can express all possible shades of quantification results, they achieve a more natural modelling of the borderline area of an approximate quantifier.
Let us now consider examples of fuzzy quantifiers. We will stipulate some ad-hoc definitions for the unary quantifiers ∀̃, ∃̃ : P̃(E) −→ I and the binary quantifiers ãll, s̃ome : P̃(E)^2 −→ I, viz
∀̃(X) = inf{µX (e) : e ∈ E}    (2.3)
∃̃(X) = sup{µX (e) : e ∈ E}    (2.4)
ãll(X1 , X2 ) = inf{max(1 − µX1 (e), µX2 (e)) : e ∈ E}    (2.5)
s̃ome(X1 , X2 ) = sup{min(µX1 (e), µX2 (e)) : e ∈ E}    (2.6)
for all X, X1 , X2 ∈ P̃(E). These simple quantifiers are readily employed for modelling fuzzy NL quantification. Hence let us assume for the moment that
ãll provides the correct interpretation of the natural language quantifier "all". A statement like "All young are rich" can then be evaluated by computing the quantification result of ãll(young, rich), where young, rich ∈ P̃(E) are the fuzzy denotations of "young" and "rich". Hence let E = {Joan, Lucas, Mary} and suppose that young, rich ∈ P̃(E) are defined by
µyoung (e) = 1 for e = Joan,  0.7 for e = Lucas,  0.2 for e = Mary
µrich (e) = 0.9 for e = Joan,  0.8 for e = Lucas,  0.6 for e = Mary
for all e ∈ E. In Zadeh's succinct notation for fuzzy sets, i.e.
X = Σ_{e∈E} f(e)/e
symbolizes the fuzzy set X ∈ P̃(E) with µX = f , these fuzzy sets can be expressed thus:
young = 1/Joan + 0.7/Lucas + 0.2/Mary
rich = 0.9/Joan + 0.8/Lucas + 0.6/Mary .
Referring to this choice of fuzzy arguments, the gradual quantification result of "All young are rich" then becomes
ãll(young, rich) = min{max(1 − 1, 0.9), max(1 − 0.7, 0.8), max(1 − 0.2, 0.6)}
                 = min{0.9, 0.8, 0.8} = 0.8 .
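The same computation is easy to reproduce mechanically. A minimal Python sketch (added for illustration; fuzzy sets are represented as plain dictionaries of membership grades over a shared finite domain):

def fz_all(x1, x2):
    # Fuzzy "all" according to (2.5): infimum over E of max(1 - mu_X1, mu_X2).
    return min(max(1.0 - x1[e], x2[e]) for e in x1)

def fz_some(x1, x2):
    # Fuzzy "some" according to (2.6): supremum over E of min(mu_X1, mu_X2).
    return max(min(x1[e], x2[e]) for e in x1)

young = {"Joan": 1.0, "Lucas": 0.7, "Mary": 0.2}
rich  = {"Joan": 0.9, "Lucas": 0.8, "Mary": 0.6}
print(fz_all(young, rich))    # 0.8, as computed above
print(fz_some(young, rich))   # 0.9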
2.3 The Dilemma of Fuzzy Quantifiers

The proposed notion of fuzzy quantifiers establishes a class of operators which incorporate the fuzziness of language. We have thus solved one part of the modelling problem by introducing the required operational quantifiers. The modelling problem has another facet, though. We still need to elucidate the relationship between the proposed modelling constructs (fuzzy quantifiers) and the NL quantifiers of interest. For crisp subsets of E, we have clear intuitions about the proper definition of linguistic quantifiers. For example, the definition of ∀ : P(E) −→ 2 in (2.1) is certainly uncontroversial. When turning to the fuzzy case, however, plenty of possible options enter the scene (to mention only the wealth of conceivable fuzzy conjunctions that exist even in the simple propositional case). Thus, how can we (a) determine a well-motivated choice of fuzzy quantifier which properly models a given NL quantifier and (b) defend this choice against all possible alternatives, which might appear equally plausible at first sight? Hence let us return to the simple universal quantifier ∀, and make an attempt to extrapolate its meaning to the fuzzy case. Equation (2.3) above introduces a fuzzy analogue of this quantifier. However, what about the following alternative stipulations, which only constitute a small sample of the possible options:
∀′(X) = ( Σ_{e∈E} µX (e) ) / |E|
∀″(X) = 1 if X = E, and 0 otherwise
∀‴(X) = inf{ Π_{i=1}^{m} µX (ai ) : A = {a1 , . . . , am } ∈ P(E) finite, ai ≠ aj if i ≠ j } .
Some of these quantifiers apparently do not qualify as plausible models of ∀ in the fuzzy case, while others are not so easy to judge. The first alternative ∀′, for example, can be rejected right away, because it differs from two-valued ∀ on crisp arguments. The second example also violates some intuitive postulates. For example, producing crisp interpretations only does not seem very natural for a fuzzy quantifier. Moreover, the fuzzy quantifier ∀″ shows an undesirable discontinuity in the membership grades µX (e) which makes it too brittle for applications. In addition, ∀″ does not conform to Thiele's analysis of fuzzy universal quantifiers, see [156–158] and Sect. 4.16 below. As to the third example, ∀‴ appears to be acceptable as a fuzzy model of universal quantification. In fact, both the original definition of ∀̃ and the alternative ∀‴ comply with Thiele's analysis, and must thus be considered admissible choices of fuzzy universal quantifiers. The examples demonstrate that for elementary quantifiers, some basic intuitions persist when admitting fuzzy arguments. However, these are not
necessarily strong enough to establish a unique preferred choice. The case-by-case verification that a proposed fuzzy quantifier Q̃ makes a plausible model of the given NL quantifier soon becomes too tedious when considering a larger number of quantifiers. Moreover, the manual procedure makes it very difficult for the stipulated correspondences to achieve the desired systematicity and consistency. To ensure coherence, such direct correspondence assertions should be avoided altogether. We need a general mechanism for establishing the desired correspondences, especially to treat unclear cases where we cannot justify a particular interpretation intellectually. The scarcity of intuitions concerning the proper modelling of NL quantifiers in the fuzzy case becomes evident once we move from the simplest example ∀ to other common quantifiers like "at least 10 percent". Given a finite base set E, we can readily define a corresponding two-valued quantifier at least 10 percent : P(E)^2 −→ 2 by at least 10 percent = [rate ≥ 0.1], i.e.
at least 10 percent(Y1 , Y2 ) = 1 if |Y1 ∩ Y2 | ≥ |Y1 |/10, and 0 otherwise    (2.7)
for all Y1 , Y2 ∈ P(E). The equation reveals the dependency of the quantifier on the cardinality |•| of crisp sets. Now let us try to define a corresponding fuzzy quantifier at least 10 percent : P̃(E)^2 −→ I. In this case the arguments X1 , X2 in at least 10 percent(X1 , X2 ) can be arbitrary fuzzy subsets X1 , X2 ∈ P̃(E). The familiar cardinality measure for ordinary sets, which made possible the above definition (2.7) of the two-valued quantifier, is not applicable to these fuzzy arguments, i.e. it cannot be used to define the fuzzy quantifier. And there is no reliable and uncontroversial notion of the cardinality of fuzzy sets which we can substitute for |•| in the fuzzy case. To demonstrate this, let us consider the Σ-count as a replacement for |•|. We then obtain the following model of "at least 10 percent",
at least 10 percent(X1 , X2 ) = 1 if Σ-Count(X1 ∩ X2 ) ≥ Σ-Count(X1 )/10, and 0 otherwise.
This proposal is representative of approaches to fuzzy quantification which simply replace the crisp cardinality in (2.7) with some generalized cardinality for fuzzy sets. This parallelism with the formula for crisp sets might look suggestive, but it does not guarantee plausible interpretations. The proposed interpretation of "at least 10 percent", for example, can show abrupt changes in response to slight variations of the membership grades. Moreover, the general criticism of the Σ-count approach exemplified by Fig. 1.3 also applies to the quantifier "at least 10 percent".
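The brittleness is easy to exhibit numerically. In the following minimal Python sketch (an added illustration; the ten-element domain and the membership grades are assumed), raising a single membership grade from 0.999 to 1.0 flips the Σ-count-based result from 0 to 1:

def sigma_count(x):
    # Sigma-count of a fuzzy set: the sum of its membership grades.
    return sum(x.values())

def at_least_10_percent(x1, x2):
    # Sigma-count rendering of "at least 10 percent" (crisp output).
    inter = {e: min(g, x2.get(e, 0.0)) for e, g in x1.items()}
    return 1.0 if sigma_count(inter) >= sigma_count(x1) / 10 else 0.0

x1 = {f"e{i}": 1.0 for i in range(10)}          # crisp restriction, |X1| = 10
print(at_least_10_percent(x1, {"e0": 0.999}))   # 0.0
print(at_least_10_percent(x1, {"e0": 1.0}))     # 1.0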
These difficulties in defining "at least 10 percent" for fuzzy arguments hint at an intriguing problem because the class of fuzzy quantifiers is certainly rich enough to contain the intended model – it is only that we cannot identify it. It appears that either we do not have the expressive power available which is necessary to model the quantifiers of interest, or that we do have the required expressive power, but cannot control the tool that offers it:
• The simple two-valued quantifiers of TGQ fail to give a natural account of approximate quantification. However, it is usually easy to define quantifiers which fit into the two-valued framework;
• The proposed fuzzy quantifiers achieve a natural modelling of a broad range of NL quantifiers including the approximate variety, but we are unable to locate the proper model within the available choices.
The dilemma of fuzzy quantifiers consists in this conflict between expressiveness and ease of specification. In order to identify a solution to this problem, let us recall the two ways in which fuzziness interacts with NL quantification. According to the classification of Liu and Kerre, fuzziness can enter into a quantifier either through its inputs (when supplied with fuzzy arguments), or it can be inherent to the approximate quantifier itself, which then produces gradual outputs even when the arguments are crisp. In order to capture both phenomena, the proposed fuzzy quantifiers had to account for (a) continuous quantification results (for modelling approximate quantifiers), and (b) the fuzziness of NL concepts inserted into the argument slots, i.e. the quantifiers had to accept fuzzy arguments. As witnessed by the precise quantifier "at least 10 percent" used to explain the dilemma, the difficulty of establishing correspondences is not an artifact introduced by approximate quantifiers: It is essentially connected to the treatment of fuzzy arguments. This observation suggests an obvious middle course: Why worry about the details of processing fuzzy arguments when it is only the precise description of an NL quantifier that is at issue? Although the framework for fuzzy quantification must ultimately assign a fuzzy quantifier to each NL quantifier in order to support Type IV quantifications, it seems methodically preferable to start from simplified specifications from which the fuzzy target quantifier will then be determined. We shall therefore separate the 'hard' part, that of handling fuzziness in the arguments, from the less complicated part, i.e. that of introducing a specification medium for gradual quantifications.
2.4 Semi-fuzzy Quantifiers In the last section, we were concerned with the basic conflict between expressive power and ease of specification which must be solved in the new framework for fuzzy quantification. Fuzzy quantifiers – the target models for quantifying NL expressions – turned out too complex to define the desired correspondence between NL quantifiers and their models in a straightforward
way. We shall therefore narrow the scope of our descriptions to quantification based on crisp arguments (as is done in TGQ) but resulting in gradual outputs (similar to fuzzy quantifiers). To express such Type III quantifications, we introduce the following specification medium.

Definition 2.8. An n-ary semi-fuzzy quantifier on a base set E ≠ ∅ is a mapping Q : P(E)ⁿ −→ I.

Notes 2.9.
• Q thus assigns a gradual interpretation Q(Y1, . . . , Yn) ∈ I to each n-tuple of crisp subsets Y1, . . . , Yn ∈ P(E).
• Semi-fuzzy quantifiers (originally dubbed 'fuzzy pre-determiners') have been introduced in [53, p. 7]. The new term 'semi-fuzzy quantifier' is intended to sound more familiar to non-linguists.

The proposed semi-fuzzy quantifiers are half-way between two-valued quantifiers and fuzzy quantifiers because they have crisp input and fuzzy (gradual) output. By supporting quantification results in the continuous range I = [0, 1], semi-fuzzy quantifiers are also suited for modelling approximate quantification. However, semi-fuzzy quantifiers accept crisp arguments only. This eliminates the need to specify the intended responses for arbitrary fuzzy arguments, and thus avoids the difficulties of defining fuzzy quantifiers. The concept of semi-fuzzy quantifiers (i.e. crisp inputs, gradual quantifications) is rich enough to embed all two-valued quantifiers of TGQ, because the set of all mappings Q : P(E)ⁿ −→ I comprises the two-valued quantifiers Q : P(E)ⁿ −→ {0, 1}. Thus, the methodical goal of incorporating two-valued generalized quantifiers into the new class of operators has been achieved. For every precise NL quantifier, the known specification as a two-valued generalized quantifier will still work in the semi-fuzzy quantifier framework. In perspective, the vicinity to TGQ lets us profit from the existing body of knowledge on NL quantification that has already been gathered by linguists. The limitation to crisp inputs makes it easy to determine the cardinality of the involved arguments. Consequently, semi-fuzzy quantifiers can be expressed in terms of the cardinality of their arguments or Boolean combinations thereof. Consider "at least 10 percent", for example. The definition for crisp arguments is certainly uncontroversial, and it is easy to understand how the quantification "At least 10 percent of the Y1's are Y2's" must be expressed in terms of |Y1| and |Y1 ∩ Y2| given crisp arguments, as stated in (2.7). Thus, the familiar concept of crisp cardinality is sufficient for defining quantitative semi-fuzzy quantifiers. The specification in terms of semi-fuzzy quantifiers is uniform across quantifier types. For example, both absolute quantifiers like "about ten" and proportional quantifiers like "about ten percent" turn into binary quantifiers Q : P(E)² −→ I, which abstract from the common pattern "Q Y1's are Y2's" (see also examples below). Contrast this with existing approaches to fuzzy
quantification whose representations of quantifiers are tailored (and limited) to absolute and proportional quantifiers. Let us now consider some examples of semi-fuzzy quantifiers. First of all, the two-valued quantifiers introduced in Def. 2.3 and Def. 2.5, as well as the special cases presented above (i.e. "at least k", "all except k", "at least p percent" etc.), are all instances of semi-fuzzy quantifiers, because the notion of a semi-fuzzy quantifier embeds all quantifiers of TGQ. Hence let us turn to examples of semi-fuzzy quantifiers proper, i.e. approximate quantifiers like "almost all". There are proposals for defining many such quantifiers in the traditional framework, i.e. in terms of a membership function µQ : R+ −→ I or µQ : I −→ I. These membership functions are easily translated into corresponding semi-fuzzy quantifiers. To see what this conversion looks like in practice, consider the proportional quantifier "almost all". A possible model of "almost all" in terms of a membership function µalmost all has already been presented in (1.1). In order to define a corresponding semi-fuzzy quantifier, we only need to make explicit the argument structure which is suppressed in the membership function µalmost all. The semi-fuzzy quantifier almost all : P(E)² −→ I then becomes

   almost all(Y1, Y2) = µalmost all(|Y1 ∩ Y2| / |Y1|) if Y1 ≠ ∅, and 1 else   (2.8)

for all Y1, Y2 ∈ P(E). We can get a rough picture of how the quantifier behaves from Fig. 1.1, which displays the plot of the membership function. We let almost all(∅, Y) = 1 because all(∅, Y) = 1 as well, and "almost all" should express a weaker condition than "all". In the following we will define parametric quantifiers from which concrete instances such as "many", "almost all", "often", "almost everywhere" etc. can be derived. For the purpose of illustration, let us consider two parametrized examples.
abs many_{ρ,τ} : P(E)² −→ I   "Many X1's are X2 compared to an absolute expected value ρ ∈ R with sharpness parameter τ ∈ [0, ρ]"; and

rel many_{ρ,τ} : P(E)² −→ I   "The proportion of X1's that are X2 is large compared to an expected proportion ρ ∈ [0, 1] with sharpness parameter τ ∈ [0, ρ]".

Formally, we define these generic quantifiers as follows (assuming an arbitrary finite domain E ≠ ∅):

   abs many_{ρ,τ}(Y1, Y2) = S(|Y1 ∩ Y2|, ρ − τ, ρ + τ)

   rel many_{ρ,τ}(Y1, Y2) = 1 if |Y1| = 0, and S(|Y1 ∩ Y2| / |Y1|, ρ − τ, ρ + τ) if |Y1| ≠ 0,
for all Y1, Y2 ∈ P(E), where S again denotes Zadeh's S-function [197, pp. 183+], see also 1.2 above. The proposed definitions of abs many_{ρ,τ} and rel many_{ρ,τ} cover various meanings of "many" and similar quantifying expressions such as "often", "relatively often", etc. In particular, the proposed modelling of "almost all" in terms of equation (2.8) is just an instance of the general pattern captured by rel many and can now be expressed as almost all = rel many_{0.8, 0.1}. The choice of parameters obviously depends on the quantifier to be modelled, but it is also application-specific. (We will comment on the issue of context-dependence and the choice of plausible membership functions when discussing the proposed analysis of fuzzy quantification in Chap. 13). The following unparametrized quantifier will also be of interest:

   as many as possible : P(E)² −→ I

which might be interpreted as denoting "The relative share of X1's that are X2". We can give a (very rough) account of the quantifier by defining

   as many as possible(Y1, Y2) = 1 if |Y1| = 0, and |Y1 ∩ Y2| / |Y1| if |Y1| ≠ 0   (2.9)
for all Y1 , Y2 ∈ P(E). Thus “the more, the better”. The above stipulation for empty Y1 , i.e. as many as possible(∅, Y ) = 1, is again motivated by the observation that intuitively, “all” poses a stronger condition, whence as many as possible(∅, Y ) ≥ all(∅, Y ) = 1. We shall see applications of the quantifier later in the book, which demonstrate that the model, albeit simplistic, can still be useful. The chosen interpretation of “as many as possible” corresponds to Yager’s ‘pure averaging’ quantifier Qmean described in [179, p. 187]. The above examples of semi-fuzzy quantifiers demonstrate that the concept is indeed useful for specifying natural language quantifiers. These quantifiers are simple functions of crisp cardinalities and the relationship between the arguments of the quantifier and the resulting interpretation is easily grasped. Although the specification of a quantifier remains context-dependent (just as in natural language), the simplicity of semi-fuzzy quantifiers facilitates the identification of a plausible model. Because semi-fuzzy quantifiers show a clear input-output behaviour, a constructive discussion about the proper interpretation of a quantifier in the application of interest becomes possible. As opposed to the traditional µQ -based representations, semi-fuzzy quantifiers can express a wealth of examples including quantifiers of exception, cardinal comparatives and other quantifiers of linguistic interest. This extended coverage is possible because the quantificational structure, i.e. the dependency of quantifiers on a particular combination of the arguments like the absolute
count, relative share, difference of cardinalities etc., is considered part of the specifications and thus explicitly encoded by each semi-fuzzy quantifier as an integral part of its definition. Existing approaches, by contrast, isolate this kind of argument structure from a skeleton specification (fuzzy number). The dependency on absolute counts, a ratio of cardinalities etc. must then be imposed by the interpretation mechanism. This difference in the assumed responsibility for argument structure makes semi-fuzzy quantifiers a universal specification medium for extensional quantifiers.
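To illustrate how such specifications look in executable form, the following Python sketch (our own illustration; the encoding of crisp arguments as Python sets and the concrete S-function parametrization are assumptions on our part) implements the semi-fuzzy quantifiers rel many, the instance almost all = rel many_{0.8, 0.1} of equation (2.8), and as many as possible of equation (2.9).

def S(x, a, b):
    # One common parametrization of Zadeh's S-function: a smooth ramp from 0 (x <= a) to 1 (x >= b)
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    m = (a + b) / 2
    if x <= m:
        return 2 * ((x - a) / (b - a)) ** 2
    return 1 - 2 * ((x - b) / (b - a)) ** 2

def rel_many(Y1, Y2, rho, tau):
    # Semi-fuzzy quantifier rel_many_{rho,tau}: crisp arguments (Python sets), gradual result in [0, 1]
    if not Y1:
        return 1.0
    return S(len(Y1 & Y2) / len(Y1), rho - tau, rho + tau)

def almost_all(Y1, Y2):
    # 'Almost all' as the instance rel_many_{0.8, 0.1}, cf. equation (2.8)
    return rel_many(Y1, Y2, 0.8, 0.1)

def as_many_as_possible(Y1, Y2):
    # Equation (2.9): the relative share of Y1's that are Y2's, with result 1 for empty Y1
    return 1.0 if not Y1 else len(Y1 & Y2) / len(Y1)

Y1 = set(range(10))          # ten elements in the restriction
Y2 = set(range(8))           # eight of them also satisfy the scope
print(almost_all(Y1, Y2))            # 0.5, since the proportion 0.8 lies at the midpoint of the ramp
print(as_many_as_possible(Y1, Y2))   # 0.8

Because the arguments are crisp, the specification is a direct transcription of the defining formulas; no decision about fuzzy cardinalities has to be made.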
2.5 Quantifier Fuzzification Mechanisms

At this point, the proposed framework comprises fuzzy quantifiers and semi-fuzzy quantifiers. We need fuzzy quantifiers as target operators for interpreting natural language quantifiers, but they make a poor specification medium. Semi-fuzzy quantifiers, by contrast, are well-suited for specifying NL quantifiers because they only need to be defined for the clear cases. However, semi-fuzzy quantifiers are not suitable for interpreting Type IV quantifications and thus make a poor operational medium. For example, we cannot assign a meaning to "Almost all rich people are tall" solely based on the description of "almost all" as a semi-fuzzy quantifier. This suggests that semi-fuzzy and fuzzy quantifiers will serve complementary roles in a theory of fuzzy quantification. It is the responsibility of a model of fuzzy quantification to establish the relationship between specifications (semi-fuzzy quantifiers) and corresponding operations (fuzzy quantifiers). In formal terms, these models are defined as follows.

Definition 2.10. A quantifier fuzzification mechanism (QFM) F assigns to each semi-fuzzy quantifier Q : P(E)ⁿ −→ I a corresponding fuzzy quantifier F(Q) : P̃(E)ⁿ −→ I of the same arity n ∈ N and on the same base set E.⁶

QFMs close the gap between the specification/interpretation layers and thus complete the new framework for fuzzy quantification. The proposed framework now lets us specify the intended quantifier Q : P(E)ⁿ −→ I by describing its behaviour for crisp arguments, determine the matching operation F(Q), and then apply this operation to interpret Type IV quantifications by evaluating F(Q)(X1, . . . , Xn) for given arguments X1, . . . , Xn ∈ P̃(E). Consider universal quantification, for example. Starting from the two-valued quantifier ∀ : P(E) −→ 2 defined by (2.1) and a given QFM F, we automatically obtain the fuzzy analogue F(∀) : P̃(E) −→ I. Examples of prototypical QFMs which can be substituted for F will be presented in Chaps. 7–10 below. In all of these models, we obtain F(∀)(X) = ∀̃(X) = inf{µX(e) : e ∈ E}, i.e. the fuzzy universal quantifier coincides with the former proposal (2.3). The mechanism can
We avoid calling F a ‘function’ or ‘mapping’ because the collection of semi-fuzzy quantifiers might form a proper class.
also be applied to the semi-fuzzy quantifier for "at least 10 percent" defined in Def. 2.5. The problem of modelling "at least 10 percent" as a fuzzy quantifier is then given a unique answer relative to F, namely F(at least 10 percent). Given a plausible choice of F, we can directly fetch the desired model of "at least 10 percent", without ever having to think about the proper specification of the quantifier for fuzzy arguments. It should be remarked that the sole purpose of QFMs is that of introducing variables which range over arbitrary correspondence assignments. Thus, the definition of candidate models per se is separated from the subsequent task of characterizing those interpretations which are linguistically valid. Compared to the ad hoc procedure of defining interpretations on a case-by-case basis, the QFM approach eliminates the need to justify each individual choice of quantifier. This is because the fuzzification mechanism itself (as opposed to the instances of fuzzification) can now be subjected to a critical evaluation. Starting from the full class of unrestricted models, which comprises all meaningful examples but also the incoherent assignments, we can now formulate explicit plausibility criteria which constrain the admissible interpretations. This means that the proposed theoretical framework no longer needs a 'bona fide' assumption of linguistic adequacy. By contrast, the catalogue of formal semantic criteria will guarantee that the interpretations of individual quantifiers be restricted to the plausible choices.
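Programmatically, a QFM can be pictured as a higher-order function that maps a semi-fuzzy quantifier to a fuzzy quantifier. The sketch below is only meant to show this interface; the particular mechanism used (evaluating the semi-fuzzy quantifier on the 0.5-cuts of its fuzzy arguments) is a deliberately naive stand-in of our own and is not one of the plausible models developed later in the book.

def cut(X, alpha=0.5):
    # Crisp alpha-cut of a fuzzy set given as a dict of membership grades
    return {e for e, mu in X.items() if mu >= alpha}

def toy_qfm(Q):
    # A naive quantifier fuzzification mechanism F: it turns a semi-fuzzy quantifier Q
    # (defined on crisp sets) into a mapping that accepts fuzzy arguments
    def fuzzy_Q(*fuzzy_args):
        return Q(*(cut(X) for X in fuzzy_args))
    return fuzzy_Q

def almost_all(Y1, Y2):
    # A crude linear stand-in for the semi-fuzzy quantifier of equation (2.8)
    if not Y1:
        return 1.0
    p = len(Y1 & Y2) / len(Y1)
    return min(1.0, max(0.0, (p - 0.7) / 0.2))

rich = {"ann": 0.9, "bob": 0.6, "col": 0.2}   # invented membership grades
tall = {"ann": 0.8, "bob": 0.7, "col": 0.9}
F_almost_all = toy_qfm(almost_all)
print(F_almost_all(rich, tall))               # interprets "Almost all rich people are tall"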
2.6 The Quantification Framework Assumption

The proposed framework for fuzzy quantification comprises the triple of specifications (semi-fuzzy quantifiers), operations (fuzzy quantifiers) and correspondence assignments (QFMs), i.e. models of fuzzy quantification to which the processing of fuzzy arguments will be delegated. Let us now reflect on the commitments of the new framework. We will be interested in latent assumptions which decide on the expressiveness of the framework and on its theoretical limits. In fact, there is an implicit assumption which we silently made when introducing semi-fuzzy quantifiers and QFMs. The condition can be described succinctly in terms of the following construction of underlying semi-fuzzy quantifiers:

Definition 2.11. Let Q̃ : P̃(E)ⁿ −→ I be a fuzzy quantifier. The underlying semi-fuzzy quantifier U(Q̃) : P(E)ⁿ −→ I is defined by

   U(Q̃)(Y1, . . . , Yn) = Q̃(Y1, . . . , Yn) ,

for all n-tuples of crisp subsets Y1, . . . , Yn ∈ P(E). Hence U(Q̃) simply 'forgets' that Q̃ can be applied to fuzzy sets, and only considers its behaviour on crisp arguments.
Based on this concept, we can now express the following quantification framework assumption which is implied by the proposed framework:

Quantification framework assumption (QFA): If two base quantifiers of interest (i.e. NL quantifiers to be defined directly) have distinct interpretations Q̃ ≠ Q̃′ as fuzzy quantifiers, then they are already distinct on crisp arguments, i.e. U(Q̃) ≠ U(Q̃′).

The QFA condition ensures the applicability of the QFM framework because we can then represent Q̃, Q̃′ by Q = U(Q̃) and Q′ = U(Q̃′), without compromising the existence of a QFM F which takes Q to Q̃ = F(Q) and Q′ to Q̃′ = F(Q′). If the QFA is violated by Q̃ and Q̃′, however, then it is impossible to separate the quantifiers, because U(Q̃) = U(Q̃′) entails that for any QFM the same interpretation F(U(Q̃)) = F(U(Q̃′)) be assigned to both quantifiers. The QFA expresses the linguistic postulate (universal principle) that there is no pair of base quantifiers in any natural language which coincide on two-valued arguments, but differ for fuzzy arguments. It should be pointed out that the linguistic theory of quantification, TGQ, depends on the same assumption, because it restricts attention to two-valued arguments only. In doing this, it is assumed that crisp arguments will reveal sufficient structure to discuss the phenomenon of interest. Starting from its basic notion of a generalized quantifier, TGQ has investigated various aspects of NL quantification like semantic universals, algebraic properties, expressive power and logical definability, as well as purely linguistic matters like 'there-insertion' or 'negative polarity' items.⁷ The success of TGQ in describing these phenomena demonstrates that the relevant properties of NL quantifiers are indeed visible on the level of crisp arguments (i.e. of two-valued quantifiers and semi-fuzzy quantifiers). At first sight, the fundamental assumption underlying the quantification framework appears uncritical because it makes a very elementary requirement. However, the condition had to be restricted to 'base quantifiers' because it is indeed possible to construct a few examples which must be treated with different methods. Hence let us discuss the notion of base or 'simplex' quantifier which must be directly expressed in terms of semi-fuzzy quantifiers, i.e. as Q̃ = F(Q). If we treat all conceivable linguistic quantifiers as base quantifiers, then it is clear in advance that the QFA will be violated. This is because a continuous-valued logic cannot validate all axioms of Boolean algebra on formal grounds. Natural languages, however, permit us to express arbitrary Boolean combinations of a quantifier's arguments. Starting from a pair of Boolean expressions which are equivalent in the two-valued, but not in the fuzzy case, we can then construct a pair of quantifiers whose argument structure parallels these Boolean expressions. These quantifiers will coincide for crisp arguments, but
see e.g. [7] (universals), [9] (algebraic properties), [91] (expressiveness), [10] (definability) and [79] (linguistic issues).
not necessarily for fuzzy arguments, thus violating the QFA criterion. A simple example is Q̃(X) = ∀̃(X ∪̃ ¬̃X), or "Everything is X or not X", versus the constant Q̃′(X) = 1, which paraphrases a constantly true proposition like "Everything belongs to the base set", which by cylindrical extension (i.e. adding a vacuous variable) becomes a unary quantifier. The standard choice of fuzzy universal quantifier (infimum) and the standard set of connectives then results in Q̃ ≠ Q̃′, although U(Q̃) = U(Q̃′) = 1 coincide. The example demonstrates that Q̃(X) = ∀̃(X ∪̃ ¬̃X) is not a base quantifier, but rather a non-simplex quantifier whose interpretation must be constructed from the interpretations of available base quantifiers (in this case, from ∀̃ = F(∀)). Another instructive example of non-simplex quantifiers are the many-valued quantifiers of Rescher [137, p. 199], see also p. 24 in the introduction. By identifying all intermediate truth values z ∈ (0, 1) with the third truth value of Rescher's original definition, we extend Rescher's quantifiers ∃ᴵ, ∀ᵀ, Mᵀ and Mᴵ to unary fuzzy quantifiers defined for all fuzzy arguments. Rescher's quantifiers can also be realized linguistically, as shown by the following examples from ordinary English:
1. Many borderline-cases of bald men use hair restorers.
2. All clear cases of insanity are kept under strict observation.
3. All stationary patients are clear cases of dementia.
In order to demonstrate that Rescher's quantifiers are non-simplex, let us consider the quantifier ∃̃ᴵ : P̃(E) −→ I defined by ∃̃ᴵ(X) = 1 if µX(e) ∈ (0, 1) for some e ∈ E, and ∃̃ᴵ(X) = 0 otherwise. The Type IV quantification ∃̃ᴵ(P) then expresses the linguistic pattern "There are borderline cases of P's". In order to check the validity of the QFA criterion in this case, let us suppose that Q̃ = ∃̃ᴵ and the constantly false quantifier Q̃′(X) = 0, which corresponds to a false proposition like "The base set is empty", both be base quantifiers. We then observe that U(∃̃ᴵ) = U(Q̃′) = 0 is the constantly false quantifier because for crisp arguments, one generally finds no borderline cases. Hence, there are two distinct quantifiers Q̃ = ∃̃ᴵ and Q̃′ = 0 expressible in NL which coincide for crisp arguments. This demonstrates that ∃̃ᴵ is not a base quantifier, because its interpretation must be different from the constantly false quantifier. Linguists have postulated a universal principle of variety which states that 'trivial' or non-informative quantifiers like U(∃̃ᴵ) = 0 never occur as basic quantifiers of a natural language [9, p. 452]. This provides further evidence that ∃̃ᴵ must be considered a constructed, rather than simplex quantifier. Obviously, this does not mean that ∃̃ᴵ cannot be handled by the framework at all, it just means that it cannot be expressed directly in the form ∃̃ᴵ = F(Q). In order to reduce ∃̃ᴵ to the interpretation of a base quantifier, we simply decompose it into "There are R's", where the proposition R abbreviates "borderline case of P". Thus, "borderline case of" is now viewed as part
of the argument and no longer attached to the quantifier. Let h : I −→ I be the mapping defined by h(0) = h(1) = 0 and h(z) = 1 otherwise. Then R can be computed from P by applying the 'linguistic hedge' h, i.e. µR(e) = h(µP(e)) for all e ∈ E. We can then express ∃̃ᴵ as ∃̃ᴵ(P) = F(∃)(R), where ∃ is the usual existential quantifier. Generally speaking, it appears that quantifying constructions which explicitly refer to the fuzziness, vagueness or determinacy of their arguments violate the QFA and must thus be viewed as constructed quantifiers. In particular, linguistic hedges can only be applied to the arguments of a fuzzy quantifier but not to semi-fuzzy quantifiers because modifications of membership grades are meaningless for crisp arguments. Thus quantifiers constructed from hedging are not base quantifiers. However, hedging quantifiers like those of Rescher are pretty exotic cases and all common quantifiers like "most", "many" etc. can be treated as simplex.
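The QFA violation discussed above can be reproduced numerically. The following sketch (our own toy computation, using the standard connectives max and 1 − x and the infimum-based fuzzy universal quantifier mentioned in the text) compares Q̃(X) = ∀̃(X ∪̃ ¬̃X) with the constantly true quantifier Q̃′(X) = 1: both yield 1 on crisp arguments, but they differ on a genuinely fuzzy argument.

def forall_fuzzy(X):
    # Standard fuzzy universal quantifier: the infimum of the membership grades
    return min(X.values()) if X else 1.0

def q1(X):
    # Q~(X) = forall(X union not-X), with max-union and standard complement 1 - x
    return forall_fuzzy({e: max(mu, 1.0 - mu) for e, mu in X.items()})

def q2(X):
    # The constantly true quantifier Q~'(X) = 1
    return 1.0

crisp = {"a": 1.0, "b": 0.0}     # a crisp argument: grades 0 or 1 only
fuzzy = {"a": 0.5, "b": 1.0}     # a genuinely fuzzy argument

print(q1(crisp), q2(crisp))      # 1.0 1.0 -- the two quantifiers coincide on crisp sets
print(q1(fuzzy), q2(fuzzy))      # 0.5 1.0 -- but differ for fuzzy arguments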
2.7 A Note on Nullary Quantifiers

Before closing the chapter, we will discuss the status of nullary quantifiers (this section) and the treatment of undefined quantifications in the proposed framework (next section). The chosen definitions of semi-fuzzy and fuzzy quantifiers admit the case of quantifiers with arity n = 0. These nullary quantifiers are artificial constructs (or boundary cases of quantifiers) not observed in natural language. In fact, every NL quantifier must support at least one argument slot, which accepts the interpretation of the verb phrase. For a minimal example, consider "Joan sleeps", where "Joan" is mapped to a unary quantifier joan : P(E) −→ 2 and "sleep" denotes a set sleep ∈ P(E). The example thus translates into joan(sleep), which determines the resulting truth value from the known identity of "Joan" and the known set of sleeping people. This minimal example illustrates that the slot for the interpretation of the verb phrase is mandatory in natural language, i.e. we always have arities n ≥ 1. Nevertheless, it can be pretty useful for theoretical investigations to have nullary quantifiers available. For example, they naturally arise from repeated argument insertions, to be discussed in Sect. 4.10. Moreover, admitting nullary quantifiers will permit a succinct presentation of the axiom system for plausible models in Def. 3.28, which will refer to nullary quantifiers in its first and most fundamental axiom (Z-1). In order to elucidate the concept of nullary quantifiers and motivate the later notation for these quantifiers, we must recall some very basic mathematics. If A is some set and n ∈ N, then Aⁿ is commonly taken to denote the set of mappings from n = {0, . . . , n − 1} to A, i.e. Aⁿ = A^{0, ..., n−1}. Consequently, A⁰ = A^∅ = {∅}. A nullary semi-fuzzy quantifier Q : P(E)⁰ −→ I, i.e. Q : {∅} −→ I, is therefore uniquely determined by the value Q(∅) ∈ I which it assumes for the empty tuple, written as ∅. In other words, there is a one-to-one correspondence between nullary semi-fuzzy quantifiers Q : P(E)⁰ −→ I
and elements Q(∅) = y ∈ I in the unit interval. We can therefore speak of nullary quantifiers as 'constant quantifiers'. To give an example, the quantifier Q : {∅} −→ I defined by Q(∅) = 1/2 is a nullary semi-fuzzy quantifier, and true, false : {∅} −→ I, defined by true(∅) = 1 and false(∅) = 0, are of course also constant semi-fuzzy quantifiers. Similar considerations apply to nullary fuzzy quantifiers Q̃ : P̃(E)⁰ −→ I. In this case, P̃(E)⁰ = P̃(E)^∅ = {∅}, which again establishes a one-to-one correspondence between nullary or 'constant' fuzzy quantifiers Q̃ : {∅} −→ I, and elements Q̃(∅) ∈ I. In particular, the set I^(P(E)⁰) = I^{∅} of constant semi-fuzzy quantifiers over E coincides with the set of constant fuzzy quantifiers I^(P̃(E)⁰) = I^{∅}, for any considered base set E ≠ ∅.
2.8 A Note on Undefined Interpretation

In TGQ, the concept of two-valued quantifiers is often extended by admitting undefined interpretations. Thus quantifiers like the definite "the", already mentioned above, need not denote in certain cases. To achieve this kind of modelling, the definition of two-valued quantifiers could be weakened to partial (incompletely defined) mappings. Barwise and Cooper have suggested a three-valued modelling, where the 'undefined' or 'undecided' case is represented by a third truth value [7, p. 171]. In this book, the base concept of two-valued quantifiers has not been extended by undefined interpretations, though. One reason for this decision is that the formal apparatus of TGQ is fully developed only for total quantifiers. For example, van Benthem [9, 10] explicitly restricts attention to the simple case of totally defined quantifiers. Other authors like Keenan and Stavi [92, p. 277] or Hamm [69] avoid undefined interpretations altogether and even stick to the Russellian analysis of definite quantifiers. Barwise and Cooper [7, p. 171] suggest that Kleene's three-valued logic might provide the blueprint for a refinement to undefined interpretations, but they do not detail this proposal.⁸ From the perspective of fuzzy quantification, the refinement to partial or three-valued quantifiers is still too coarse. The proposed notions of semi-fuzzy and fuzzy quantifiers, by contrast, can model all possible shades of a quantification result between 0 (totally false) and 1 (fully true). In this continuous-valued setting, the 'undefined' or 'undecided' result can then be represented by the intermediate value 1/2. In the following chapters, all relevant concepts of TGQ will be generalized to semi-fuzzy and fuzzy quantifiers. The basic concept of a generalized quantifier (i.e. totally defined, two-valued quantifiers) will be sufficient to introduce these concepts, which will then be reformulated for the continuous-valued case. (As a by-product of this, the concepts of TGQ will also be fully developed for three-valued evaluations.)
Interestingly, all standard models of fuzzy quantification developed below will comply with this strategy, see p.156
To sum up, we only need two-valued quantifiers here for motivating continuous-valued abstractions, and the central concepts of semi-fuzzy and fuzzy quantifiers already account for three-valued interpretations. Consequently, the proposed analysis would not profit from the additional use of partial mappings as a second mechanism for expressing indeterminate quantification results.
2.9 Chapter Summary In the introduction, a framework for fuzzy quantification was described as comprising the following parts: A description of its scope, which fixes the range of considered quantifiers; a definition of operational target quantifiers; a definition of specifications for such quantifiers; and finally models of fuzzy quantification which connect the specifications of quantifiers to their operational forms. Noticing that the traditional framework for fuzzy quantification does not achieve a coherent interpretation for a broad class of NL quantifiers, the present chapter was concerned with the development of a new theoretical skeleton which avoids these problems. The novel framework was supposed to cover all quantifiers of linguistic relevance. Recalling the characteristics of linguistic quantifiers discussed in the introduction, this means that it had to support general multi-place quantifiers including non-monotonic and nonquantitative examples. The most useful point of departure for devising such a framework we considered the theory of generalized quantifiers (TGQ), which offers a suitable notion of (crisp) generalized quantifiers that only needs to be extended towards fuzzy quantification. To this end, we first defined a suitable notion of fuzzy quantifiers which support fuzzy arguments and gradual quantification results. These are the target operations used for interpreting quantifying NL expressions in the new framework. However, it is difficult to establish a correspondence between NL quantifiers and matching fuzzy quantifiers because the familiar crisp cardinality can not be used to define a fuzzy quantifier. In order to simplify the specification of the target quantifier, the framework therefore introduces the novel concept of a semi-fuzzy quantifier. Semi-fuzzy quantifiers support gradual quantification results, but they only target at Type III quantifications in the sense of Liu and Kerre [108, p. 2] where all arguments are crisp. The novel quantifiers offer a uniform representation which is independent of particular types of quantifiers. The absolute quantifiers “about ten” and the proportional quantifier “most”, for example, both map to two-place semi-fuzzy quantifiers 2 Q : P(E) −→ I. Semi-fuzzy quantifiers include the classical generalized quantifiers of TGQ and thus embed all quantifiers of linguistic relevance. Owing to the restriction to crisp arguments, semi-fuzzy quantifiers admit suggestive definitions in terms of crisp cardinalities. This is of crucial importance to their usefulness as a specification medium. By taking an intermediate course between two-valued
quantifiers (which are too weak) and fuzzy quantifiers (which are too expressive for specification purposes), semi-fuzzy quantifiers solve the ‘dilemma of fuzzy quantifers’ concerning the conflict between expressive power and ease of specification. It is the responsibility of a model of fuzzy quantification to describe the relationship between specifications (semi-fuzzy quantifiers) and the associated fuzzy quantifiers (target operations). The framework was therefore completed by introducing the concept of a quantifier fuzzification mechanism (QFM) which expresses these correspondences. The notion of QFM is very general and comprises all conceivable assignments of fuzzy quantifiers to the given semi-fuzzy quantifiers. Hence, it will be the main concern of fuzzy quantification theory to precisely describe those QFMs which achieve a plausible interpretation of linguistic quantifiers (and of course, it must also present concrete examples of such models). To identify the intended models, the theory must develop formal criteria for those interpretations of fuzzy quantifiers which are semantically conclusive. These criteria can be expressed as requirements on the admissible fuzzification mechanisms because all details of the mapping from specifications to target quantifiers are encapsulated by the QFMs. The proposed framework spans both the generalized quantifiers of TGQ and the Type IV quantifications of fuzzy set theory. Due to its vicinity to the linguistic theory, the later translation of the conceptual apparatus of TGQ to fuzzy quantifiers will become a straightforward task. As to the coverage of quantificational phenomena, the framework includes a variety of extensional quantifiers which are permitted to be multi-place and non-quantitative (i.e. not necessarily definable in terms of cardinalities). In addition, there is no a priory limitation to finite domains. When discussing the quantification framework assumption, we already met a few ‘exotic’ quantifiers which cannot be reduced to a semi-fuzzy quantifier because they then become meaningless. The prime example are Rescher’s quantifiers or more generally, complex quantifiers built from a hedging construction. In the following, we will discuss some other quantificational phenomena not yet covered by the proposed framework. It should be remarked in advance that these cases are also not handled by the linguistic theory and of course, they cannot be treated by existing work on fuzzy quantification. The necessary extensions to cover these cases are sometimes obvious from the usual treatment of these phenomena in logic and linguistics. However, we must concentrate on the core issues of fuzzy NL quantification in this work, and the phenomena not treated do not seem to be intrinsically connected to linguistic quantification as such. Firstly, the proposed framework does not account for higher-order quantifiers. As opposed to a first-order proposition which describes individuals, higher-order propositions refer to sets of individuals, sets of such sets etc. The corresponding higher-order logics, and even the full theory of finite types [1, 27, 143], were found to be valuable tools for linguistic description. To keep things simple, however, the current framework only formalizes first-
order quantification. Most linguistic quantifiers are first-order but there are also some examples of higher-order quantification. As remarked by Keenan and Stavi, these express in ‘higher-order’ NPs like “two sets of dishes” or “a pride of lions” [92, p. 256]. The basic framework further abstracts from so-called branching quantification, in which the linear sequence of nested quantifiers is replaced with a partial order. The corresponding branching quantifiers can then operate in parallel and independently of each other [74, 166]. As already mentioned, the relevance of branching quantification to the analysis of language has been a matter of linguistic debate [6, 76, 130, 167] and Barwise [6] made a convincing point that branching quantification is needed to describe certain reciprocal constructions. To support such cases, Chap. 12 will extend the basic framework by fuzzy branching quantifiers. This is also a first step towards fuzzy higher-order quantification because branching quantification, as noted by Barwise, is just a human language technique to conceal the use of certain higher-order constructions. The quantifiers of TGQ are extensional , i.e. they operate on sets of individuals (extensions) rather than ‘meanings’ or ‘intensions’ (usually represented as mappings from states of affairs into extensions), so there is no indexing by some parameter which corresponds to states of affairs. As pointed out by Barwise and Cooper [7, p. 203], they deliberately restricted themselves to an extensional fragment of English because they wanted to focus on the central issues. For similar reasons, the proposed framework for fuzzy quantification also assumes extensional quantifiers. The following characterization of extensional determiners due to Keenan and Stavi also generalizes to fuzzy quantifiers: “To say that a det d is extensional is to say, for example, that whenever the doc[t]ors and the lawyers are the same individuals then d doctors and d lawyers have the same properties, e.g., d doctors attended the meeting necessarily has the same truth value as d lawyers attended the meeting. Thus every is extensional, since if the doctors and the lawyers are the same then, every doctor attended iff every lawyer did” [92, p. 257].
A logic built 'on top' of these quantifiers⁹ will then be extensional as well, according to the usual 'principle of extensionality' [50, p. 5],

   χ ↔ χ′ |= ϕ ↔ [χ′/χ]ϕ

i.e. substitutivity of expressions with identical reference. Following Frege [48], we will discern the sense of an NL expression ('Sinn' in Frege's terms, i.e. meaning, intension) and its reference (or 'Bedeutung' in Frege's terms, i.e. extension). Quine [129] has shown that there are certain positions in linguistic expressions called 'opaque contexts' in which the substitution of coextensional terms is no longer admissible but rather alters the overall meaning. Thus
such a logic with generalized quantifiers has been developed by Barwise and Feferman [8, Ch. 2].
a complete model of linguistic quantification will likely have to deal with intensional arguments. And the quantifier itself might also be intensional, like “all alleged” [9, p. 448] or “an undisclosed number of” [92, p. 257]: “It is not hard to imagine a situation in which the doctors and the lawyers are the same and but [sic] an undisclosed number of lawyers attended the meeting is true and an undisclosed number of doctors attended the meeting is false. (Imagine a meeting of medical personnel. The chairman announces the number of doctors in attendance but not the number of lawyers)” [92,
p. 257]. This means that the principle of extensionality fails in this case. However, as stated in Gamut [50, p. 228], “there are not many really intensional NPs or determiners”. In addition, it is relatively clear how to integrate intensional features into a given system of logic. The standard method of accomplishing this was developed by S. Kripke [101,102], i.e. one must add a primitive notion of a possible state of affairs (or ‘possible world’ in modal logic), an accessibility relation on states, and define the semantics in terms of these states and their accessibility. It should be possible to incorporate intensional features into the proposed framework along the same lines. However, this direction will not be pursued further here in order to keep things simple. Finally the Theory of Generalized Quantifiers, and the proposed extensions to fuzzy quantification as well, specifically target at ‘count NPs’ [92, p. 256] or ‘discrete quantifiers’ [9, p. 448] like “a lot of” or “many” rather than so-called ‘mass quantifiers’ or ‘continuous quantifiers’ like “much” or “little”. However, mass quantifiers are important as well, especially for modelling temporal or spatial quantification. In this case, the underlying regions in time and space are possibly best modelled as atomless masses. Hence a comprehensive model will need to support some form of ‘mass quantification’. The framework provides a good starting point for developing a theory of mass quantification, because its core concepts are defined for base sets of infinite cardinality as well. It is not the goal of this book to develop a theory of fuzzy mass quantification, however. To this end, it would certainly be necessary to incorporate additional elements from the theory of fuzzy measures.
3 The Axiomatic Class of Plausible Models
3.1 Motivation and Chapter Overview

The proposed framework for fuzzy quantification rests on a division of responsibilities. In order to adequately model a given NL quantifier, we need
1. a precise description of the quantifier of interest in terms of a semi-fuzzy quantifier, which must be supplied by the system user or programmer;
2. a plausible translation of this specification into the corresponding target quantifier, which is under the responsibility of the fuzzification mechanism.
While the first aspect is considered application-specific (or even idiosyncratic) and not easily susceptible to systematic study, the second aspect of a meaning-preserving translation will be the central subject of our theory of fuzzy quantification. Basically, we expect a model of fuzzy quantification to be internally coherent and plausible from a linguistic perspective. The proposed QFMs, however, can represent all possible correspondences between specifications and interpretations, including both plausible and meaningless assignments. Thus, the theory of fuzzy quantification must further constrain the admissible choices of QFMs by making explicit the criteria for meaningful interpretations. In other words, the theory must identify the class of plausible models, i.e. of those QFMs which are semantically conclusive. In order to solve these problems with the desired generality, the pursued strategy will be essentially algebraic: Rather than making any claims on 'the meaning' of a natural language quantifier and its intended model as a fuzzy quantifier, we will assume that the important aspects of the meaning of a quantifier express in its observable behaviour and can thus be described in mathematical terms. We will then require that those aspects of the meaning of semi-fuzzy quantifiers captured by this formal description be preserved when applying the QFM F. Hence if P is a property of semi-fuzzy quantifiers, and P̃ is the corresponding property of fuzzy quantifiers, then we can demand that F preserve
P̃ in the sense that whenever P(Q) holds for a semi-fuzzy quantifier Q, we also have P̃(F(Q)), i.e. the corresponding property holds for the fuzzy quantifier F(Q) associated with Q. A plausible model of fuzzy quantification should preserve all properties of linguistic relevance. For example, if Q shows a certain monotonicity property, then F(Q) will also be monotonic. Moreover, a plausible model should also preserve important relationships between quantifiers. The prime examples are functional relationships between quantifiers which are established by certain constructions (like dualisation or negation). Hence if C is a construction on semi-fuzzy quantifiers which builds a new semi-fuzzy quantifier C(Q) given Q, and C̃ is the corresponding construction on fuzzy quantifiers, then we can require that F be compatible with the construction in the sense that we always have

   F(C(Q)) = C̃(F(Q)) ,

i.e. it does not matter whether we first apply the construction and then 'fuzzify' using F, or whether we first apply F and then perform the construction. Compatibility with such constructions corresponds to the well-known mathematical concept of a homomorphism (structure-preserving mapping). The particular properties of quantifiers, relationships and constructions to be considered will usually be adopted from the linguistic theory of natural language quantification, TGQ. The reformulation of these notions will usually be easy due to the similarity between semi-fuzzy or fuzzy quantifiers and the original two-valued quantifiers. The criteria that evolve from this procedure form a catalogue which is open to refinement: It can be developed incrementally and will not collapse entirely should individual criteria need to be altered. It is of course hoped that a careful analysis of meaningful interpretations and their formalization into axiomatic requirements will ultimately converge and form a stable body of criteria which identifies the class of plausible models. In this way, the research into semantical characteristics of NL quantifiers will make it possible to develop a theory of fuzzy quantification which rests on an axiomatic foundation. Postulating another ad-hoc rule for interpretation (i.e. mathematical formula, algorithm) will only achieve modest progress, by contrast – should the formula prove defective, not too much insight would have been gained. For this reason, we will clearly separate the theoretical analysis (What are plausible models?) and the later identification and implementation of conforming models in this monograph.
3.2 Correct Generalization

Perhaps the most elementary condition on a quantifier fuzzification mechanism is that it properly generalize the original semi-fuzzy quantifier. We can express this succinctly if we recall the notion of an underlying semi-fuzzy quantifier U(Q̃), which simply restricts the fuzzy quantifier Q̃ to crisp arguments,
see Def. 2.11. It is natural to assume that a model of fuzzy quantification satisfy U(F(Q)) = Q ,
(3.1)
for all choices of semi-fuzzy quantifier Q, which means that F(Q) properly generalizes Q in the sense that F(Q)(Y1, . . . , Yn) = Q(Y1, . . . , Yn) whenever all arguments are crisp, i.e. given that Y1, . . . , Yn ∈ P(E). The requirement of correct generalization is absolutely fundamental because it decides on the internal coherence of the QFM. The criterion expresses the 'downward compatibility' of the model with respect to the original specification in terms of crisp arguments. In other words, we need correct generalisation to properly set up the 'fuzzification pattern' (i.e. use of a fuzzification mechanism) which underlies the quantification framework [59, p. 419+]. Due to this enabling role for the proposed framework, the above equation (3.1) will constitute the first requirement imposed on meaningful models of fuzzy quantification. We should note in advance, however, that the corresponding criterion (Z-1) in the final axiom system (stated in Def. 3.28) will be restricted to nullary and unary quantifiers (n ≤ 1). This serves the purpose of keeping the axioms simple. When taken together, these axioms will of course entail the original criterion, and thus guarantee that equality (3.1) holds for quantifiers of arbitrary arities n ∈ N. Let us now consider some examples involving such quantifiers of arities n ≤ 1. As concerns unary (one-place) quantifiers, let us assume that E ≠ ∅ is a given domain of persons. We shall further assume that there are some men among the persons in E, but no children. Hence the extensions of "men" and "children" in E, men, children ∈ P(E), satisfy men ≠ ∅ and children = ∅, respectively. Now consider the existential quantifier ∃ : P(E) −→ 2, which determines the quantification results of ∃(men) = 1 and ∃(children) = 0. We certainly expect the fuzzification process to preserve these interpretations, and application of the associated fuzzy quantifier F(∃) : P̃(E) −→ I should result in F(∃)(men) = 1 and F(∃)(children) = 0. Let us now turn to the case of nullary quantifiers (n = 0). It has already been remarked in Sect. 2.7 that the sets of nullary semi-fuzzy quantifiers and nullary fuzzy quantifiers coincide, regardless of the chosen base set E ≠ ∅. Hence for every nullary semi-fuzzy quantifier Q : P(E)⁰ −→ I, the considered quantifier itself already qualifies as a fuzzy quantifier, and there is nothing to generalize or to fuzzify. It is therefore essential that these quantifiers be mapped to themselves by every plausible choice of a QFM. Now returning to the above equation (3.1), it is easily observed that this is precisely what U(F(Q)) = Q states in the case of nullary Q. This is because both the nullary semi-fuzzy quantifier Q, as well as the nullary fuzzy quantifier F(Q), are uniquely determined by their value obtained at the empty tuple ∅. By Def. 2.11, then,
we obtain from U(F(Q)) = Q and the fact that the empty tuple is crisp that these values coincide, i.e. F(Q)(∅) = U(F(Q))(∅) = Q(∅). Hence Q and F(Q) coincide, as desired. To give an example, consider the nullary quantifier true defined by true(∅) = 1. A conforming choice of F should result in F(true) = true, in order to catch the semantics of the constant quantifier. This completes the discussion of the first requirement which must be fulfilled by all models of fuzzy quantification.
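Equation (3.1) lends itself to a simple mechanical sanity check. The sketch below is illustrative only: it samples finitely many crisp arguments and uses the naive cut-based toy mechanism from the earlier sketch as F, verifying that F(Q) and Q coincide on the supplied crisp arguments, i.e. that U(F(Q)) = Q holds for these cases.

def exists_sf(Y):
    # Semi-fuzzy (here in fact two-valued) existential quantifier on crisp sets
    return 1.0 if Y else 0.0

def cut_qfm(Q):
    # Toy QFM: evaluate the semi-fuzzy quantifier on the 0.5-cuts of the fuzzy arguments
    return lambda *Xs: Q(*({e for e, mu in X.items() if mu >= 0.5} for X in Xs))

def correctly_generalizes(F, Q, crisp_argument_tuples):
    # Check equation (3.1) on a sample: F(Q) restricted to crisp arguments should equal Q
    for Ys in crisp_argument_tuples:
        as_fuzzy = [{e: 1.0 for e in Y} for Y in Ys]   # embed each crisp set as a fuzzy set
        if F(Q)(*as_fuzzy) != Q(*Ys):
            return False
    return True

men, children = {"al", "bo"}, set()
print(correctly_generalizes(cut_qfm, exists_sf, [(men,), (children,)]))   # True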
3.3 Membership Assessment

Membership assessment (crisp or fuzzy) can be modelled through quantification. For an element e of the given base set, we can define a two-valued quantifier πe which checks if e is present in its argument. Similarly, we can define a fuzzy quantifier π̃e which returns the degree to which e is contained in its argument. It is natural to require that the crisp quantifier πe be mapped to π̃e, which plays the same role in the fuzzy case. In order to express this precisely on the formal level, we will now introduce this special type of semi-fuzzy quantifiers (and a corresponding type of fuzzy quantifiers) that allow us to treat membership assessments as a special case of quantification. Let us first recall the concept of a characteristic function and agree on the following notation.

Definition 3.1. Suppose E is a given set and A ∈ P(E) a (crisp) subset of E. By χA : E −→ 2 we denote the characteristic function of A, i.e. the mapping defined by

   χA(e) = 1 if e ∈ A, and 0 if e ∉ A,

for all e ∈ E.

Building on this concept, let us state the following definition of projection quantifiers, which covers the case of crisp arguments:

Definition 3.2. Suppose E is a base set and e ∈ E. The projection quantifier πe : P(E) −→ 2 is defined by

   πe(Y) = χY(e) ,

for all Y ∈ P(E).

Notes 3.3.
•
The use of the term ‘projection quantifier’ is motivated by the observation that we can view πe as the projection of the ‘E-tuple’ χA : E −→ 2 onto that component χA (e), which is indexed by e.
•
To present an example, suppose E = {Joan, Lucas, Mary} is a set of persons and married ∈ P(E) the subset of married persons in E. Then joan = πJoan : P(E) −→ 2 is a projection quantifier, and

   joan(married) = πJoan(married) = 1 if Joan ∈ married, and 0 if Joan ∉ married.

The example illustrates that πJoan can indeed be used to evaluate statements of the type "Is Joan X?", where X is a crisp predicate. In particular, it is suitable for modelling the interpretation of the proper name "Joan" as a two-valued quantifier (provided that the individual Joan is contained in the considered domain). See [7, p. 164+] for a linguistic case that proper names be treated as a special type of NL quantifiers.
A corresponding definition of fuzzy projection quantifiers is straightforward. Definition 3.4. Let a base set E be given and e ∈ E. The fuzzy projection quantifier π e : P(E) −→ I is defined by π e (X) = µX (e) for all X ∈ P(E). Notes 3.5. •
The term ‘fuzzy projection quantifier’ has been chosen because the map ping π e : P(E) −→ I can be viewed as the projection of the ‘E-tuple’ µX : E −→ I onto its component µX (e), which is indexed by e. • Fuzzy projection quantifiers provide the missing class of operators which are suited to model gradual membership assessments like “To which grade is Joan X?” through fuzzy quantification. For example, we simply need to evaluate π Joan (tall) in order to assess the grade to which Joan is tall, and we can compute π Joan (rich) to determine µrich (Joan), the degree to which Joan is rich. In particular, the fuzzy projection quantifier π Joan : P(E) −→ I is suitable for interpreting the proper name “Joan” as a fuzzy quantifier (again assuming that Joan be present in the given base set). It is apparent from the above reasoning that crisp and fuzzy projection quantifiers play the same basic role of crisp/fuzzy membership assessment. Intuitively, a plausible model of fuzzy quantification should be compatible with the fundamental operation of membership assessment, and thus assign to each crisp projection quantifier πe its obvious fuzzy counterpart, the corresponding fuzzy projection quantifier π e . In other words, we expect that F(πe ) = π e , regardless of the chosen base set E = ∅ and considered element e ∈ E. This makes the second requirement to be imposed on all models of fuzzy quantification.
3.4 The Induced Propositional Logic The linguistic theory of quantification knows various constructions on natural language quantifiers which involve the use of a Boolean connective (like negation) or corresponding set-theoretic operation (e.g. complementation). Examples comprise the negation of quantifiers, antonyms, duals, intersections and unions in the arguments of a quantifier, conjunctions and disjunctions of quantifiers, and others.1 In order to generalize these constructions on twovalued quantifiers to semi-fuzzy and fuzzy quantifiers, the Boolean connectives must be replaced with suitable continuous-valued counterparts. Similarly, the move from two-valued quantifiers to fuzzy quantifiers necessitates the use of set-theoretic operations on fuzzy sets, and thus forces us to select a particular fuzzy complement, intersection and union, in terms of which the generalized constructions will then be expressed. Of course, one could simply resort to the standard choices, like standard negation 1 − x, conjunction min and disjunction max, but this would make too strong a commitment and foreclose the theoretical analysis of broader classes of models, which rely on general fuzzy negations, conjunctions and disjunctions. The intent to cover such general models forces us to admit for general fuzzy set operations (rather than the standard choices), which will be used to define the constructions on quantifiers. We will permit the choice of connectives to depend on the considered QFM because these constructions must match the the model. In addition to associating this canonical choice of fuzzy operations to the QFM of interest, we should show that this choice is well-motivated. Again, it would be tedious and obstructive to formal analysis if the required decisions were made on an individual case basis. As in the case of QFMs, we need a general solution which controls the transfer from propositional functions to corresponding fuzzy truth functions under a given QFM. This construction must be fully general and hence applicable to arbitrary QFMs. In order to devise a construction parametrized by the QFM which assigns canonical choices of ‘induced’ fuzzy truth functions and induced fuzzy set operations to the corresponding crisp concepts, we need a natural embedding of propositional functions into semi-fuzzy quantifiers to which the QFM of interest can then be applied. In turn, we also need an inverse construction which determines the target fuzzy truth function from the resulting fuzzy quantifier. The required operations on fuzzy sets can then be defined from the resulting fuzzy connectives in the apparent way, i.e. by applying the fuzzy connectives to the observed membership grades. We have investigated two alternative schemes motivated by independent considerations which rest on distinct embeddings and corresponding inverse constructions. We shall see later in theorem Th. 4.12 that in all intended models, both constructions establish the same canonical choice of induced truth functions. This indicates that the construction of induced fuzzy truth functions, 1
All mentioned constructions will be formally defined and discussed later on.
to be introduced now, indeed results in the appropriate choice of connectives and fuzzy set operations for the given QFM (more details on the alternative construction can be found in Sect. 4.4). In order to establish the link between logical connectives and quantifiers, we first observe that 2ⁿ ≅ P({1, . . . , n}), using the bijection η : 2ⁿ −→ P({1, . . . , n}) defined by

   η(x1, . . . , xn) = {k ∈ {1, . . . , n} : xk = 1} ,   (3.2)

for all x1, . . . , xn ∈ 2. We can use an analogous construction in the fuzzy case. We then have Iⁿ ≅ P̃({1, . . . , n}), based on the bijection η̃ : Iⁿ −→ P̃({1, . . . , n}) defined by

   µη̃(x1,...,xn)(k) = xk   (3.3)

for all x1, . . . , xn ∈ I and k ∈ {1, . . . , n}. These bijections can be utilized for a translation between semi-fuzzy truth functions (i.e. mappings f : 2ⁿ −→ I) and corresponding semi-fuzzy quantifiers Qf : P({1, . . . , n}) −→ I, which embeds propositional functions into semi-fuzzy quantifiers, and similarly the translation from fuzzy quantifiers Q̃ : P̃({1, . . . , n}) −→ I into fuzzy truth functions f̃ : Iⁿ −→ I, which implements the required inverse embedding.
If f : 20 −→ I is a nullary semi-fuzzy truth function (i.e., a constant), then Qf : P(∅) −→ I turns into a nullary semi-fuzzy quantifier on the universe 0 E = {∅}, noticing that P(∅) = P({∅}) = {∅}. Hence in this case, the ) : I0 −→ I can be expressed as F(f )(∅) = fuzzy truth function F(f 0 F(c)(∅), where c : P({∅}) −→ I is the constant c(∅) = f (∅). • We shall not impose restrictions on the induced connectives directly; these will be entailed by the remaining axioms. ) as f. For • Whenever F is clear from the context, we shall abbreviate F(f . example, the induced disjunction will be written ∨
Induced operations on fuzzy sets, i.e. fuzzy complement ¬̃ : P̃(E) −→ P̃(E), fuzzy intersection ∩̃ : P̃(E)² −→ P̃(E) and fuzzy union ∪̃ : P̃(E)² −→ P̃(E), can be defined element-wise in terms of the induced negation ¬̃ : I −→ I, conjunction ∧̃ : I×I −→ I or disjunction ∨̃ : I×I −→ I, respectively. For example, the induced complement ¬̃X ∈ P̃(E) of X ∈ P̃(E) is defined by

   µ¬̃X(e) = ¬̃µX(e) ,

for all X ∈ P̃(E) and e ∈ E. In the following, we will assume that an arbitrary but fixed choice of these connectives and fuzzy set operations is given (usually provided by a QFM). We shall now consider those constructions on semi-fuzzy and fuzzy quantifiers which either involve a fuzzy truth function (like negation), or a corresponding fuzzy set-theoretic operation (like complementation). The considered constructions therefore depend on the underlying fuzzy truth function or operation on fuzzy sets that has been chosen as a generalisation of the corresponding crisp operation. By introducing a canonical construction of fuzzy truth functions that are induced by a QFM, and by then defining the induced operations on fuzzy sets based on these truth functions in the apparent way, we have solved the problem of selecting a suitable fuzzy negation, fuzzy disjunction etc. from the endless choices of possible fuzzy negations, disjunctions etc. that could be considered for each QFM of interest. It is now expected that each model of fuzzy quantification be 'self-consistent', i.e. compatible with its own set of induced connectives.
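The construction of Definition 3.6 can be traced step by step in code. The sketch below is again only illustrative: it uses the naive cut-based toy mechanism as the QFM, so the induced connectives come out two-valued, whereas the plausible models discussed later induce genuinely fuzzy connectives. The embedding of a truth function f as the semi-fuzzy quantifier Qf via η, and the read-off of the induced truth function via η̃, are however exactly the steps of the definition.

def eta_inv(Y, n):
    # Inverse of the bijection eta: recover the bit tuple (x_1, ..., x_n) from Y, a subset of {1, ..., n}
    return tuple(1 if k in Y else 0 for k in range(1, n + 1))

def cut_qfm(Q):
    # Toy QFM for illustration: evaluate the semi-fuzzy quantifier on the 0.5-cut of its argument
    return lambda X: Q({k for k, mu in X.items() if mu >= 0.5})

def induced_truth_function(F, f, n):
    # Induced fuzzy truth function F~(f) of an n-ary two-valued truth function f (Definition 3.6)
    Q_f = lambda Y: f(*eta_inv(Y, n))                  # embed f as a semi-fuzzy quantifier Q_f
    def f_tilde(*xs):
        X = {k: xs[k - 1] for k in range(1, n + 1)}    # eta~(x_1, ..., x_n): a fuzzy subset of {1, ..., n}
        return F(Q_f)(X)
    return f_tilde

disj = induced_truth_function(cut_qfm, lambda x, y: max(x, y), 2)   # induce a disjunction
print(disj(0.3, 0.8))   # 1 under the toy mechanism; plausible models yield graded results here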
3.5 The Aristotelian Square

Based on the induced fuzzy negation and complement, we can now express important constructions on quantifiers like negation, formation of antonyms, and dualisation. These constructions are well-known from logic and linguistics because they manifest themselves on the linguistic surface, and it is therefore essential for models of fuzzy quantification to be compatible with these constructions (even though this requirement proved to be notoriously difficult for earlier approaches to fuzzy quantification). Let us now consider those constructions in turn that depend on the induced negation or complementation in some way; constructions that build on other truth functions or set-theoretic operations will be considered later on. In analogy to the external negation ¬Q : P(E)^n −→ 2 of a two-valued quantifier Q : P(E)^n −→ 2 in TGQ, which is based on the two-valued negation ¬ : 2 −→ 2 [7, p. 186] and [50, p. 236], we shall first introduce the external negation of (semi-)fuzzy quantifiers. In natural language, this operation corresponds to the negation of a whole sentence, rather than negation of the noun phrase.
Definition 3.8.
The external negation ¬̃Q : P(E)^n −→ I of a semi-fuzzy quantifier Q : P(E)^n −→ I is defined by

(¬̃Q)(Y1, . . . , Yn) = ¬̃(Q(Y1, . . . , Yn)) ,

for all Y1, . . . , Yn ∈ P(E). The definition of ¬̃Q : P̃(E)^n −→ I in the case of fuzzy quantifiers Q : P̃(E)^n −→ I is analogous.

For example, "no" is the negation of "some". Hence

no(women, men) = ¬ some(women, men) ,

which formally expresses that "No women are men" can be paraphrased as "It is not the case that some women are men".²

² Here and in the following examples we will assume that the considered negation ¬̃ satisfies ¬̃0 = 1 and ¬̃1 = 0. This property of the induced negation will be ensured by the axioms stated at the end of the chapter.

In addition to the external negation ¬Q of a two-valued quantifier, TGQ discerns another type of negation, which corresponds to the antonym or internal negation Q¬ of a two-valued quantifier [7, p. 186] and [50, p. 237]. The term 'internal complementation' (rather than 'internal negation') has been chosen because the construction involves the complementation of one of the argument sets.

Definition 3.9.
Let a semi-fuzzy quantifier Q : P(E)^n −→ I of arity n > 0 be given. The antonym Q¬ : P(E)^n −→ I of Q is defined by

Q¬(Y1, . . . , Yn) = Q(Y1, . . . , Yn−1, ¬Yn) ,

for all Y1, . . . , Yn ∈ P(E). The antonym Q¬̃ : P̃(E)^n −→ I of a fuzzy quantifier Q : P̃(E)^n −→ I is defined analogously, based on the given fuzzy complement ¬̃.

Notes 3.10.
• For example, "no" is the antonym of "all". Hence

no(women, men) = all(women, ¬men) ,

which formally captures that "No women are men" can be paraphrased as "All women are not men".
• The antonym is constructed by (crisp or fuzzy) complementation in the last argument of the quantifier. This conforms to linguistic expectations because, by convention, we have used the last argument to accept the interpretation of the verbal phrase; e.g. the NL expression "No women are men" is interpreted by inserting the interpretation men ∈ P(E) of the verbal phrase "are men" into the second argument slot. Compatibility with internal complementation should not be artificially restricted to the n-th argument, though, and the particular indexing of the arguments should be inessential to the outcome of fuzzy quantification. Plausible approaches to fuzzy quantification should therefore respect complementation in all arguments. By means of permutations of arguments, to be discussed in Sect. 4.5, we will be able to reduce the general compatibility condition to the base condition on complementation in the last argument.
• Zadeh [197, p. 165] has proposed a notion of antonymy for proportional quantifiers, which is defined on their representations in terms of membership functions µQ : I −→ I.

TGQ also knows the concept of the dual of a two-valued quantifier (written Q̃ in our notation), which is the antonym of the negation of a quantifier (or equivalently, the negation of the antonym) [7, p. 197], [50, p. 238]:

Definition 3.11.
The dual Q̃ : P(E)^n −→ I of a semi-fuzzy quantifier Q : P(E)^n −→ I, n > 0, is defined by

Q̃(Y1, . . . , Yn) = ¬̃ Q(Y1, . . . , Yn−1, ¬Yn) ,

for all Y1, . . . , Yn ∈ P(E). The dual ¬̃(Q¬̃) of a fuzzy quantifier Q : P̃(E)^n −→ I is defined analogously.

For example, "some" is the dual of "all". Again resorting to crisp concepts like "men" and "mammals", we may therefore assert that

all(men, mammals) = ¬ some(men, ¬mammals) .

The latter relationship ensures that "All men are mammals" can be paraphrased as "It is not the case that some men are not mammals". Acknowledging the significance of these constructions to the description of natural language, we expect plausible models of fuzzy quantification to be homomorphic with respect to these constructions on quantifiers. Hence F(no) should be the negation of F(some), F(no) should be the antonym of F(all), and F(some) should be the dual of F(all). Concerning external negation, this will ensure, for example, that "No rich are young" can be paraphrased as "It is not the case that some rich are young", which is justified because F(no)(rich, young) = ¬̃ F(some)(rich, young). As to internal complementation, we obtain that F(no)(rich, young) = F(all)(rich, ¬̃young), and thus "No rich are young" can also be phrased as "All rich are not young". Finally, in the case of the dual, we conclude from F(all)(rich, old) = ¬̃ F(some)(rich, ¬̃old) that "All rich are old" means the same as "It is not the case that some rich are not old".
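The three constructions are easily expressed as higher-order functions. The following sketch (illustrative only; it uses the ordinary two-valued quantifiers on crisp arguments of a small hypothetical universe and the standard negation 1−x) checks the relationships between "no", "some" and "all" stated above.

```python
# Sketch: external negation, antonym and dual of two-place quantifiers,
# on crisp arguments only, with the standard negation 1 - x.

neg = lambda x: 1.0 - x

def all_q(Y1, Y2):  return 1.0 if Y1 <= Y2 else 0.0
def some_q(Y1, Y2): return 1.0 if Y1 & Y2 else 0.0
def no_q(Y1, Y2):   return 1.0 if not (Y1 & Y2) else 0.0

def external_negation(Q):
    return lambda Y1, Y2: neg(Q(Y1, Y2))

def antonym(Q, E):
    return lambda Y1, Y2: Q(Y1, E - Y2)        # complement the last argument

def dual(Q, E):
    return lambda Y1, Y2: neg(Q(Y1, E - Y2))   # negation of the antonym

E = {"ann", "bob", "carl", "dora"}
women, men = {"ann", "dora"}, {"bob", "carl"}
assert no_q(women, men) == external_negation(some_q)(women, men)
assert no_q(women, men) == antonym(all_q, E)(women, men)
assert some_q(women, men) == dual(all_q, E)(women, men)
print("no = not-some, no = all-antonym, some = all-dual: checked")
```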
The interdependencies of external negation, internal complementation (formation of antonyms) and dualisation are summarized in the Aristotelian square.3 The Aristotelian square of the quantifier “all” is displayed in Fig. 3.1.
[Fig. 3.1. The Aristotelian square of "all": its corners are "all", "some", "no" and "not all"; "all"–"some" and "no"–"not all" are related by dualisation, "all"–"no" and "some"–"not all" by antonymy, and "all"–"not all" and "some"–"no" by external negation.]
The Aristotelian square expresses in graphical form that the operators (•) (identity), ¬(•) (external negation), (•)¬ (antonym) and (•)~ (dualisation) constitute a Klein group structure on semi-fuzzy quantifiers under the operation ◦ of composition:

    ◦       (•)     ¬(•)    (•)¬    (•)~
    (•)     (•)     ¬(•)    (•)¬    (•)~
    ¬(•)    ¬(•)    (•)     (•)~    (•)¬
    (•)¬    (•)¬    (•)~    (•)     ¬(•)
    (•)~    (•)~    (•)¬    ¬(•)    (•)

The composition table for fuzzy quantifiers is identical, with the external negation, antonym and dual now formed by means of the induced negation ¬̃ and the induced complement.
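As a quick sanity check, the Klein group relations can be verified extensionally on a toy universe. The sketch below is illustrative only: it reuses the two-valued "some" on crisp arguments of a hypothetical two-element universe and the standard negation, and compares quantifiers by evaluating them on all crisp argument pairs.

```python
# Sketch: checking some Klein group relations for the four operators.
from itertools import combinations

E = frozenset({1, 2})

def powerset(E):
    s = sorted(E)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

neg = lambda x: 1.0 - x
def ext_neg(Q):  return lambda Y1, Y2: neg(Q(Y1, Y2))
def antonym(Q):  return lambda Y1, Y2: Q(Y1, E - Y2)
def dual(Q):     return lambda Y1, Y2: neg(Q(Y1, E - Y2))

def same(Q1, Q2):
    return all(Q1(Y1, Y2) == Q2(Y1, Y2)
               for Y1 in powerset(E) for Y2 in powerset(E))

def some_q(Y1, Y2): return 1.0 if Y1 & Y2 else 0.0

# ext-neg o antonym = dual, dual o ext-neg = antonym, antonym o antonym = id.
assert same(ext_neg(antonym(some_q)), dual(some_q))
assert same(dual(ext_neg(some_q)), antonym(some_q))
assert same(antonym(antonym(some_q)), some_q)
print("Klein group relations verified on the example quantifier 'some'")
```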
The square can be properly set up for arbitrary semi-fuzzy quantifiers Q : P(E)^n −→ I and fuzzy quantifiers Q : P̃(E)^n −→ I provided that ¬̃ : I −→ I is involutive and satisfies ¬̃1 = 0 and ¬̃0 = 1.³ These conditions will of course be entailed by the DFS axioms stated below. The requirement that F be compatible with external negation, internal complementation (formation of antonyms) and dualisation can then be summarized into the condition that the considered model of fuzzy quantification preserve Aristotelian squares (for more details, see Sect. 4.7). In order to warrant this, there is no need to explicitly require the compliance of F with all three constructions involved in the square. As will later be shown in Th. 4.19 and Th. 4.20, the targeted conformance to these constructions can be distilled into a single representative requirement, that of preserving duals of quantifiers. It is hence sufficient to demand that

F(Q̃) = ¬̃(F(Q)¬̃) ,

i.e. that F map the dual Q̃ = ¬̃(Q¬) of Q to the dual of F(Q), for all semi-fuzzy quantifiers Q : P(E)^n −→ I of arity n > 0. The latter condition constitutes the third requirement, which governs plausible choices of models.

³ Sometimes also referred to as the square of opposition, see e.g. Gamut [50, p. 238].
3.6 Unions of Argument Sets

Apart from complementation, we can also perform other set-theoretic operations on the arguments of a quantifier. Due to De Morgan's law, and the known compatibility of the considered models with complementation, which has been ensured by the previous requirement, it is sufficient to consider unions of arguments in order to ensure that the given model of fuzzy quantification fully preserves the Boolean argument structure that can be expressed in NL. Let us hence introduce the construction which builds new quantifiers from given ones by means of forming the union of arguments:

Definition 3.12.
Let a semi-fuzzy quantifier Q : P(E)^n −→ I of arity n > 0 be given. We define the semi-fuzzy quantifier Q∪ : P(E)^(n+1) −→ I by

Q∪(Y1, . . . , Yn+1) = Q(Y1, . . . , Yn−1, Yn ∪ Yn+1)

for all Y1, . . . , Yn+1 ∈ P(E). In the case of fuzzy quantifiers, Q∪̃ is defined analogously, based on the given fuzzy set operation ∪̃.

Notes 3.13.
•
To see how this construction expresses on the level of NL surface, consider the example that “Most men drink or smoke”, and its model as a quantifying expression, most(men, drink ∪ smoke), where we have assumed for the sake of argument that the extensions smoke, drink ∈ P(E) of “smoke” and “drink” be crisp. The above quantifying expression can be decomposed into the constructed quantifier “Most Y1 are Y2 or Y3 ”, i.e. Q = most∪, which is applied to the argument triple (men, drink, smoke).
• The construction also underlies the definition of the two-place NL quantifier "all" in terms of the unary universal quantifier ∀ known from logic, because all(Y1, Y2) = ∀((¬Y1) ∪ Y2).
• The dual construction of intersections in arguments can be defined along the same lines, see Sect. 4.9.
In order to ensure the full preservation of Boolean argument structure, conforming models of fuzzy quantification can be expected to comply with the above construction of unions in arguments. We therefore require that F preserve the union of arguments, which is formally captured by the equality

F(Q∪) = F(Q)∪̃        (3.4)

which must be valid for arbitrary semi-fuzzy quantifiers Q : P(E)^n −→ I of arity n > 0. Hence consider the example that "Most rich are old or lucky", which is modelled by the fuzzy quantifying expression F(most)(rich, old ∪̃ lucky) built from the fuzzy extensions rich, old, lucky ∈ P̃(E) of "rich", "old", and "lucky" persons, respectively. The compliance of F with equation (3.4) then ensures that the involved composite quantifier F(most)∪̃ can be properly represented by its underlying semi-fuzzy quantifier Q′ = most∪. Hence it does not matter whether "Most rich are old or lucky" is computed by evaluating F(most)(rich, old ∪̃ lucky) or by resorting to the decomposition F(Q′)(rich, old, lucky) based on Q′(Y1, Y2, Y3) = most(Y1, Y2 ∪ Y3). As mentioned above, the requirement (3.4) imposed on unions of arguments can be combined with other conditions like compatibility with internal complementation, in order to achieve full preservation of Boolean argument structure.⁴ In particular, this will ensure that F(all)(X1, X2) = F(∀)((¬̃X1) ∪̃ X2), i.e. the two-place NL quantifier "all" can be determined from the unary universal quantifier ∀ even when there is fuzziness in the arguments. This completes the discussion of Boolean argument structure, and the corresponding condition of preserving unions in arguments, which formalizes the fourth requirement on models of fuzzy quantification.
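On crisp arguments, the decomposition discussed above can be checked directly. The following sketch is illustrative only: "most" is modelled, purely for the sake of the example, as a proportion threshold of one half, and the sets of men, drinkers and smokers are hypothetical.

```python
# Sketch: the construction Q-union of Def. 3.12 on crisp arguments, and the
# decomposition of "Most men drink or smoke".

def most(Y1, Y2):
    return 1.0 if Y1 and len(Y1 & Y2) / len(Y1) > 0.5 else 0.0

def union_in_last_argument(Q):
    """Q-union(Y1, Y2, Y3) = Q(Y1, Y2 | Y3), shown here for n = 2."""
    return lambda Y1, Y2, Y3: Q(Y1, Y2 | Y3)

men   = {"al", "bo", "cy", "don"}
drink = {"al", "bo"}
smoke = {"cy"}

most_union = union_in_last_argument(most)
assert most(men, drink | smoke) == most_union(men, drink, smoke) == 1.0
print("most(men, drink OR smoke) =", most_union(men, drink, smoke))
```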
3.7 Monotonicity in the Arguments

It is essential for a model of fuzzy quantification to comply with the expected entailment relationships. The fifth requirement on plausible models, to be introduced now, captures those entailments that stem from monotonicity properties of the involved quantifiers. In order to express the relevant monotonicity properties of semi-fuzzy and fuzzy quantifiers, let us first recall the definition of the fuzzy inclusion relation.
⁴ Assuming that none of the Boolean variables Xi occurs more than once.
Definition 3.14 (Fuzzy inclusion relation).
Suppose E is some set and X1, X2 ∈ P̃(E) are fuzzy subsets of E. We say that X1 is contained in X2 (in symbols, X1 ⊆ X2) if µ_X1(e) ≤ µ_X2(e) for all e ∈ E.

Based on this concept, we can now state precisely what it means for a (semi-)fuzzy quantifier to be monotonic in one of its arguments.

Definition 3.15 (Monotonicity in the i-th argument).
A semi-fuzzy quantifier Q : P(E)^n −→ I is said to be nondecreasing in its i-th argument, i ∈ {1, . . . , n}, if

Q(Y1, . . . , Yn) ≤ Q(Y1, . . . , Yi−1, Yi′, Yi+1, . . . , Yn)

whenever the involved arguments Y1, . . . , Yn, Yi′ ∈ P(E) satisfy Yi ⊆ Yi′. Q is said to be nonincreasing in the i-th argument if, under the same conditions, it always holds that

Q(Y1, . . . , Yn) ≥ Q(Y1, . . . , Yi−1, Yi′, Yi+1, . . . , Yn) .

The corresponding definitions for fuzzy quantifiers Q : P̃(E)^n −→ I are entirely analogous. In this case, the arguments range over P̃(E), and '⊆' is the fuzzy inclusion relation.

Notes 3.16.
• To present an example, "all" is nonincreasing in the first argument and nondecreasing in the second argument. From the former property, we then know that all(married ∩ men, have children) ≥ all(men, have children), which expresses that "All married men have children" makes a weaker requirement than "All men have children". As to the latter property, we may conclude from the nondecreasing monotonicity of "all" in its second argument that all(men, have daughters) ≤ all(men, have children). Consequently, "All men have daughters" poses a stronger condition than "All men have children".
• In TGQ, nondecreasing monotonicity in the scope argument (i.e. in the argument slot which accepts the interpretation of the verb phrase) is usually termed upward monotonicity, while a quantifier which is nonincreasing in
this position is called downward monotonic, see e.g. [50, p. 232+].⁵

⁵ Barwise and Cooper [7, p. 184+], however, refer to the upward and downward monotonic kinds as monotone increasing and monotone decreasing, respectively.

Recalling the convention of reserving the last argument for the denotation of the verb phrase, this means that upward monotonic quantifiers correspond to those quantifiers which are nondecreasing in the n-th argument, and downward monotonicity captures those quantifiers that are nonincreasing in their n-th argument. For example, "all" is upward monotonic and "no" is downward monotonic. The property of upward/downward monotonicity is rather typical of NL quantifiers.
• In the common case of two-place quantification, TGQ has also coined special terms for monotonicity in the first argument. In TGQ, those two-place quantifiers that are nondecreasing in their restriction are dubbed persistent, while those that are nonincreasing in the first argument are called antipersistent, see e.g. [7, p. 193] and [50, p. 242+]. Simple examples are "some" (persistent) and "all" (antipersistent). Compared to upward/downward monotonicity, there are fewer instances of NL quantifiers which are persistent or antipersistent [50, p. 243]. For example, proportional quantifiers like "most" typically lack both persistence and antipersistence.
• Zadeh [197, p. 164] has coined the similar notions of monotone nondecreasing and monotone nonincreasing quantifiers, which refer to the membership functions µQ : I −→ I used to define the proportional type.

Let us now discuss the close relationship between monotonicity properties of a quantifier and valid patterns of reasoning, and show how the monotonicity type of a quantifier constrains the inventory of admissible syllogisms. Hence consider the quantifier "more than ten", which is nondecreasing in its first argument. It then holds that

more than ten(married ∩ men, have children) ≤ more than ten(men, have children) .

It is this inequality which justifies the syllogism

More than ten married men have children
All married men are men
More than ten men have children

Other quantifiers which are also nondecreasing in the first argument, like "some", can be substituted for "more than ten" here. In the case of a quantifier which is nonincreasing in the first argument, like "no" or "all", the above pattern is no longer valid, and must be replaced with a pattern which fits the quantifier type. In this case, we obtain the converse inequality

no(men, have children) ≤ no(married ∩ men, have children)

and a corresponding pattern of reasoning,
No men have children
All married men are men
No married men have children

As concerns monotonicity in the second argument, let us consider the quantifier "most", which is nondecreasing in its second argument. We then obtain that

most(men, married) ≤ most(men, married ∪ divorced) ,

i.e. "Most men are married" expresses a stronger condition than "Most men are married or divorced". Again, the above inequality justifies the pattern of reasoning

Most men are married
All married are married or divorced
Most men are married or divorced

Let us now turn to the fuzzy case. Even when there is fuzziness in the arguments, we would certainly expect the above patterns of reasoning to remain applicable. Because the above syllogisms are justified by the underlying inequalities that express the monotonicity properties of the considered quantifiers, this means that we must enforce the preservation of monotonicity properties in order to maintain the applicable patterns of reasoning. For example, it should hold in a plausible model of fuzzy quantification that

F(more than ten)(young ∩̃ men, rich) ≤ F(more than ten)(young ∩̃ persons, rich) ,

assuming that men ⊆ persons. It is this inequality which permits us to conclude from "More than ten young men are rich" to the entailed "More than ten young persons are rich", regardless of the fuzziness in young, rich ∈ P̃(E). Similarly, it should hold that

F(most)(young, very poor) ≤ F(most)(young, poor) ,

assuming that the fuzzy subsets very poor, poor ∈ P̃(E) satisfy very poor ⊆ poor. We can then conclude from "Most young are very poor" that indeed "Most young are poor". These examples illustrate that the considered entailment relationships will transfer to the general case of fuzzy arguments, provided that the chosen model of quantification F preserves the underlying monotonicity properties. The relevant adequacy condition on QFMs can then be phrased as follows.

Definition 3.17 (Preservation of monotonicity in the arguments).
A QFM F is said to preserve monotonicity in the arguments if semi-fuzzy quantifiers Q : P(E)^n −→ I which are nondecreasing (nonincreasing) in their i-th argument, i ∈ {1, . . . , n}, are mapped to fuzzy quantifiers F(Q) which are also nondecreasing (nonincreasing) in their i-th argument.
Notes 3.18. •
For example, F(all) should be nonincreasing in the first and nondecreasing in the second argument. The above motivating cases based on F(more than ten) and F(most) are also instances of the general preservation property, of course. • When combined with the other requirements on models of fuzzy quantification, the preservation condition can be restricted to the case that Q is nonincreasing in its last argument. Again, this restriction will serve the purpose of simplifying the axiom system and ensuring its independence. The fifth requirement on plausible models will be stated in terms of nonincreasing (rather than nondecreasing) monotonicity for technical purposes; when presented in this way, the criterion facilitates the proof that F(¬) is a strong negation operator. To give an example where the restricted property applies directly, consider the two-valued quantifier “no”, which is nonincreasing in its last argument. By the restricted preservation condition, then, F(no) is nonincreasing in its last argument also. For instance, this ensures that F(no)(young, rich) ≤ F(no)(young, very rich) , i.e. “No young are very rich” is entailed by “No young are rich”.
To sum up, every plausible model of fuzzy quantification should preserve the relevant entailment relationships, which are often tied to the monotonicity type of the involved quantifier. It is thus crucial for the theory of fuzzy quantification to consider the underlying monotonicity properties of quantifiers which show up in these entailments. The fifth requirement on models of fuzzy quantification therefore enforces a criterion which is essential to the preservation of general monotonicity properties, and in turn ensures that all corresponding patterns of reasoning remain valid in the presence of fuzziness.
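Monotonicity in an argument, in the sense of Def. 3.15, can be checked mechanically on small universes. The sketch below is illustrative only: it enumerates all crisp argument pairs over a hypothetical three-element universe and tests the quantifier "some" for nondecreasing behaviour in each argument.

```python
# Sketch: numerically checking monotonicity of a semi-fuzzy quantifier
# in a given argument over all crisp argument tuples of a small universe.
from itertools import combinations, product

E = {1, 2, 3}

def powerset(E):
    s = sorted(E)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def some_q(Y1, Y2):
    return 1.0 if Y1 & Y2 else 0.0

def nondecreasing_in(Q, i, E, arity=2):
    """True if enlarging the i-th argument (0-based) never decreases Q."""
    subsets = powerset(E)
    for args in product(subsets, repeat=arity):
        for bigger in subsets:
            if args[i] <= bigger:
                enlarged = args[:i] + (bigger,) + args[i + 1:]
                if Q(*args) > Q(*enlarged):
                    return False
    return True

print(nondecreasing_in(some_q, 0, E))  # True: "some" is persistent
print(nondecreasing_in(some_q, 1, E))  # True: "some" is upward monotonic
```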
3.8 The Induced Extension Principle

The final requirement on plausible models of fuzzy quantification is concerned with the problem of establishing a systematic relationship between the interpretations of quantifiers on different base sets. To this end, a homomorphism condition will be introduced based on powerset mappings which connect the behaviour of the fuzzification mechanism across domains:

Definition 3.19 (Powerset mapping).
To each mapping f : E −→ E′, we associate a mapping f̂ : P(E) −→ P(E′) (the powerset mapping of f) which is defined by

f̂(Y) = {f(e) : e ∈ Y} ,

for all Y ∈ P(E).
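A quick illustrative sketch of Def. 3.19 (the mapping from people to countries and its values are hypothetical):

```python
# Sketch of Def. 3.19: the powerset mapping associated with a base mapping f,
# acting on crisp subsets of the domain.

def powerset_mapping(f):
    """Return the mapping Y |-> {f(e) : e in Y}."""
    return lambda Y: frozenset(f(e) for e in Y)

# Example: f maps people to their countries; the powerset mapping then maps
# a set of people to the set of countries represented in it.
country = {"ann": "fr", "bob": "de", "carl": "de"}.get
f_hat = powerset_mapping(country)
print(f_hat({"ann", "bob", "carl"}))   # frozenset({'fr', 'de'})
```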
Notes 3.20. •
Often the same symbol f is used to denote both the original mapping and its extension to powersets. In the present context, however, it is important to discern the base mapping f from its associated powerset mapping, and we will therefore use the f̂-notation throughout.
• There is a closely related concept, namely that of the inverse image mapping f⁻¹ : P(E′) −→ P(E) of a given f : E −→ E′, which is (as usual) defined by

f⁻¹(V) = {e ∈ E : f(e) ∈ V} ,        (3.5)
for all V ∈ P(E′). Often, if V is a singleton, i.e. V = {v} for some v ∈ E′, we will simply write f⁻¹(v).

The underlying mechanism which transports f to f̂ can be generalized to the case of fuzzy sets; such a mechanism is then called an extension principle. Formally, we define (a pretty general class of) extension principles as follows.

Definition 3.21 (Extension principle).
An extension principle E assigns to each mapping f : E −→ E′ a corresponding mapping E(f) : P̃(E) −→ P̃(E′). For convenience, we shall assume that E, E′ ≠ ∅.

Notes 3.22.
• Extension principles thus provide the desired mechanism which associates fuzzy powerset mappings E(f) : P̃(E) −→ P̃(E′) with given base mappings f : E −→ E′. We do not impose any a priori restrictions on the well-behavedness of extension principles. These will later result from the axioms on plausible models of fuzzy quantification. A survey of the intuitive expectations on reasonable choices, along with the theorems stating that these semantical conditions are fulfilled in the models, is given below in Sect. 4.12. These results and the analysis of standard quantifiers in Sect. 4.16 will further reveal that plausible extension principles can be expressed in terms of existential quantification, i.e. they can be represented by a possibly infinitary formula built from an underlying s-norm.
• The case that E = ∅ or E′ = ∅ has been excluded in order to allow a simpler definition of the induced extension principles that will be associated with QFMs (to be defined below in Def. 3.25). After all, the case of base mappings f with an empty domain or range is irrelevant to our present purposes anyway. However, the issue is likely to be judged differently when discussing extension principles in a broader context. The restriction to nonempty sets is just a matter of convenience because it will simplify the later definition of the induced extension principle. Every extension principle (in the sense of the above definition) can easily
be completed into a 'full' extension principle, which is also defined in the case of E = ∅ or E′ = ∅. Hence consider f : ∅ −→ E′. In this case, we stipulate that

E(f) = c∅ ,        (3.6)

where c∅ : P̃(∅) −→ P̃(E′) is the constant which assigns the set c∅(∅) = ∅ ∈ P̃(E′) to ∅ ∈ P̃(∅) = {∅}. In the remaining case that E′ = ∅, we simply observe from the definition of a function that the considered f : E −→ ∅ only qualifies as a mapping if E = ∅ as well. However, the case that E = ∅ is already covered by equation (3.6).

The prototypical example of an extension principle has been suggested by Zadeh [189]. Recast in our notation, this 'standard extension principle' is defined as follows.

Definition 3.23 (Standard extension principle).
Let f : E −→ E′ be a mapping. The standard extension principle assigns to f the fuzzy powerset mapping f̂̂ : P̃(E) −→ P̃(E′) defined by

µ_{f̂̂(X)}(y) = sup{µ_X(e) : e ∈ f⁻¹(y)}

for all X ∈ P̃(E) and y ∈ E′, where f⁻¹(y) = {e ∈ E : y = f(e)}.

Note 3.24.
The extension principle can also be generalized to n-ary mappings f : E1 × · · · × En −→ E′, which it takes to n-ary fuzzy powerset mappings f̂̂ : P̃(E1) × · · · × P̃(En) −→ P̃(E′), defined by

µ_{f̂̂(X1,...,Xn)}(y) = sup{ min_{i=1,...,n} µ_{Xi}(ei) : (e1, . . . , en) ∈ f⁻¹(y)} ,

for all X1 ∈ P̃(E1), . . . , Xn ∈ P̃(En) and y ∈ E′; see Yager [178] for details. This generalized version of the extension principle will be of no importance for the following. However, it is apparent that the general extension principles E considered here can be generalized to n-place mappings along the same lines.

With each QFM, we can associate a corresponding extension principle through a canonical construction.

Definition 3.25.
Every QFM F induces an extension principle F̂ which to each f : E −→ E′ (where E, E′ ≠ ∅) assigns the mapping F̂(f) : P̃(E) −→ P̃(E′) defined by

µ_{F̂(f)(X)}(e′) = F(χ_{f̂(•)}(e′))(X) ,

for all X ∈ P̃(E), e′ ∈ E′; or more succinctly:

µ_{F̂(f)(X)}(e′) = F(π_{e′} ∘ f̂)(X)
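For a finite universe, the standard extension principle of Def. 3.23 amounts to taking, for each target element, the maximum of the memberships over its preimage. The sketch below is illustrative only; the mapping from people to countries and the membership values of the fuzzy set are hypothetical.

```python
# Sketch of the standard extension principle (Def. 3.23) on finite universes:
# a base mapping f : E -> E' is lifted to fuzzy subsets of E by taking, for
# each y in E', the supremum (here: max) over the preimage f^{-1}(y).
# Fuzzy sets are represented as dicts of membership degrees.

def standard_extension(f, E, E_prime):
    def extended(X):
        result = {}
        for y in E_prime:
            preimage = [e for e in E if f(e) == y]
            result[y] = max((X.get(e, 0.0) for e in preimage), default=0.0)
        return result
    return extended

E, E_prime = {"ann", "bob", "carl"}, {"fr", "de"}
country = {"ann": "fr", "bob": "de", "carl": "de"}.get
tall = {"ann": 0.3, "bob": 0.9, "carl": 0.5}          # a fuzzy subset of E
print(standard_extension(country, E, E_prime)(tall))  # {'fr': 0.3, 'de': 0.9}
```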
The equivalence of the first and second form in Def. 3.25 is obvious from the definition of projection quantifiers, see Def. 3.2. The powerset functions f̂ : P(E) −→ P(E′) and the corresponding fuzzy powerset functions F̂(f) : P̃(E) −→ P̃(E′) obtained from the induced extension principle are important in our context because they can be applied to the argument sets of semi-fuzzy quantifiers (crisp case, using f̂) and of fuzzy quantifiers (fuzzy case, using F̂(f)). We require that every 'reasonable' choice of F be compatible with its induced extension principle in the following sense.

Suppose that Q : P(E)^n −→ I is a semi-fuzzy quantifier and f1, . . . , fn : E′ −→ E are given mappings, E′ ≠ ∅. We can construct a semi-fuzzy quantifier Q′ : P(E′)^n −→ I by composing Q with the powerset mappings f̂1, . . . , f̂n, i.e.

Q′(Y1, . . . , Yn) = Q(f̂1(Y1), . . . , f̂n(Yn)) ,

for all Y1, . . . , Yn ∈ P(E′). This can be expressed more compactly if we recall the concept of a product mapping. If g1, . . . , gn : A −→ B are any mappings, then ×_{i=1}^{n} gi : A^n −→ B^n is defined by

(×_{i=1}^{n} gi)(x1, . . . , xn) = (g1(x1), . . . , gn(xn)) .

By using the product, the above definition of Q′ then becomes Q′ = Q ∘ ×_{i=1}^{n} f̂i, because

(Q ∘ ×_{i=1}^{n} f̂i)(Y1, . . . , Yn) = Q(f̂1(Y1), . . . , f̂n(Yn))        (3.7)
for all Y1, . . . , Yn ∈ P(E′); '∘' denotes functional composition. By utilizing the induced extension principle F̂ of a QFM, we can perform a similar construction on fuzzy quantifiers, thus composing a fuzzy quantifier Q : P̃(E)^n −→ I with F̂(f1), . . . , F̂(fn) : P̃(E′) −→ P̃(E) to form the fuzzy quantifier Q ∘ ×_{i=1}^{n} F̂(fi) : P̃(E′)^n −→ I defined by

(Q ∘ ×_{i=1}^{n} F̂(fi))(X1, . . . , Xn) = Q(F̂(f1)(X1), . . . , F̂(fn)(Xn)) ,

for all X1, . . . , Xn ∈ P̃(E′). We require that a plausible choice of QFM F comply with this construction, and hence impose the following homomorphism condition with respect to the application of (crisp or fuzzy) powerset functions:

Definition 3.26 (Compatibility with functional application).
Let F be a given QFM. We say that F is compatible with functional application if the equality

F(Q ∘ ×_{i=1}^{n} f̂i) = F(Q) ∘ ×_{i=1}^{n} F̂(fi)        (3.8)

is valid for all semi-fuzzy quantifiers Q : P(E)^n −→ I and mappings f1, . . . , fn : E′ −→ E with domain E′ ≠ ∅.
Notes 3.27.
• The term 'functional application' has been chosen because the constructed quantifier is obtained from applying the extended functions f1, . . . , fn to the corresponding arguments.
• Let us notice in advance that it is not possible to span the intended class of models if the induced powerset mappings F̂(fi) are replaced with the standard choice of fuzzy powerset mappings f̂̂i obtained from the standard extension principle. As will be shown below in Th. 4.62, the use of the standard extension principle would restrict the admissible models of fuzzy quantification to those that induce the standard disjunction F(∨) = max. This is too restrictive because we would like to have models for arbitrary t- and s-norms. It was the intent to cover such general models which motivated the development of the induced extension principle, because any reference to the standard extension principle would result in a restriction to the limited class of 'standard models'. (Of course, the standard models are the preferred choice in most applications, but it deepens the knowledge of fuzzy quantification if other models can also be studied.)

As mentioned above, the induced extension principle F̂ enables us to construct fuzzy quantifiers Q ∘ ×_{i=1}^{n} F̂(fi) on a base set E from a given fuzzy quantifier Q on another base set E′. It is this cross-domain characteristic which makes the compliance condition expressed by (3.8) especially important to the theory of fuzzy quantification. In fact, the required compatibility with functional application will constitute the only criterion in the proposed axiom system which controls the behaviour of F on different base sets E, E′. For example, if β : E −→ E′ is a bijection, Q : P(E′)^n −→ I is a semi-fuzzy quantifier, and Q′ : P(E)^n −→ I is defined by

Q′(Y1, . . . , Yn) = Q(β̂(Y1), . . . , β̂(Yn))

for all Y1, . . . , Yn ∈ P(E), then

F(Q′)(X1, . . . , Xn) = F(Q)(F̂(β)(X1), . . . , F̂(β)(Xn))

for all X1, . . . , Xn ∈ P̃(E), and

F(Q)(X1, . . . , Xn) = F(Q′)(F̂(β⁻¹)(X1), . . . , F̂(β⁻¹)(Xn))

for all X1, . . . , Xn ∈ P̃(E′), which shows that F may not depend on any particular properties of elements of a base set E.
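The crisp side of the functional-application construction is simple to spell out in code. The following sketch is illustrative only: the quantifier "some", the department-assignment mapping and its values are hypothetical, and only crisp arguments are considered (the fuzzy side would additionally require a concrete induced extension principle).

```python
# Sketch: pulling a quantifier on E back to a quantifier on E' by applying
# the powerset mappings of f1, ..., fn to its arguments (crisp case of (3.8)).

def powerset_mapping(f):
    return lambda Y: frozenset(f(e) for e in Y)

def compose_with_maps(Q, *fs):
    hats = [powerset_mapping(f) for f in fs]
    return lambda *Ys: Q(*(hat(Y) for hat, Y in zip(hats, Ys)))

def some_q(Y1, Y2):
    return 1.0 if Y1 & Y2 else 0.0

# E' is a set of employees, E a set of departments, f assigns departments.
dept = {"ann": "sales", "bob": "it", "carl": "it"}.get
Q_prime = compose_with_maps(some_q, dept, dept)
# "Some department with a member in Y1 also has a member in Y2":
print(Q_prime({"ann"}, {"bob"}))   # 0.0 - sales vs it
print(Q_prime({"bob"}, {"carl"}))  # 1.0 - both map to it
```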
The sixth and last requirement that will be imposed on models of fuzzy quantification, i.e. the compatibility with functional application, thus enforces a coherent behaviour of F across domains.
3.9 Determiner Fuzzification Schemes: The DFS Axioms

The requirements on plausible models of fuzzy quantification are summarized in the following axiom system.

Definition 3.28 (Determiner fuzzification schemes).
A QFM F is called a determiner fuzzification scheme (DFS) if the following conditions are satisfied for all semi-fuzzy quantifiers Q : P(E)^n −→ I.

(Z-1) Correct generalisation: U(F(Q)) = Q if n ≤ 1.
(Z-2) Projection quantifiers: F(Q) = π̃e if Q = πe for some e ∈ E.
(Z-3) Dualisation: F(Q̃) = ¬̃(F(Q)¬̃) for n > 0, where Q̃ = ¬̃(Q¬) is the dual of Q.
(Z-4) Internal joins: F(Q∪) = F(Q)∪̃ for n > 0.
(Z-5) Preservation of monotonicity: if Q is nonincreasing in the n-th argument, then F(Q) is nonincreasing in the n-th argument, n > 0.
(Z-6) Functional application: F(Q ∘ ×_{i=1}^{n} f̂i) = F(Q) ∘ ×_{i=1}^{n} F̂(fi), where f1, . . . , fn : E′ −→ E, E′ ≠ ∅.

Notes 3.29.
•
The term ‘determiner fuzzification scheme’ (DFS) rather than ‘quantifier fuzzification scheme’ has been chosen to reduce the risk of confusion with quantifier fuzzification mechanisms (QFMs). However, a DFS is nothing but a special kind of QFM which satisfies the above conditions. • These conditions simply combine the six requirements introduced above which capture important expectations on models of fuzzy quantification. Some of the requirements have been restricted to special cases. This will reduce the effort of proving that a given candidate QFM F is indeed a model of the theory. Most importantly, the restriction to special cases avoids any overlap of the criteria. This is necessary to obtain a system of independent axioms. This consideration motivated the restriction of ‘correct generalisation’ (Z-1) to quantifiers of arities n ≤ 1, and the restriction of the monotonicity requirement to quantifiers which are nonincreasing in their last argument. The original, unrestricted requirements are of course entailed by the resulting axiom system; see Th. 4.1 and Th. 4.28 in the next chapter.
•
We have taken pains to avoid any reference to the standard connectives and the standard extension principle in the DFS axioms (in favor of the induced connectives and induced extension principle of a DFS) because the use of the standard choices would have excluded all models which induce F(∨) ≠ max and F(∧) ≠ min (see Th. 5.25 below). In order to maintain the possibility of 'non-standard models' based on fuzzy conjunctions different from min, we associated a canonical choice of fuzzy truth functions and a general extension principle to each QFM of interest. The constructions on quantifiers which depend on a fuzzy truth function or the extension principle have then been formulated in terms of these induced constructions. This strategy aims at a coherent or 'self-consistent' system compatible with its own induced constructions. • The original presentation of determiner fuzzification schemes in [53] was based on a set of nine DFS axioms. These axioms contained some interdependencies, though, which motivated the refinement of the original axioms into the equivalent axiom system presented above. See [55] for details on both axiom systems and the proof of their equivalence. The proposed axioms might appear rather abstract at first sight. This is because these axioms have resulted from a process of simplification and compression. The development of the axiom system started with the formalization of a large number of linguistic plausibility criteria (the full list will be discussed in the next three chapters). By eliminating redundancy and interdependencies, a generating system of basic criteria was then identified. This reduction was necessary to obtain a minimal (i.e. irreducible) axiom system of elementary criteria which can be easily verified for every QFM of interest. The remaining plausibility criteria which are not part of the axiom system must then be entailed by these core axioms. Let us now state that the final axiom system is indeed independent, i.e. the axioms (Z-1)–(Z-6) achieve a minimal characterization of the intended class of models: Theorem 3.30. The DFS axioms (Z-1)–(Z-6) are independent, i.e. none of the conditions is entailed by the remaining conditions. Thus each criterion identifies a dimension in the space of possible models which is significant with respect to their semantic plausibility.
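In practice, several of the axioms can be probed numerically for a candidate QFM on small universes. The sketch below is illustrative only: the "0.5-cut" QFM used here is a deliberately crude stand-in that is not claimed to be a DFS; it merely shows how checks in the spirit of (Z-1) and of monotonicity preservation (Def. 3.17, underlying (Z-5)) can be automated for whatever candidate mechanism one wishes to examine.

```python
# Sketch: probing correct generalisation and monotonicity preservation for a
# toy candidate QFM on a two-element universe.
from itertools import combinations, product

E = {1, 2}
LEVELS = [0.0, 0.5, 1.0]

def powerset(E):
    s = sorted(E)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def cut_qfm(Q):
    """Toy QFM: apply Q to the crisp 0.5-cuts of the fuzzy arguments."""
    def FQ(*Xs):
        cuts = [frozenset(e for e in E if X.get(e, 0.0) >= 0.5) for X in Xs]
        return Q(*cuts)
    return FQ

def some_q(Y1, Y2):
    return 1.0 if Y1 & Y2 else 0.0

F_some = cut_qfm(some_q)
crisp = lambda Y: {e: (1.0 if e in Y else 0.0) for e in E}

# (Z-1)-style probe: on crisp arguments, F(Q) must coincide with Q.
assert all(F_some(crisp(Y1), crisp(Y2)) == some_q(Y1, Y2)
           for Y1 in powerset(E) for Y2 in powerset(E))

# Monotonicity probe: "some" is nondecreasing in its last argument, so F(some)
# must not decrease when the memberships in the last argument are raised.
fuzzy_sets = [dict(zip(sorted(E), vals)) for vals in product(LEVELS, repeat=len(E))]
ok = all(F_some(X1, X2) <= F_some(X1, {e: max(X2[e], Y2[e]) for e in E})
         for X1 in fuzzy_sets for X2 in fuzzy_sets for Y2 in fuzzy_sets)
print("crisp coincidence and monotonicity probes passed:", ok)
```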
3.10 Chapter Summary

The novel quantification framework proposed in this book has introduced a rich class of potential models. In this chapter, we made an effort to identify the plausible models of fuzzy quantification within the proposed framework. In order to develop a general solution to the problem of systematic and coherent interpretation, we pursued a strategy which is essentially algebraic, i.e. the
plausible models were characterized in terms of observable properties of the considered QFM. It is thus assumed that all important aspects of plausible interpretations can be expressed by formal criteria which constrain the admissible choices of QFMs. The catalogue of formal semantic criteria will then guarantee that the interpretations of quantifiers be restricted to the plausible choices. Embarking on this general strategy, we considered various intuitive requirements on the models, which were then compiled into ‘DFS axioms’, the axiomatic framework for plausible models of fuzzy quantification. The relevance of the six basic postulates (Z-1)–(Z-6) can be sketched as follows. •
•
•
•
•
•
The first condition of ‘correct generalisation’ (Z-1) is mandatory to ensure the internal coherence of the quantification framework. Generally speaking, one will always require that the fuzzifications results determined by a fuzzification mechanism properly generalize the original constructs for crisp sets to which the fuzzification mechanism was applied. The semantical postulate (Z-2) which demands the proper interpretation of projection quantifiers makes explicit the relationship of crisp/fuzzy membership assessments and quantification. This requirement is essential from a linguistic perspective because it warrants the correct modelling of proper names like “Joan”, which are generally viewed as a special type of quantifiers in TGQ. The requirement of compatibility with dualisation (Z-3) is also important from a logical and linguistic viewpoint: When combined with the other axioms, it enforces the full compliance of the models with the important constructions of the dual, external negation and antonym of a quantifier. The condition on ‘internal joins’ (Z-4), which demands the conformance of the models to unions of arguments, can be exemplified by NL sentences like “Most men drink or smoke”. Apart from its linguistic motivation, the condition is of key relevance to the DFS axioms, because it is the only requirement which mediates the behaviour of a considered QFM for quantifiers of different arities. It is this condition which will achieve the desired cross-arity coherence of the model, thus contributing to the proper modelling of fuzzy multi-place quantification. The requirement of ‘monotonicity in arguments’ (Z-5) makes sure that the monotonicity type of a base quantifier translate to the fuzzy case. This is especially important in order to preserve the valid entailments which pertain to these monotonicity properties. The final criterion (Z-6) which demands the compatibility with functional application is not directly inspired from linguistics. Due to the fact that all other axioms refer to quantifiers on a single base set E only, it was necessary to add a requirement which links the behaviour of F across domains. The criterion of functional application is solely concerned with this issue of cross-domain coherence, which it traces back to the obvious desideratum of compositionality with respect to powerset mappings.
3.10 Chapter Summary
109
It should be apparent from this summary that the DFS axioms are wellmotivated and linguistically significant. As stated in Th. 3.30, the axioms are also mutually independent, i.e. irreducible to a smaller axiom set. In addition, the proposed axioms are consistent. It will be shown in the later Chaps. 7 to 10 that the axioms admit rich and interesting classes of models. However, an axiom set cannot be appropriately judged when looking at the axioms in isolation. The axiom system had to be rather compressed in order to achieve the desired minimality. This means that most semantical properties of interest are not included in the list of axioms, but rather entailed by the proposed axiom system. In the following chapters, we will therefore consider a larger catalogue of linguistic requirements and check the validity of each criterion in the models. In this way, we can validate the completeness of the axioms, i.e. their coverage of linguistic expectations on the interpretation of quantifiers. As opposed to the bona fide assumption of linguistic plausibility made by existing approaches, the proposal will then be subjected to a systematic linguistic evaluation, which substantiates that it indeed captures the plausible models of fuzzy quantification.
4 Semantic Properties of the Models
4.1 Motivation and Chapter Overview

In the last chapter, an axiom system for models of fuzzy quantification has been proposed. The system comprises six basic criteria for plausible interpretations of fuzzy quantifiers. The description of these models in terms of the postulates (Z-1)–(Z-6) is rather condensed, though. On the one hand, this offers the advantage of a minimal, non-redundant definition. However, the succinct description of the models makes it difficult to judge whether the requirements capture all important aspects of systematic and coherent interpretations. The following three chapters will provide evidence that this is indeed the case. The present chapter will be concerned with semantic desiderata which must be considered absolutely basic, i.e. mandatory for all models. Most of these criteria originate from the Theory of Generalized Quantifiers and thus reflect linguistic considerations on the models. Some other postulates will originate from logic. For example, we will discuss the proper interpretation of the universal and existential quantifiers. We will also be interested in constraints on the expected effects of fuzziness on the quantification results, which have something to say about our understanding of fuzziness in natural language. The present chapter will formalize all of these criteria and prove that they are valid in arbitrary models of the theory. The other two chapters in the series on semantical properties will develop concepts which apply to models in a given subclass (Chap. 5) and discuss advanced topics including more specialized properties of models and some problematic cases (Chap. 6).
4.2 Correct Generalisation

Let us start with the most basic requirement on any type of fuzzification mechanism, that of correct generalization. It is essential to the use of the
fuzzification pattern that the generalized model which results from the fuzzification mechanism consistently extend the crisp base model from which it is built [59, p. 419+]. In the case of quantifier fuzzification mechanisms, we would thus like the fuzzy quantifiers F(Q) that result from the mechanism to coincide with the original semi-fuzzy quantifiers Q on arbitrary crisp arguments. It was already shown on p. 87 how this requirement can be succinctly and it then expressed in terms of the ‘underlying semi-fuzzy quantifiers’ U(Q), becomes U(F(Q)) = Q. Acknowledging its importance, correct generalisation has been made an integral part of the DFS axioms, and We have required in (Z-1) that the equality U(F(Q)) = Q be valid for all nullary or unary quantifiers The restriction to quantifiers of arities n ≤ 1 helped to simplify the axiom system, thus shortening the required proofs that certain QFMs of interest are indeed models of the theory. Let us now overcome the restriction to n ≤ 1 and establish that F(Q) consistently extends Q, regardless of the n arity n ∈ N of the considered quantifier Q : P(E) −→ I. Theorem 4.1. n Suppose that F is a DFS and Q : P(E) −→ I is an n-ary semi-fuzzy quantifier. Then U(F(Q)) = Q, i.e. for all crisp subsets Y1 , . . . , Yn ∈ P(E), F(Q)(Y1 , . . . , Yn ) = Q(Y1 , . . . , Yn ) . For example, if E is a set of persons, and women, married ∈ P(E) are the crisp sets of “women” and “married persons” in E, then F(some)(women, married) = some(women, married) , i.e. the ‘fuzzy some’ obtained by applying F coincides with the (original) ‘crisp some’ whenever the latter is defined, which is of course highly desirable.
4.3 Properties of the Induced Truth Functions Let us now turn to the fuzzy truth functions induced by a DFS, which constitute the propositional part of its induced logic. The first theorem is concerned with the identity truth function id2 : 2 −→ 2 defined by id2 (0) = 0, id2 (1) = 1. Theorem 4.2 (Identity truth function). 2 ) = idI . In every DFS F, F(id The identity truth function is hence generalised to the fuzzy identity truth function idI : I −→ I which maps every gradual truth value x ∈ I to itself, which is quite satisfactory. As for negation, the standard choice in fuzzy logic is certainly ¬ : I −→ I, defined by ¬x = 1 − x for all x ∈ I. The essential properties of this prototypical choice of negation are captured by the following definition, which guides reasonable choices of negation operators.
4.3 Properties of the Induced Truth Functions
113
Definition 4.3 (Strong negation). ¬ : I −→ I is called a strong negation operator if it satisfies a. ¬ 0 = 1 (boundary condition) b. ¬ x1 ≥ ¬ x2 for all x1 , x2 ∈ I such that x1 ≤ x2 (i.e. ¬ is monotonically nonincreasing) is involutive). c. ¬ ◦¬ = idI (i.e. ¬ Notes 4.4. • Whenever the standard negation ¬x = 1−x is being assumed, we will drop the ‘tilde’-notation. Hence the standard fuzzy complement is denoted ¬X, where µ¬X (e) = 1 − µX (e). Similarly, the external negation of a (semi) fuzzy quantifier with respect to the standard negation is written ¬Q, and the antonym of a fuzzy quantifier with respect to the standard fuzzy complement is written as Q¬. • Apart from the standard choice, further examples of these negation operators are e.g. Sugeno’s λ-complements [154]. As witnessed by the following theorem, the fuzzy negation induced by a DFS F is plausible in the sense of belonging to the class of strong negation operators. Theorem 4.5 (Negation). In every DFS F, ¬ = F(¬) is a strong negation operator. With conjunction, there are several common choices in fuzzy logic (although the standard is certainly ∧ = min). All of these belong to the class of tnorms, which capture the expectations on reasonable conjunction operators, cf. Schweizer/Sklar [148], Klir/Yuan [99, p. 61+]. Definition 4.6 (t-norms). : I × I −→ I is called a t-norm if it satisfies the following conditions. ∧ 0 = 0, for all x ∈ I x∧ x ∧ 1 = x, for all x ∈ I x2 = x2 ∧ x1 for all x1 , x2 ∈ I (commutativity) x1 ∧ is x2 ≤ x1 ∧ x2 , for all x1 , x1 , x2 ∈ I (i.e. ∧ If x1 ≤ x1 , then x1 ∧ monotonically nondecreasing) x3 = x1 ∧ x2 ) ∧ (x2 ∧ x3 ), for all x1 , x2 , x3 ∈ I (associativity). e. (x1 ∧
a. b. c. d.
The prototypical t-norm is the standard fuzzy conjunction ∧ = min (it is the largest t-norm). Other well-known examples of t-norms are the algebraic a and the bounded product ∧ b , which are defined by product ∧ a x2 = x1 · x2 x1 ∧ b x2 = max(0, x1 + x2 − 1) x1 ∧ m , defined by for all x1 , x2 ∈ I. For a less well-known examplar, consider ∧
114
4 Semantic Properties of the Models
2x1 x2 m x2 = 1 + 2x1 x2 − x1 − x2 x1 ∧ min(x1 , x2 )
: max(x1 , x2 ) < 12 : min(x1 , x2 ) ≥ 12 : else
(4.1)
for all x1 , x2 ∈ I, see [53, p. 30]. Of course, there are many other t-norms; for a recent discussion of advanced topics and pointers to other publications, see e.g. [96, 113]. Let us now state that the fuzzy conjunction induced by a DFS is ‘reasonable’, in the sense of belonging to the class of t-norms. Theorem 4.7 (Conjunction). = F(∧) In every DFS F, ∧ is a t-norm. The dual concept of t-norm is that of an s-norm or t-co-norm, which expresses the essential properties of fuzzy disjunction operators. Definition 4.8 (s-norms). : I × I −→ I is called an s-norm if it satisfies the following conditions. ∨ 1 = 1, for all x ∈ I x∨ 0 = x, for all x ∈ I x∨ x2 = x2 ∨ x1 for all x1 , x2 ∈ I (commutativity) x1 ∨ is x2 ≤ x1 ∨ x2 , for all x1 , x1 , x2 ∈ I (i.e. ∨ If x1 ≤ x1 , then x1 ∨ monotonically nondecreasing) x3 = x1 ∨ x2 ) ∨ (x2 ∨ x3 ), for all x1 , x2 , x3 ∈ I (associativity). e. (x1 ∨
a. b. c. d.
Examples of s-norms are the standard choice ∨ = max (it is the smallest b defined by a and the bounded sum ∨ s-norm), the algebraic sum ∨ a x2 = x1 + x2 − x1 · x2 x1 ∨ b x2 = min(1, x1 + x2 ) x1 ∨ for all x1 , x2 ∈ I. Theorem 4.9 (Disjunction). is the dual s-norm of ∧ under x2 = ¬ ¬ In every DFS, x1 ∨ ( ¬ x1 ∧ x2 ), i.e. ∨ ¬ . Hence the fuzzy disjunction induced by a DFS is also plausible, and it is de and ¬ finable in terms of ∧ . A similar point can be made about the other two-place logical connectives, see [53, p. 29 and p. 32]. For example, the in and ¬ duced implication of a DFS can be expressed in terms of ∨ (and hence and ¬ also in terms of ∧ ): Theorem 4.10 (Implication). x2 . In every DFS, x1 → x2 = ¬ x1 ∨ and ¬ The only connectives which are not apparently reducible to ∧ , because their construction involves a subtle dependency between variables, are the antivalence xor and the equivalence ↔. We will consider these connectives later because they require more effort (see remarks on pp. 155 and 166).
4.4 A Different View of the Induced Propositional Logic
115
4.4 A Different View of the Induced Propositional Logic The definition of induced fuzzy truth functions presented in Def. 3.6 is not the only possible choice, and it has already been mentioned that an alternative construction exists which is equally straightforward. In fact, the first publication on DFS theory [53] relied on the construction which has now become ‘alternative’. It was only the desire to obtain an independent axioms system which later guided the decision in [55] to introduce a new construction. This new construction has now become standard, because it helped to eliminate some subtle interdependencies in the initial definition of the DFS axioms. In the following, we will review the principles that underly the ‘old’ construction of induced connectives. It will then be shown that the ‘new’ and ‘old’ constructions coincide in every DFS, which makes an additional point that these models are well-motivated. Hence both constructions provide different views of the induced propositional logic from their specific perspective. The alternative construction of induced truth function, which we define now, rests on the observation that (a) the set of crisp truth values 2 = {0, 1} and the powerset P({∗}) are isomorphic; and (b) the set of continuous truth values I = [0, 1] and the fuzzy powerset P({∗}) are also isomorphic, where {∗} is an arbitrary singleton set, e.g. {∗} = {∅}. Suitable mappings which establish these isomorphisms are the apparent bijections (a) π∗ : P({∗}) −→ 2; and (b) π ∗ : P({∗}) −→ I. The basic idea is to utilize the former bijection for a transfer of the original truth function to a semi-fuzzy quantifier, to which the considered QFM F can be applied. Now utilizing the latter bijection, the resulting fuzzy quantifier can then be translated into the desired fuzzy truth function. The alternative construction can thus be described as follows. Definition 4.11. Let a mapping f : 2n −→ I be given. We can view f as a semi-fuzzy quantifier n f ∗ : P({∗}) −→ I by defining f ∗ (Y1 , . . . , Yn ) = f (π∗ (Y1 ), . . . , π∗ (Yn )) .
for all Y1 , . . . , Yn ∈ P({∗}). By applying the considered QFM F, f ∗ is genn −→ I, from which we obtain a eralized to a fuzzy quantifier F(f ∗ ) : P({∗}) n fuzzy truth function F(f ) : I −→ I, F(f )(x1 , . . . xn ) = F(f ∗ )( π∗−1 (x1 ), . . . , π ∗−1 (xn )) for all x1 , . . . , xn ∈ I. More details on this construction can be found in [53]. As already remarked above, it results in the same canonical choice of fuzzy truth functions, if the considered F is a DFS. Theorem 4.12. In every DFS, F = F.
116
4 Semantic Properties of the Models
F can be distinct from F if F is not a DFS, though, and indeed the construc tion used in F provided better support for developing DFS theory further and extracting an independent system of axioms. The alternative definition F was less suited to accomplish this task because the construction of F(∧) and F(∨) using the ‘old’ construction involves multi-place quantification (n = 2), while the computation of F(∧) and F(∨) using the ‘new’ method is only based on one-place quantification. It is this simplification which made it possible to formalize DFS theory in terms of independent conditions and develop the current set of DFS axioms (Z-1)–(Z-6). Let us now discuss homomorphism properties of every DFS with respect to operations on the argument sets.
4.5 Argument Permutations Definition 4.13 (Argument permutations). n Let Q : P(E) −→ I be a semi-fuzzy quantifier and β : {1, . . . , n} −→ n {1, . . . , n} a permutation. By Qβ : P(E) −→ I we denote the semi-fuzzy quantifier defined by Qβ(Y1 , . . . , Yn ) = Q(Yβ(1) , . . . , Yβ(n) ) , : P(E) n −→ I, for all Y1 , . . . , Yn ∈ P(E). In the case of fuzzy quantifiers Q n : P(E) −→ I is defined analogously. the quantifier Qβ It is well-known that every permutation can be decomposed into a composition of transpositions. These express a single permutation step which simply swaps two selected elements in a given finite sequence: Definition 4.14 (Transpositions). For all n ∈ N (n > 0) and i, j ∈ {1, {1, . . . , n} −→ {1, . . . , n} is defined by i : τi, j (k) = j : k :
. . . , n}, the transposition τi, j : k=j k=i else
for all k ∈ {1, . . . , n}. Let us further stipulate a succinct notation for τi, n , which is abbreviated by τi = τi, n . The transposition τi has the effect of exchanging positions i and n. Notes 4.15. • The restricted class of transpositions τi , i ∈ {1, . . . , n}, is still capable of composing arbitrary permutations, because every transposition τi, j can be expressed as τi, j = τi ◦ τj ◦ τi .
4.5 Argument Permutations
•
117
In the case of these simple transpositions, Qτi (X1 , . . . , Xn ) becomes Qτi (X1 , . . . , Xn ) = Q(X1 , . . . , Xi−1 , Xn , Xi+1 , . . . , Xn−1 , Xi ) .
Due to the conceptual and notational simplicity of the transpositions τi , these offer the preferred representation of permutations that will be used throughout the monograph. The importance of argument permutations/transpositions is witnessed by the following examples. 2
1. There is a meaning of only : P(E) −→ 2 where only = allτ1 , i.e. only(Y1 , Y2 ) = all(Y2 , Y1 ) for all Y1 , Y2 ∈ P(E). For example, if E is a set of persons, men ∈ P(E) the set of those v ∈ E which are men, and smokers ∈ P(E) is the set1 of persons who are smokers, then only(men, smokers) = allτ1 (men, smokers) = all(smokers, men) , i.e. the meaning of “Only men are smokers” coincides with that of “All smokers are men”. 2. Argument transpositions render it possible to express symmetry proper2 ties of quantifiers. For example, the quantifier some : P(E) −→ 2 is symmetrical in its arguments, which can be stated as some = someτ1 . This warrants that in the above domain of persons/smokers, some(men, smokers) = someτ1 (men, smokers) = some(smokers, men) ,
which expresses our intuition that the meanings of “Some men are smokers” and “Some smokers are men” coincide. As stated by the following theorem, the DFS axioms ensure that all considered models of fuzzy quantification commute with argument transpositions, i.e. the corresponding constructions based on fuzzy quantifiers and fuzzy arguments will carry the desired semantics. Theorem 4.16. Every DFS F is compatible with argument transpositions, i.e. for every a n semi-fuzzy quantifier Q : P(E) −→ I i ∈ {1, . . . , n}, F(Qτi ) = F(Q)τi .
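On crisp arguments, the transposition construction and the symmetry of "some" can be illustrated with a short sketch (illustrative only; the example sets of men and smokers are hypothetical).

```python
# Sketch: the transposition construction applied to two-place quantifiers,
# on crisp arguments.  tau_i swaps the i-th and the last argument.

def transpose(Q, i, n):
    """Return Q o tau_i for an n-place quantifier Q (i is 1-based)."""
    def Qt(*Ys):
        args = list(Ys)
        args[i - 1], args[n - 1] = args[n - 1], args[i - 1]
        return Q(*args)
    return Qt

def all_q(Y1, Y2):  return 1.0 if Y1 <= Y2 else 0.0
def some_q(Y1, Y2): return 1.0 if Y1 & Y2 else 0.0

only_q = transpose(all_q, 1, 2)          # only(Y1, Y2) = all(Y2, Y1)
men, smokers = {"al", "bo"}, {"bo"}
print(only_q(men, smokers))              # 1.0: all smokers are men
print(some_q(men, smokers) == transpose(some_q, 1, 2)(men, smokers))  # symmetry
```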
1
we shall assume that this set be crisp for the sake of argument
118
4 Semantic Properties of the Models
Recalling that permutations can be expressed as a sequence of transpositions, Th. 4.16 actually ensures that F commutes with arbitrary permutations of the arguments of a quantifier. In particular, symmetry properties of a quantifier Q carry over to its fuzzified analogue F(Q). For example, it holds in every DFS that F(some) = F(some)τ1 and thus F(some)(rich, young) = F(some)(young, rich) , i.e. the meaning of “Some rich people are young” and “Some young people are rich” coincide. Of course, we also obtain that F(only)(old, rich) = F(all)τ1 (old, rich) = F(all)(rich, old) , i.e. “Only old people are rich” means that “All rich people are old”, where the extensions old, rich ∈ P(E) of old and rich people are now assumed to be fuzzy.
4.6 Cylindrical Extensions Let us now consider another property related to the argument structure of quantifiers. In the case that an argument is ‘redundant’ or ‘vacuous’, by having no effect on the actual outcome of a semi-fuzzy quantifier Q, we would certainly expect that the argument will also have no effect on the quantification results obtained from the associated fuzzy quantifier F(Q). This intuitive criterion can be formalized as follows. Definition 4.17. Let F be a QFM. We say that F is compatible with cylindrical extensions if n the following condition holds for every semi-fuzzy quantifier Q : P(E) −→ I. n If Q : P(E) −→ I is defined by Q (Y1 , . . . , Yn ) = Q(Yi1 , . . . , Yin ) for all Y1 , . . . , Yn ∈ P(E), where n ∈ N, n ≥ n and i1 , . . . , in ∈ {1, . . . , n } such that 1 ≤ i1 < i2 < · · · < in ≤ n , then F(Q )(X1 , . . . , Xn ) = F(Q)(Xi1 , . . . , Xin ) , for all X1 , . . . , Xn ∈ P(E). This property of being compatible with cylindrical extensions is very fundamental. It simply states that vacuous argument positions of a quantifier 4 can be eliminated. For example, if Q : P(E) −→ I is a semi-fuzzy quantifier and if there exists a semi-fuzzy quantifier Q : P(E) −→ I such that Q (Y1 , Y2 , Y3 , Y4 ) = Q(Y3 ) for all Y1 , . . . , Y4 ∈ P(E), then we know that Q does not really depend on all arguments; it is apparent that the choice of
4.7 Negation and Antonyms
119
Y1 , Y2 and Y4 has no effect on the quantification result. In this case, it is straightforward to require that F(Q )(X1 , X2 , X3 , X4 ) = F(Q)(X3 ) for all X1 , . . . , X4 ∈ P(E), i.e. F(Q ) is also independent of X1 , X2 , X4 , and it can be computed from F(Q). Theorem 4.18. Every DFS F is compatible with cylindrical extensions. Hence every model of DFS theory fulfills a property which is vital to the plausible analysis of multi-place quantification.
4.7 Negation and Antonyms

So far we have considered the elementary adequacy properties of QFMs under the semantical constructions of permuting and vacuously extending arguments. Despite the rather abstract appearance of these criteria, they are essential for the approach to be internally consistent. We will now turn to properties that also show up on the linguistic surface. We first consider those constructions that involve a negation step, which can either be applied inside the quantifying expression (internal complementation of arguments, antonyms), or applied from 'outside' to the quantifying expression as a whole (external negation). The compatibility of the models of DFS theory with these semantical constructions is in each case ensured by their known compatibility with dualisation (Z-3) and by the elementary properties underlying the framework that were discussed above.

Theorem 4.19.
Every DFS F is compatible with the formation of antonyms. Thus if Q : P(E)^n −→ I is a semi-fuzzy quantifier of arity n > 0, then F(Q¬) = F(Q)¬̃.

The theorem guarantees e.g. that F(all)(rich, ¬̃ lucky) = F(no)(rich, lucky). Let us notice that by Th. 4.16, the theorem generalises to arbitrary argument positions. Hence every DFS is fully compatible with the complementation of arguments.

Theorem 4.20.
Every DFS F is compatible with the negation of quantifiers. Hence if Q : P(E)^n −→ I is a semi-fuzzy quantifier, then F(¬Q) = ¬̃ F(Q).

For example, the theorem ensures that F(at most 10)(young, rich) = ¬̃ F(more than 10)(young, rich) because at most 10 = ¬ more than 10, i.e. "at most 10" is the negation of "more than 10".
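The two negation steps can be made concrete in a few lines of code. The sketch below is illustrative only: the standard negation 1 − x stands in for the induced negation of a particular model, and the helper names are not from the book. It forms the antonym Q¬ by complementing the last argument and the external negation ¬Q by negating the result, and checks that no = all¬ on crisp arguments.

```python
# Sketch of internal complementation (antonym) and external negation
# for quantifiers on a fixed finite base set E (standard negation 1 - x).

E = frozenset({"a", "b", "c", "d"})

def q_all(Y1, Y2):
    return 1.0 if Y1 <= Y2 else 0.0

def q_no(Y1, Y2):
    return 1.0 if not (Y1 & Y2) else 0.0

def antonym(Q, E):
    """Q¬ : complement the last argument within E."""
    def Qa(*args):
        *rest, last = args
        return Q(*rest, E - last)
    return Qa

def ext_negation(Q):
    """¬Q : negate the quantification result."""
    return lambda *args: 1.0 - Q(*args)

rich = frozenset({"a", "b"})
lucky = frozenset({"b", "c"})

# no = all¬ : "No rich are lucky" = "All rich are non-lucky"
assert q_no(rich, lucky) == antonym(q_all, E)(rich, lucky)
# "at most 10" = ¬("more than 10") would be formed with ext_negation analogously.
assert ext_negation(q_all)(rich, lucky) == 1.0 - q_all(rich, lucky)
```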
We can summarize (Z-3), Th. 4.19 and Th. 4.20 as ensuring that every DFS preserves Aristotelian squares.² I.e. if Q : P(E)^n −→ I is an arbitrary semi-fuzzy quantifier, then the Aristotelian square spanned by Q, its antonym Q¬, its dual Q□ and its external negation ¬Q — with horizontal edges given by the formation of antonyms, vertical edges by dualisation, and diagonals by external negation —

   Q  ----antonym----  Q¬
   |                    |
  dual   (diagonals: external negation)   dual
   |                    |
   Q□ ----antonym----  ¬Q

is transported by F to the corresponding square on fuzzy quantifiers,

   F(Q)  ----antonym----  F(Q¬)
   |                        |
  dual   (diagonals: external negation)   dual
   |                        |
   F(Q□) ----antonym----  F(¬Q)

where the indicated relations of antonymy, negation and duality are still valid, i.e. which coincides with the square built directly from F(Q) by means of the induced fuzzy operations,

   F(Q)   ----antonym----  F(Q)¬̃
   |                         |
  dual   (diagonals: external negation)   dual
   |                         |
   F(Q)□̃ ----antonym----  ¬̃F(Q) .

² Because all relations displayed in the Aristotelian square are bidirectional, it is presumed that ¬̃ = F(¬) is involutive. However, we already know from Th. 4.5 that ¬̃ is even a strong negation operator.
Every DFS also commutes with the Piaget group of transformations, the relevance of which stems from empirical findings in developmental psychology. (For a note on the importance of the Piaget group of transformations to fuzzy logic, see Dubois & Prade [41, p. 158+]). The Piaget group corresponds to the formation of

   I(Q) = Q                                   (identity)
   N(Q) = ¬Q                                  (negation)
   R(Q) = Qτ1¬τ1 . . . τn¬τn                  (reciprocity)
   C(Q) = ¬R(Q) ,                             (correlativity)

where Q : P(E)^n −→ I is a semi-fuzzy quantifier (analogously for fuzzy quantifiers).³ This is obvious from the definability of I, N, R, C in terms of external negation, internal complementation and argument transpositions, with which every DFS is compatible by Th. 4.20, Th. 4.19 and Th. 4.16.

³ The operator-based definition of R(Q) might look complicated. Given a choice of arguments X1, . . . , Xn ∈ P(E), this expression simply becomes R(Q)(X1, . . . , Xn) = Q(¬X1, . . . , ¬Xn).
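For orientation, here is a small sketch of the four Piaget transformations for a two-place crisp quantifier, using 1 − x for negation and ordinary set complement (names are illustrative): I is the identity, N negates the result, R complements every argument, and C combines both.

```python
# Sketch of the Piaget group I, N, R, C acting on a two-place quantifier
# over a fixed finite base set E (crisp arguments, standard negation).

E = frozenset(range(6))

def q_some(Y1, Y2):
    return 1.0 if Y1 & Y2 else 0.0

def I(Q):                       # identity
    return Q

def N(Q):                       # (external) negation
    return lambda Y1, Y2: 1.0 - Q(Y1, Y2)

def R(Q):                       # reciprocity: complement all arguments
    return lambda Y1, Y2: Q(E - Y1, E - Y2)

def C(Q):                       # correlativity: C(Q) = ¬R(Q)
    return lambda Y1, Y2: 1.0 - R(Q)(Y1, Y2)

Y1, Y2 = frozenset({0, 1, 2}), frozenset({2, 3})
for T in (I, N, R, C):
    print(T.__name__, T(q_some)(Y1, Y2))

# Group structure: applying N twice (or R twice) gives the identity back.
assert N(N(q_some))(Y1, Y2) == q_some(Y1, Y2)
assert R(R(q_some))(Y1, Y2) == q_some(Y1, Y2)
```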
4.8 Symmetrical Difference

We already know from Th. 4.19 that every DFS is compatible with argument-wise complementation. Let us now establish that F respects an even more fine-grained application of the negation operator.
Definition 4.21.
Let Q : P(E)^n −→ I be a semi-fuzzy quantifier (n > 0) and A ∈ P(E) a crisp subset of E. By Q△A : P(E)^n −→ I we denote the semi-fuzzy quantifier defined by

   Q△A(Y1, . . . , Yn) = Q(Y1, . . . , Yn−1, Yn △ A)

for all Y1, . . . , Yn ∈ P(E), where △ denotes the symmetrical set difference. For fuzzy quantifiers Q̃, we define Q̃△̃A analogously, where the fuzzy symmetrical difference X1 △̃ X2 ∈ P̃(E) is defined by µX1△̃X2(e) = µX1(e) x̃or µX2(e) for all e ∈ E.

By using the symmetrical difference, we can negate the membership values of only part of the elements of an argument set (namely of those which are contained in A). Hence if Q : P(E)^n −→ I is a semi-fuzzy quantifier, then Q△A(Y1, . . . , Yn) = Q(Y1, . . . , Yn−1, Z) where

   χZ(e) = ¬χYn(e) if e ∈ A,   χZ(e) = χYn(e) if e ∉ A,

for all Y1, . . . , Yn, A ∈ P(E) and e ∈ E. Likewise if Q̃ : P̃(E)^n −→ I is a fuzzy quantifier and A ∈ P(E) is crisp, then Q̃△̃A(X1, . . . , Xn) = Q̃(X1, . . . , Xn−1, Z) where

   µZ(e) = ¬̃ µXn(e) if e ∈ A,   µZ(e) = µXn(e) if e ∉ A,

for all X1, . . . , Xn ∈ P̃(E) and e ∈ E. Let us now state that every DFS is compatible even with this more fine-structured type of negation.

Theorem 4.22.
Suppose F is a DFS. Then for every semi-fuzzy quantifier Q : P(E)^n −→ I (n > 0) and every crisp subset A ∈ P(E), F(Q△A) = F(Q)△̃A.
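The fuzzy case of this construction is easy to sketch. The code below is illustrative only: fuzzy sets are dicts of membership values, the standard negation 1 − x stands in for the induced negation, and the demo quantifier is a toy example rather than any quantifier from the book.

```python
# Sketch of Q△A: negate the membership of exactly those elements of the
# last argument that lie in a crisp set A (fuzzy case, standard negation).

def sym_diff_arg(X, A, E):
    """Fuzzy set Z with mu_Z(e) = 1 - mu_X(e) for e in A, else mu_X(e)."""
    return {e: (1.0 - X.get(e, 0.0)) if e in A else X.get(e, 0.0) for e in E}

def q_delta(Q, A, E):
    """Q△A for a two-place fuzzy quantifier Q."""
    return lambda X1, X2: Q(X1, sym_diff_arg(X2, A, E))

E = {"a", "b", "c"}
A = {"b", "c"}                               # only these memberships get negated
X1 = {"a": 1.0, "b": 0.7, "c": 0.2}
X2 = {"a": 0.4, "b": 0.9, "c": 0.0}

def q_min_overlap(X1, X2):                   # a toy fuzzy quantifier for the demo
    return max(min(X1.get(e, 0.0), X2.get(e, 0.0)) for e in E)

# Memberships of the modified second argument: a unchanged, b and c negated.
print(q_delta(q_min_overlap, A, E)(X1, X2))
```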
4.9 Intersections of Arguments

The DFS axiom (Z-4) explicitly requires that a plausible model of fuzzy quantification be compatible with unions of arguments. In addition, Th. 4.19 has shown that the models of DFS theory preserve antonymy, and are thus compatible with the complementation of arguments as well. Now we will turn to the construction of intersecting arguments, which has not been considered so far. It is again convenient to introduce an operator-based notation.
Definition 4.23 (Internal meets).
Suppose Q : P(E)^n −→ I is a semi-fuzzy quantifier, n > 0. The semi-fuzzy quantifier Q∩ : P(E)^(n+1) −→ I is defined by

   Q∩(Y1, . . . , Yn+1) = Q(Y1, . . . , Yn−1, Yn ∩ Yn+1) ,

for all Y1, . . . , Yn+1 ∈ P(E). In the case of a fuzzy quantifier Q̃ : P̃(E)^n −→ I, Q̃∩̃ : P̃(E)^(n+1) −→ I is defined analogously, based on the induced intersection ∩̃.

The compatibility of a QFM with intersections in the arguments of a quantifier is then expressed in the apparent way, i.e. F should satisfy F(Q∩) = F(Q)∩̃ for all Q : P(E)^n −→ I of arity n > 0. Knowing that every DFS commutes with unions of arguments, it is not surprising that we also obtain a positive result in the dual case of intersections:

Theorem 4.24. Every DFS F is compatible with the intersection of arguments.

For example, F(some) = F(∃)∩̃, because the two-place quantifier some can be expressed as some = ∃∩. By Th. 4.16, then, every DFS is also known to commute with intersections in arbitrary argument positions. For example, consider the NL statement "Most of the young and rich are tall". In order to interpret this statement and construct its meaning from the fuzzy extensions young, rich and tall ∈ P̃(E), we use a decomposition into the quantifier Q' defined by Q'(Y1, Y2, Y3) = most(Y1 ∩ Y2, Y3), which expresses "Most Y1's and Y2's are Y3's". It is then reasonable to expect that the above statement can be interpreted by applying the resulting fuzzy quantifier F(Q') to the arguments young, rich and tall. This is exactly where the considered property applies, and we conclude that indeed F(Q')(young, rich, tall) = F(most)(young ∩̃ rich, tall), as desired.
4.10 Argument Insertion

Let us now consider the operation of inserting an argument into a quantifier. By isolating the insertion of a single argument, we obtain the following construction on semi-fuzzy quantifiers:

Definition 4.25 (Argument insertion).
Let Q : P(E)^n −→ I be a semi-fuzzy quantifier of arity n > 0 and A ∈ P(E). By Q◁A : P(E)^(n−1) −→ I we denote the semi-fuzzy quantifier defined by

   Q◁A(Y1, . . . , Yn−1) = Q(Y1, . . . , Yn−1, A)

for all Y1, . . . , Yn−1 ∈ P(E). (An analogous definition of Q̃◁A is assumed for fuzzy quantifiers.)
Note 4.26. As usual, attention is placed on the last argument. The insertion of other arguments can be modelled by a preceding argument transposition. For example, we can insert A into the (n−1)-th argument slot by constructing Qτn−1◁A. The insertion of multiple arguments can then be decomposed into a sequence of single argument insertions.

The significance of argument insertion mainly stems from its ability to model an important natural language construction known as adjectival restriction, see e.g. [9, p. 448] and [50, p. 247]. To present an example, an NL sentence like "Many married men have children" can be interpreted by inserting the arguments men, have children ∈ P(E), which represent the extensions of the NL concepts "men" and "have children", respectively, into the composite quantifier "Many married Y1's are Y2's". The involved composite quantifier is then said to be constructed from the base quantifier "many" by 'adjectival restriction', in this case based on the crisp adjective "married". To see how the above construction of argument insertion supports the modelling of adjectival restriction, let us simply notice that the construction of the composite quantifier Q' corresponds to an intersection with the denotation of the adjective, in this case Q'(Y1, Y2) = many(Y1 ∩ married, Y2). Adjectival restriction can hence be decomposed into the constructions of intersecting argument sets and the insertion of constant arguments like "married". In terms of the relevant operators on quantifiers, we can then express the composite quantifier Q' as Q' = Qτi∩◁Aτi. This has the desired effect of restricting the i-th argument of the semi-fuzzy quantifier Q to a considered subset A ∈ P(E), which usually results from the interpretation of a crisp extensional adjective like "married".

The concept of argument insertion is also important because it allows an incremental interpretation of complex expressions in the sense of Frege's principle of compositionality (see e.g. Gamut [50, p. 140]), which states that the meaning of a complex expression is a function of the meanings of its subexpressions. To provide an example (which again illustrates the mechanism of restricting an argument), let us consider the sentence "Most male persons are married". Suppose that E is a base set, and that person, male, married ∈ P(E) are the extensions of "person", "male" and "married" in E, respectively. Informally, we can think of this sentence as being interpreted in the following steps.⁴

   Q    = Most X1's are X2's
   Q'   = Most male Y1's are Y2's
   Q''  = Most male persons are Z's
   Q''' = Most male persons are married
⁴ This decomposition only serves illustrative purposes. A formal linguist is likely to prefer a different decomposition of the above sentence.
Ever since R. Montague published his influential work on 'the proper treatment of quantification in ordinary English' [115] (reprinted in [163]), formal linguists would typically model the incremental interpretation process in some variant of the typed λ-calculus. This might look roughly⁵ as follows:

   Q    = λX1 X2 . most(X1, X2) = most
   Q'   = λY1 Y2 . most(Y1 ∩ male, Y2)
   Q''  = λZ . Q'(person, Z)
   Q''' = Q''(married) .

By means of the operators defined on semi-fuzzy quantifiers, we can recast this as

   Q    = most
   Q'   = most τ1∩◁male τ1
   Q''  = Q'τ1◁person
   Q''' = Q''◁married .

The following theorem establishes the compatibility of a DFS with the insertion of arguments like "married".

Theorem 4.27.
Every DFS F is compatible with argument insertions, i.e. F(Q◁A) = F(Q)◁A for all semi-fuzzy quantifiers Q : P(E)^n −→ I of arity n > 0 and all crisp subsets A ∈ P(E).

The theorem thus states that every DFS commutes with the insertion of crisp arguments. It is limited in scope to the case of crisp arguments because, given a semi-fuzzy quantifier Q : P(E)^n −→ I and a fuzzy subset A ∈ P̃(E), Q◁A would be undefined. (Q is a semi-fuzzy quantifier and hence defined on crisp arguments only!) Nevertheless, the axiom ensures that

   F(all τ1∩◁married τ1)(men, lucky) = F(all)(married ∩̃ men, lucky)

provided that married ∈ P(E) is crisp, i.e. we obtain the same result when applying F to the composite quantifier Q' = "All married Y1's are Y2's" and then evaluating F(Q')(men, lucky), or when first applying F to all and then inserting the extensions of "married men" and "lucky". The theorem is also useful because it ensures that boundary conditions (with respect to the crisp case) are valid. For example, it is apparent from the theorem that
   x1 ∧̃ 1 = x1

for all x1 ∈ I, which is one of the defining conditions of t-norms.

A direct modelling of adjectival restriction with a fuzzy adjective is not possible because fuzzy arguments cannot be inserted into a semi-fuzzy quantifier. Therefore a construction different from Q◁A is needed to handle fuzzy adjectival restriction, which will depend on the chosen model F. It will be shown later in Sect. 6.8 how the insertion of fuzzy arguments into semi-fuzzy quantifiers can be modelled in the framework of DFS theory. The latter construction will then permit a compositional interpretation of fuzzy adjectival restriction in dedicated models as well.

⁵ For convenience, we will use a variant of the typed λ-calculus which offers product types. In monadic type theory, the expressions might look somewhat different.
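The decomposition Q' = Qτi∩◁Aτi can be traced step by step in code. The following sketch uses toy helpers (not the book's machinery) and a simple proportional reading of most; it builds the composite quantifier by transposition, internal meet and argument insertion, and checks that it agrees with intersecting the restricted argument directly.

```python
# Sketch of adjectival restriction for a semi-fuzzy quantifier:
# "Most married Y1's are Y2's"  =  most(Y1 ∩ married, Y2).

def most(Y1, Y2):
    """Toy proportional reading: more than half of Y1 lies in Y2 (1 on empty Y1)."""
    if not Y1:
        return 1.0
    return 1.0 if len(Y1 & Y2) / len(Y1) > 0.5 else 0.0

def internal_meet(Q):
    """Q∩ : intersect the last two arguments before applying Q."""
    return lambda *args: Q(*args[:-2], args[-2] & args[-1])

def insert_arg(Q, A):
    """Q◁A : insert the crisp set A as the last argument of Q."""
    return lambda *args: Q(*args, A)

def tau1(Q):
    """Swap the first and the last argument."""
    return lambda *args: Q(*(args[-1:] + args[1:-1] + args[:1]))

married = frozenset({"ann", "bob"})
men = frozenset({"bob", "carl", "dave"})
have_children = frozenset({"bob", "ann"})

# Composite quantifier "Most married Y1's are Y2's", built by operators
# in the order tau1, internal meet, insertion of 'married', tau1.
composite = tau1(insert_arg(internal_meet(tau1(most)), married))
assert composite(men, have_children) == most(men & married, have_children)
```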
4.11 Monotonicity Properties

In this section, we will discuss monotonicity properties of quantifiers, i.e. semantical characteristics that can be expressed through a comparison of gradual quantification results under the natural order '≤'. Such comparisons are of obvious relevance to logic because small results (close to 'false') generally reflect stronger conditions, while a weakening of conditions shows up in larger membership grades (close to 'true').

Theorem 4.28.
Suppose F is a DFS and Q : P(E)^n −→ I. Then Q is monotonically nondecreasing (nonincreasing) in its i-th argument (i ≤ n) if and only if F(Q) is monotonically nondecreasing (nonincreasing) in its i-th argument.

Compared to the original postulate (Z-5), which is restricted to nonincreasing monotonicity in the last argument, the theorem now covers both types of monotonicity in arbitrary argument positions. For example, some : P(E)^2 −→ 2 is monotonically nondecreasing in both arguments. By the theorem, then, F(some) : P̃(E)^2 −→ I is nondecreasing in both arguments also. In particular,

   F(some)(young men, very tall) ≤ F(some)(men, tall) ,

i.e. "Some young men are very tall" entails "Some men are tall", if young men ⊆ men and very tall ⊆ tall.

So far, we have only considered global monotonicity properties, i.e. monotonicity properties which hold unconditionally and for arbitrary choices of argument sets. In some cases, it can also be instructive to consider monotonicity properties which hold only locally, in a specified range of argument sets.

Definition 4.29.
Let Q : P(E)^n −→ I and U, V ∈ P(E)^n be given. We say that Q is locally nondecreasing in the range (U, V) if for all Y1, . . . , Yn, Y1', . . . , Yn' ∈ P(E) such that Ui ⊆ Yi ⊆ Yi' ⊆ Vi (i = 1, . . . , n), we have Q(Y1, . . . , Yn) ≤
Q(Y1', . . . , Yn'). We will say that Q is locally nonincreasing in the range (U, V) if under the same conditions, Q(Y1', . . . , Yn') ≤ Q(Y1, . . . , Yn). On fuzzy quantifiers, local monotonicity is defined analogously, but the arguments are taken from P̃(E), and '⊆' is the fuzzy inclusion relation.

To present an example, consider the proportional quantifier

   more than 10 percent = [rate > 0.1] ,

which is neither nonincreasing nor nondecreasing in its first argument. Nevertheless, some characteristics of the quantifier express themselves in its local monotonicity properties. For example, suppose that A, B ∈ P(E) are subsets of E, where A is assumed to be nonempty. Then more than 10 percent is locally nonincreasing in the range ((A, B), (A ∪ ¬B, B)), and it is locally nondecreasing in the range ((A, B), (A ∪ B, B)). A frequently observed case of local monotonicity properties is that of a quantifier which is locally constant in some range (U, V), i.e. both nondecreasing and nonincreasing. Hence let again E ≠ ∅ be some finite base set, e ∈ E an arbitrary element of E, and r ∈ (0, 1]. Then [rate ≥ r] is locally constant in the range (U, V), where U = ({e}, ∅) and V = (E, ∅). If {e} ⊆ Y1 ⊆ E and ∅ ⊆ Y2 ⊆ ∅, i.e. Y2 = ∅, we always have

   |Y1 ∩ ∅| / |Y1| = |∅| / |Y1| = 0 / |Y1| = 0 ,

and hence [rate ≥ r](Y1, ∅) = 0.

It is natural to require that F preserves such local monotonicity properties, i.e. if Q : P(E)^n −→ I is locally nondecreasing (nonincreasing) in some range (U, V), then we expect F(Q) : P̃(E)^n −→ I to be nondecreasing (nonincreasing) in that range as well. Let us now state that every DFS preserves monotonicity properties of semi-fuzzy quantifiers even if these hold only locally, i.e. all considered models of fuzzy quantification comply with this requirement:

Theorem 4.30.
Suppose F is a DFS, Q : P(E)^n −→ I a semi-fuzzy quantifier and U, V ∈ P(E)^n. Then Q is locally nondecreasing (nonincreasing) in the range (U, V) if and only if F(Q) is locally nondecreasing (nonincreasing) in the range (U, V).

The theorem therefore ensures that those characteristics of a quantifier which become visible through its local monotonicity properties are preserved when applying a DFS. Hence in the second example above, we obtain that F([rate ≥ r])(X1, ∅) = 0 for all fuzzy subsets X1 with nonempty core, which is quite satisfying.

The models can also be shown to be monotonic in the sense of preserving inequalities between quantifiers. Let us firstly define a partial order ≤ on (semi-)fuzzy quantifiers.
Definition 4.31.
Suppose Q, Q' : P(E)^n −→ I are semi-fuzzy quantifiers. Let us write Q ≤ Q' if for all Y1, . . . , Yn ∈ P(E), Q(Y1, . . . , Yn) ≤ Q'(Y1, . . . , Yn). On fuzzy quantifiers, we define ≤ analogously, based on arguments in P̃(E).

For example, [rate > 0.5] ≤ [rate > 0.2], which reflects our intuition that "More than 50 percent of the Y1's are Y2's" is a stronger condition than "More than 20 percent of the Y1's are Y2's".

Theorem 4.32.
Suppose that F is a DFS and let Q, Q' : P(E)^n −→ I be semi-fuzzy quantifiers. Then Q ≤ Q' if and only if F(Q) ≤ F(Q').

The theorem ensures that inequalities between quantifiers carry over to the corresponding fuzzy quantifiers. Hence

   F(more than 50 percent)(blonde, tall) ≤ F(more than 20 percent)(blonde, tall) ,
as desired. Let us also consider another example, which illustrates the utility of cylindrical extensions. We firstly observe that by Def. 4.31, Q ≤ Q' is defined only if Q and Q' have the same arity n. But it may be useful to express inequalities also in the case of quantifiers involving a different number of arguments. For example, all∩ : P(E)^3 −→ 2 can be said to be smaller than all : P(E)^2 −→ 2 in the sense that

   all∩(Y1, Y2, Y3) = all(Y1, Y2 ∩ Y3) ≤ all(Y1, Y2) ,

for all Y1, Y2, Y3 ∈ P(E). Let us now establish that this kind of inequality is preserved by every DFS F. We define the following cylindrical extension Q : P(E)^3 −→ 2 of all : P(E)^2 −→ 2, viz

   Q(Y1, Y2, Y3) = all(Y1, Y2) ,

for all Y1, Y2, Y3 ∈ P(E). This derived semi-fuzzy quantifier Q has the same arity as all∩, which means that the above theorem Th. 4.32 is now applicable. We can further utilize the earlier theorem Th. 4.18 and conclude that

   F(all)(X1, X2) = F(Q)(X1, X2, X3)          (by Th. 4.18)
                  ≥ F(all∩)(X1, X2, X3)        (by Th. 4.32)
                  = F(all)(X1, X2 ∩̃ X3)

for all X1, X2, X3 ∈ P̃(E). In particular, if old, bald, rich ∈ P̃(E) are the (fuzzy) extensions of "old", "bald" and "rich", respectively, in our example universe E, then

   F(all)(old, bald ∩̃ rich) ≤ F(all)(old, bald) ,

i.e. "All old are bald and rich" expresses a stronger condition than "All old are bald".
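The global ordering Q ≤ Q' of Definition 4.31 can be checked by brute force on a small finite domain. The sketch below is illustrative only (the convention for an empty restriction is an assumption of the demo, not the book's); it defines the proportional quantifiers [rate > 0.5] and [rate > 0.2] and verifies the ordering over all pairs of crisp arguments.

```python
# Brute-force check of the ordering [rate > 0.5] <= [rate > 0.2]
# over all crisp argument pairs of a small base set.

from itertools import chain, combinations

def powerset(E):
    s = list(E)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def rate_gt(threshold):
    """Semi-fuzzy proportional quantifier [rate > threshold] (two-place)."""
    def Q(Y1, Y2):
        if not Y1:
            return 0.0                       # convention chosen for the demo
        return 1.0 if len(Y1 & Y2) / len(Y1) > threshold else 0.0
    return Q

E = frozenset({1, 2, 3, 4})
more_than_50 = rate_gt(0.5)
more_than_20 = rate_gt(0.2)

assert all(more_than_50(Y1, Y2) <= more_than_20(Y1, Y2)
           for Y1 in powerset(E) for Y2 in powerset(E))
print("[rate > 0.5] <= [rate > 0.2] holds on all argument pairs")
```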
Definition 4.33.
Suppose Q1, Q2 : P(E)^n −→ I are semi-fuzzy quantifiers and U, V ∈ P(E)^n. We say that Q1 is (not necessarily strictly) smaller than Q2 in the range (U, V), in symbols: Q1 ≤(U,V) Q2, if for all Y1, . . . , Yn ∈ P(E) such that U1 ⊆ Y1 ⊆ V1, . . . , Un ⊆ Yn ⊆ Vn,

   Q1(Y1, . . . , Yn) ≤ Q2(Y1, . . . , Yn) .

On fuzzy quantifiers, Q̃1 ≤(U,V) Q̃2 is defined analogously, based on arguments in P̃(E) and the fuzzy inclusion relation '⊆'.

For example, the two-place quantifier "all" is smaller than "some" whenever the first argument is nonempty, i.e.

   all ≤(({e},∅),(E,E)) some ,

for all e ∈ E. As we will now state, every model preserves inequalities between quantifiers even if these hold only locally.

Theorem 4.34.
Let F be a DFS, Q1, Q2 : P(E)^n −→ I and U, V ∈ P(E)^n. Then Q1 ≤(U,V) Q2 ⇔ F(Q1) ≤(U,V) F(Q2).

The theorem thus ensures that local inequalities, like those observed in the case of "some" and "all", are preserved when applying a DFS. In particular, if tall, lucky ∈ P̃(E) are fuzzy subsets of E and tall has nonempty support, then

   F(all)(tall, lucky) ≤ F(some)(tall, lucky) ,

as desired.
4.12 Properties of the Induced Extension Principle

Let us now investigate the formal properties of the induced extension principle of F. This analysis will substantiate that the induced extension principle determines a plausible assignment of fuzzy powerset mappings to given base mappings. Building on this foundation, it becomes possible to show that all considered models of fuzzy quantification preserve the important semantical properties of quantitativity and extensionality, which are closely tied to the induced extension principle. In addition, every DFS has the property of being 'contextual', i.e. it captures an intuitive requirement on the intended effects of fuzziness in a quantifier's arguments. Let us now state the formal results that underlie the later theorems of linguistic relevance.
Theorem 4.35.
Suppose F is a DFS and F̂ the extension principle induced by F. Then for all f : E −→ E', g : E' −→ E'' (where E ≠ ∅, E' ≠ ∅, E'' ≠ ∅),
a. F̂(g ◦ f) = F̂(g) ◦ F̂(f);
b. F̂(idE) = idP̃(E).

Note 4.36. A reader familiar with category theory will recognize this as the statement that F̂ is a covariant functor from the category of non-empty sets to the category of fuzzy power sets,⁶ provided that on objects E, we define F̂(E) = P̃(E). It should be apparent from the remarks on p. 102 how to dispense with the restriction to non-empty sets if so desired. F̂ is a faithful functor because f ≠ g implies F̂(f)|P(E) = f̂ ≠ ĝ = F̂(g)|P(E), i.e. F̂(f) ≠ F̂(g). F̂ is also injective on objects; E ≠ E' implies P̃(E) ≠ P̃(E').

The induced extension principles of all DFSes coincide on injective mappings with the apparent plausible definition:

Theorem 4.37.
Suppose that F is a DFS and f : E −→ E' is an injection. Then for all X ∈ P̃(E), e' ∈ E',

   µF̂(f)(X)(e') = µX(f⁻¹(e')) if e' ∈ Im f,   µF̂(f)(X)(e') = 0 if e' ∉ Im f,

where f⁻¹(e') = {e ∈ E : f(e) = e'}.

This result on the extension of injective mappings turned out to be rather useful. For example, it can be used to show that a DFS is compatible with exactly one extension principle. Knowing this might facilitate the proof that a given DFS induces a certain extension principle.

Theorem 4.38.
Suppose F is a DFS and E an extension principle such that for every semi-fuzzy quantifier Q : P(E')^n −→ I and all f1 : E −→ E', . . . , fn : E −→ E',

   F(Q ◦ (f̂1 × · · · × f̂n)) = F(Q) ◦ (E(f1) × · · · × E(fn)) .

Then F̂ = E.
The extension principle F̂ of a DFS F is uniquely determined by the fuzzy existential quantifiers F(∃) = F(∃E) : P̃(E) −→ I induced by F.

Theorem 4.39.
Suppose F is a given DFS. For every mapping f : E −→ E' and all e' ∈ E',

   µF̂(f)(•)(e') = F(∃)∩̃◁f⁻¹(e') .
⁶ By the category of fuzzy power sets we mean the category in which the objects are fuzzy power sets P̃(E), the morphisms are mappings f : P̃(E) −→ P̃(E') which to each fuzzy subset X ∈ P̃(E) assign a fuzzy subset f(X) ∈ P̃(E'), and ◦ is the usual composition of functions.
The converse can also be shown: the fuzzy existential quantifiers obtained from a DFS F are uniquely determined by its extension principle F̂.

Theorem 4.40.
Let a DFS F be given. If E ≠ ∅ and ∃ = ∃E : P(E) −→ 2, then F(∃) = π̃∅ ◦ F̂(!), where ! : E −→ {∅} is the mapping defined by !(e) = ∅ for all e ∈ E.

This completes the analysis of the formal characteristics underlying the extension principles that are induced by the models of DFS theory. Some applications in which this analysis supported the proof of semantical properties of the models are stated in Sects. 4.13 (quantitativity), 4.14 (extensionality) and 4.15 (contextuality). The analysis of the extension principle presented here is complemented with an analysis of the interpretation of the standard quantifiers, which is presented below in Sect. 4.16. The subsequent investigation of ∀ and ∃ is instructive because the induced extension principle is closely related to the interpretation of existential quantifiers. Its precise structure becomes apparent once we combine the above Th. 4.39 with the later explicit formula for existential quantification in the models, see Th. 4.61.
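For orientation, here is a sketch of the standard (sup-based) extension principle for powerset mappings, together with the special case of Theorem 4.37 for injective f, where the supremum degenerates to a single membership value (or 0 outside the image of f). The code is illustrative and not the book's construction.

```python
# Standard extension principle: mu_{f^(X)}(e') = sup { mu_X(e) : f(e) = e' },
# and the injection case of Th. 4.37.

def extend(f, E, E_prime):
    """Map a fuzzy subset X of E (dict) to a fuzzy subset of E_prime."""
    def f_hat(X):
        return {ep: max([X.get(e, 0.0) for e in E if f(e) == ep], default=0.0)
                for ep in E_prime}
    return f_hat

E = {"a", "b", "c"}
E_prime = {1, 2, 3, 4}
f = {"a": 1, "b": 2, "c": 3}.get             # an injective mapping E -> E'

X = {"a": 0.8, "b": 0.3, "c": 1.0}
image = extend(f, E, E_prime)(X)
print(image)                                  # {1: 0.8, 2: 0.3, 3: 1.0, 4: 0.0}

# Injection case: membership is carried over along f, and is 0 off the image.
inverse = {f(e): e for e in E}
assert all(image[ep] == (X[inverse[ep]] if ep in inverse else 0.0)
           for ep in E_prime)
```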
4.13 Quantitativity

Many quantifiers of interest, like "almost all", "most" etc., do not depend on any particular characteristics of the elements in the base set. It is not the specific choice of elements that determines the quantification result, but only the quantitative aspects. In the finite case, such quantifiers can be defined from the cardinalities of the arguments (or cardinalities of Boolean combinations). Other quantifiers, like "John" (proper names) or "Most married X1's are X2's" (adjectival restriction), though, are tied more closely to the domain and its particular elements. Cardinality information about combinations of the arguments is not sufficient to determine the semantical interpretation of these quantifiers in any given situation, and these quantifiers are therefore called 'non-quantitative'.⁷

In TGQ, the notion of quantitativity is commonly given an elegant and convincing definition in terms of automorphism invariance, which traces back to Mostowski [116]. This formalisation is easily adapted to the case of semi-fuzzy quantifiers and even fuzzy quantifiers:

Definition 4.41 (Quantitative semi-fuzzy quantifier).
A semi-fuzzy quantifier Q : P(E)^n −→ I is called quantitative if for all automorphisms⁸ β : E −→ E and all Y1, . . . , Yn ∈ P(E),

   Q(Y1, . . . , Yn) = Q(β̂(Y1), . . . , β̂(Yn)) .
⁷ Non-quantitative quantifiers are sometimes also dubbed 'qualitative'.
⁸ I.e. bijections of E into itself.
Similarly:

Definition 4.42 (Quantitative fuzzy quantifier).
A fuzzy quantifier Q̃ : P̃(E)^n −→ I is said to be quantitative if for all automorphisms β : E −→ E and all X1, . . . , Xn ∈ P̃(E),

   Q̃(X1, . . . , Xn) = Q̃(β̂(X1), . . . , β̂(Xn)) ,

where β̂ : P̃(E) −→ P̃(E) is obtained by applying the standard extension principle.

By Th. 4.37, the induced extension principles of all DFSes coincide on injective mappings. Therefore, the explicit mention of the standard extension principle in the above definition does not tie its applicability to any particular choice of extension principle. The definition in terms of automorphism invariance formalizes the expectation that a quantitative quantifier cannot rely on any specific properties of the individual elements gathered in the arguments. Rather, the quantification results must remain invariant when the elements are consistently renamed or exchanged with others, provided that distinct elements are kept separate. To give an example, consider the base set E = {John, Lucas, Mary} and the automorphism β defined by
β(Lucas) = Mary,
β(Mary) = John .
In the case of the quantifier “all”, which is quantitative, we then obtain that all({John}, {John, Lucas}) =1 = all({Lucas}, {Lucas, Mary}) = all(β({John}), β({John, Lucas})) , as expected. The example also witnesses that the quantifier john = πJohn is non-quantitative, because πJohn ({John}) = 1 = 0 = πJohn ({Lucas}) = πJohn (β({John})) . As stated by the following theorem, the quantitativity aspect of quantifiers is recognized by every model of DFS theory: Theorem 4.43. n Suppose F is a DFS. For all semi-fuzzy quantifiers Q : P(E) −→ I, Q is quantitative if and only if F(Q) is quantitative. For example, the quantitative quantifiers all, some and at least k are mapped to quantitative fuzzy quantifiers F(all), F(some) and F(at least k), respectively. On the other hand, the non-quantitative projection quantifier John , john = πJohn is mapped to the fuzzy projection quantifier F(john) = π which is also non-quantitative.
4.14 Extensionality

One of the characteristic properties of natural language quantifiers that has been discovered by TGQ is that of having extension: If E ⊆ E' are base sets, the interpretation of the quantifier of interest in E is QE : P(E)^n −→ I, and its interpretation in E' is QE' : P(E')^n −→ I, then

   QE(Y1, . . . , Yn) = QE'(Y1, . . . , Yn)                          (4.2)
for all Y1, . . . , Yn ∈ P(E), see e.g. [9, p. 453], [50, p. 250]. Having extension is a pervasive phenomenon. The reader may wish to check that all two-valued or semi-fuzzy quantifiers introduced so far possess this property.⁹ For example, suppose E is a set of men and that married, have children ∈ P(E) are subsets of E. Further suppose that we extend E to a larger base set E' which, in addition to men, also contains, say, their shoes. We should then expect that

   mostE(married, have children) = mostE'(married, have children) ,

because the shoes we have added to E are neither men, nor do they have children. The cross-domain property of having extension expresses some kind of context insensitivity: given Y1, . . . , Yn ∈ P(E), we can add an arbitrary number of objects to our original domain E without altering the quantification result. Alternatively, we can drop elements of E which are irrelevant to all argument sets (i.e. not contained in the union of Y1, . . . , Yn). In other words: if a quantifier has extension, then

   QE(Y1, . . . , Yn) = QEmin(Y1, . . . , Yn) ,

where Emin = Y1 ∪ · · · ∪ Yn, i.e. QE(Y1, . . . , Yn) depends only on those elements e ∈ E which are contained in at least one of the Yi's; the choice of the total domain E has no impact on the quantification result as long as it is large enough to contain the argument sets Y1, . . . , Yn of interest. Thus, having extension is a robustness property of NL quantifiers with respect to the choice of the full domain E, which is often to some degree arbitrary. An analogous definition of having extension for fuzzy quantifiers is easily obtained from
(4.2)¹⁰,¹¹; in this case, the property must hold for all X1, . . . , Xn ∈ P̃(E). It is natural to require that the fuzzy quantifiers corresponding to given semi-fuzzy quantifiers which have extension also possess this property.

⁹ With the possible exception of "many" in its absolute sense. However, absolute "many" can be modelled by a parametrized family of quantifiers that have extension, where the choice of the parameter is made from the context.
Definition 4.44 (Extensionality).
A QFM F is said to be extensional if it preserves extension, i.e. if each pair of semi-fuzzy quantifiers Q : P(E)^n −→ I, Q' : P(E')^n −→ I such that E ⊆ E' and Q'|P(E)^n = Q, i.e. Q(Y1, . . . , Yn) = Q'(Y1, . . . , Yn) for all Y1, . . . , Yn ∈ P(E), is mapped to fuzzy quantifiers F(Q) : P̃(E)^n −→ I, F(Q') : P̃(E')^n −→ I with F(Q')|P̃(E)^n = F(Q), i.e. F(Q)(X1, . . . , Xn) = F(Q')(X1, . . . , Xn), for all X1, . . . , Xn ∈ P̃(E).

Theorem 4.45. Every DFS F is extensional.

This is apparent from (Z-6) and Th. 4.37. To illustrate the role of this theorem for the semantics of fuzzy quantifiers, let us substitute the crisp concepts married and have children in the above example with fuzzy replacements. Hence consider the NL statement "Few young are rich". Just as in the crisp example, we certainly expect F(fewE)(young, rich) to be invariant under the precise choice of domain, too, as long as it is large enough to contain the fuzzy sets young, rich of interest (to be precise, the support of the union of the fuzzy arguments). For example, adding other individuals (like the above shoes) to the domain, which are fully irrelevant to the arguments, should not affect the computed semantic value. By the above theorem, then, every considered model of fuzzy quantification is known to comply with these adequacy considerations.

¹⁰ Here we must view P̃(E) as a subset of P̃(E'), with the obvious embedding X ↦ X' ∈ P̃(E'), where µX'(e) = µX(e) for e ∈ E and µX'(e) = 0 for e ∉ E, for all e ∈ E'.
¹¹ The reader is warned not to confuse this definition of fuzzy quantifiers that 'have extension' with the totally unrelated concept of an extensional fuzzy quantifier, introduced by Thiele [158].
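Having extension can be illustrated by parameterizing a quantifier with its domain. In the sketch below (a toy proportional quantifier, illustrative only), enlarging the base set with irrelevant objects leaves the result unchanged because the quantifier only consults the arguments themselves.

```python
# Sketch: a domain-parametrized quantifier 'most_on(E)' that has extension.
# Enlarging E with objects irrelevant to the arguments does not change the result.

def most_on(E):
    """Interpretation of 'most' on base set E (the value ignores E \\ (Y1 ∪ Y2))."""
    def Q(Y1, Y2):
        assert Y1 <= E and Y2 <= E
        if not Y1:
            return 1.0
        return 1.0 if len(Y1 & Y2) / len(Y1) > 0.5 else 0.0
    return Q

married = frozenset({"al", "bo"})
have_children = frozenset({"al", "bo"})

E_small = married | have_children
E_big = E_small | frozenset({"left shoe", "right shoe"})    # add irrelevant objects

assert most_on(E_small)(married, have_children) == \
       most_on(E_big)(married, have_children)
```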
4.15 Contextuality

The preservation of locally observed properties, e.g. 'local monotonicity' in Th. 4.30 and Th. 4.34, can be explained in terms of a fundamental property which underlies all models of DFS theory.
Unlike most other adequacy criteria discussed so far, this property, called 'contextuality', is not borrowed from logic or linguistics. Rather, it captures an important aspect of the semantics of fuzzy quantification, which is directly related to the way that we perceive fuzziness. In order to formalize contextuality, we first need to recall some familiar notions of fuzzy set theory. Hence let X ∈ P̃(E) be a given fuzzy subset. The support and the core of X, in symbols: spp(X) ∈ P(E) and core(X) ∈ P(E), are defined by

   spp(X) = {e ∈ E : µX(e) > 0}                                     (4.3)
   core(X) = {e ∈ E : µX(e) = 1} .                                  (4.4)

In other words, spp(X) contains all elements which potentially belong to X and core(X) contains all elements which fully belong to X. The interpretation of a fuzzy subset X is thus ambiguous only with respect to crisp subsets Y in the context range

   cxt(X) = {Y ∈ P(E) : core(X) ⊆ Y ⊆ spp(X)} .                     (4.5)

For example, let the base set E = {a, b, c} be given and suppose that X ∈ P̃(E) is the fuzzy subset

   µX(e) = 1 if e = a or e = b,   µX(e) = 1/2 if e = c .            (4.6)

In this case, the corresponding context range becomes

   cxt(X) = {Y : {a, b} ⊆ Y ⊆ {a, b, c}} = {{a, b}, {a, b, c}} .

Now let us consider the existential quantifier ∃ : P(E) −→ 2. Because ∃({a, b}) = ∃({a, b, c}) = 1, we know that ∃(Y) = 1 for all crisp subsets in the context range of X. We hence expect that F(∃)(X) = 1, simply because the crisp quantification result is always equal to one, regardless of whether we assume that c ∈ X or that c ∉ X. Abstracting from the example, we obtain the following apparent definition of contextual equality, relative to a given context range.

Definition 4.46 (Contextually equal).
Let Q, Q' : P(E)^n −→ I and X1, . . . , Xn ∈ P̃(E) be given. We say that Q and Q' are contextually equal relative to (X1, . . . , Xn), in symbols: Q ∼(X1,...,Xn) Q', if and only if Q|cxt(X1)×···×cxt(Xn) = Q'|cxt(X1)×···×cxt(Xn), i.e. Q(Y1, . . . , Yn) = Q'(Y1, . . . , Yn) for all Y1 ∈ cxt(X1), . . . , Yn ∈ cxt(Xn).
It is apparent that for each E ≠ ∅, n ∈ N and X1, . . . , Xn ∈ P̃(E), the resulting relation of contextual equality ∼(X1,...,Xn) is an equivalence relation on the set of all semi-fuzzy quantifiers Q : P(E)^n −→ I.

Definition 4.47.
A QFM F is said to be contextual if for all Q, Q' : P(E)^n −→ I and every choice of fuzzy argument sets X1, . . . , Xn ∈ P̃(E), Q ∼(X1,...,Xn) Q' entails that

   F(Q)(X1, . . . , Xn) = F(Q')(X1, . . . , Xn) .

As illustrated by the motivating example, it is highly desirable that a QFM satisfies this very elementary but fundamental adequacy condition. And indeed, every DFS exhibits the desired property:

Theorem 4.48. Every DFS F is contextual.

To give an example of how contextuality is useful in establishing properties of QFMs, consider the following theorem, which generalizes Th. 4.27.

Theorem 4.49.
Suppose that F is a contextual QFM compatible with cylindrical extensions. Then F is compatible with argument insertions, i.e. F(Q◁A) = F(Q)◁A for all semi-fuzzy quantifiers Q : P(E)^n −→ I of arity n > 0 and all crisp subsets A ∈ P(E).

Other applications of contextuality are presented below in Chap. 6, which substantiate the significance of the novel concept to the modelling of fuzzy quantification.
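The context range cxt(X) of a fuzzy argument is easy to enumerate for small domains, and the contextual reasoning of the motivating example can then be checked directly. The sketch below is illustrative; it computes support, core and context range and confirms that the crisp existential quantifier yields 1 on every set in cxt(X).

```python
# Support, core and context range of a fuzzy subset, plus the check from the
# motivating example: the crisp existential quantifier is 1 on every Y in cxt(X).

from itertools import chain, combinations

def spp(X):
    return frozenset(e for e, m in X.items() if m > 0)

def core(X):
    return frozenset(e for e, m in X.items() if m == 1)

def cxt(X):
    """All crisp sets Y with core(X) ⊆ Y ⊆ spp(X)."""
    lo, hi = core(X), spp(X)
    free = list(hi - lo)
    parts = chain.from_iterable(combinations(free, r) for r in range(len(free) + 1))
    return [lo | frozenset(c) for c in parts]

def exists(Y):
    return 1.0 if Y else 0.0

X = {"a": 1.0, "b": 1.0, "c": 0.5}
print(sorted(map(sorted, cxt(X))))        # [['a', 'b'], ['a', 'b', 'c']]
assert all(exists(Y) == 1.0 for Y in cxt(X))
# Hence any contextual QFM F must assign F(∃)(X) = 1 for this argument.
```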
4.16 Semantics of the Standard Quantifiers

We have already seen in Th. 4.39 and Th. 4.40 that the existential quantifier can be expressed in terms of the induced extension principle and vice versa. This section will present some further results on the interpretation of the universal and existential quantifiers in the considered models of fuzzy quantification. Let us first recall Thiele's analysis of fuzzy universal and existential quantification [156–158]. Thiele has developed an axiomatic characterisation of these dual types of quantifiers and their expected semantics in the presence of fuzziness, which culminates in his definitions of T-quantifiers and
S-quantifiers. Thiele also develops representation theorems for T- and S-quantifiers which show how these quantifiers can be decomposed into possibly infinitary formulas involving t- or s-norms. These theorems, which will be presented below, have proven invaluable for establishing the results on the interpretation of the standard quantifiers in DFS theory.

Thiele's definition of T-quantifiers captures the essential requirements on fuzzy universal quantifiers. Making use of the concepts developed here, the proposed definition can be expressed as follows.

Definition 4.50 (T-quantifiers).
A fuzzy quantifier Q̃ : P̃(E) −→ I is called a T-quantifier if Q̃ satisfies the following axioms:
a. For all X ∈ P̃(E) and e ∈ E, Q̃(X ∪ ¬{e}) = µX(e);
b. For all X ∈ P̃(E) and e ∈ E, Q̃(X ∩ ¬{e}) = 0;
c. Q̃ is nondecreasing, i.e. for all X, X' ∈ P̃(E) such that X ⊆ X', it holds that Q̃(X) ≤ Q̃(X');
d. Q̃ is quantitative, i.e. for every automorphism (permutation) β : E −→ E, Q̃ ◦ β̂ = Q̃.

Note 4.51. In the above definition, ∩ is the standard fuzzy intersection based on min, and ∪ is the standard fuzzy union based on max. However, all fuzzy intersections based on t-norms and all fuzzy unions based on s-norms will give the same results, because one of the arguments is a crisp subset of E.

There is a close relationship between T-quantifiers and t-norms. Following Thiele, we define the connective ∧Q̃ which corresponds to the T-quantifier.

Definition 4.52.
Suppose Q̃ : P̃(E) −→ I is a T-quantifier and |E| > 1. ∧Q̃ : I × I −→ I is defined by

   x1 ∧Q̃ x2 = Q̃(X)

for all x1, x2 ∈ I, where X ∈ P̃(E) is defined by

   µX(e) = x1 if e = e1,   µX(e) = x2 if e = e2,   µX(e) = 1 otherwise,     (4.7)
138
4 Semantic Properties of the Models
Theorem 4.54 (Characterisation of T-quantifiers). : P(E) Q is a Suppose Q −→ I is a T-quantifier, where |E| > 1. Then ∧ t-norm, and m Q µX (ai ) : A = {a1 , . . . , am } ∈ P(E) finite, ai = Q(X) = inf ∧ aj if i = j i=1
for all X ∈ P(E). (See Thiele [156, Th-8.1, p.47]) Building upon Thiele’s characterisation theorem for T-quantifiers, we improved upon previous work on the interpretation of the standard quantifiers in DFS theory [53, Th-26, p.42] by showing that the fuzzy universal quantifiers F(∀) induced by a DFS are plausible in the sense of belonging to the class of T-quantifiers: Theorem 4.55 (Universal quantifiers in DFSes). Suppose F is a DFS and E = ∅ is a given base set. Then F(∀) : P(E) −→ I is a T-quantifier constructed from the induced t-norm of F, i.e. F(∀) is defined by F(∀)(X) m = inf ∧ µX (ai ) : A = {a1 , . . . , am } ∈ P(E) finite, ai = aj if i = j i=1
for all X ∈ P(E). Thiele has also introduced the dual concept of S-quantifiers, which formalize the semantical requirements on reasonable fuzzy existential quantifiers. These are defined as follows (again adapted to our notation): Definition 4.56 (S-quantifier). : P(E) satisfies the A fuzzy quantifier Q −→ I is called an S-quantifier if Q following axioms: a. For all X ∈ P(E) and e ∈ E, Q(X ∪ {e}) = 1; b. For all X ∈ P(E) and e ∈ E, Q(X ∩ {e}) = µX (e); is nondecreasing, i.e. for all X, X ∈ P(E) such that X ⊆ X , it holds c. Q ); that Q(X) ≤ Q(X d. Q is quantitative, i.e. for every automorphism (permutation) β : E −→ E, ˆ = Q. ◦ βˆ Q Q , which will play a Again, it is possible to define a connective, denoted ∨ special role in characterising the class of S-quantifiers.
4.16 Semantics of the Standard Quantifiers
139
Definition 4.57. : P(E) Q : I × I −→ I is Suppose Q −→ I is an S-quantifier and |E| > 1. ∨ defined by Q x2 = Q(X) x1 ∨ is defined by for all x1 , x2 ∈ I, where X ∈ P(E) x1 : e = e1 µX (e) = x2 : e = e2 0 : else
(4.8)
and e1 = e2 , e1 , e2 ∈ E are two arbitrary distinct elements of E. Q on the chosen elements e1 , e2 ∈ E Note 4.58. Again, the independence of ∨ is apparent from the quantitativity of S-quantifiers. There is a dual characterisation theorem for S-quantifiers that has been proven by Thiele: Theorem 4.59 (Characterisation of S-quantifiers). : P(E) Q is an s-norm, and Let Q −→ I be an S-quantifier, |E| > 1. Then ∨ m Q µX (ai ) : A = {a1 , . . . , am } ∈ P(E) finite, ai = aj if i = Q(X) = sup ∨ j i=1
for all X ∈ P(E). (See Thiele [156, Th-8.2, p.48]) Note 4.60. Some properties of s-norm aggregation of infinite collections in the form expressed by the theorem (and hence, as expressed by S-quantifiers) have been studied by Rovatti and Fantuzzi [142] who view S-quantifiers as a special type of non-additive functionals. Based on the characterisation of S-quantifiers, a theorem dual to Th. 4.55 can be proven for existential quantifiers. Theorem 4.61. Consider a DFS F and a base set E = ∅. Then F(∃) : P(E) −→ I is an S-quantifier constructed from the induced s-norm of F, i.e. F(∃) is defined by F(∃)(X) = sup
m µX (ai ) : A = {a1 , . . . , am } ∈ P(E) finite, ai = aj if i = j ∨ i=1
for all X ∈ P(E).
140
4 Semantic Properties of the Models
Let us notice that if E is finite, i.e. E = {e1 , . . . , em } where the ei are pairwise distinct, then the expressions presented in the above theorems can be simplified into m
µX (ei ) , F(∀)(X) = ∧ i=1 m
µX (ei ) . F(∃)(X) = ∨ i=1
Hence the fuzzy universal (existential) quantifiers of F are reasonable in the sense that the important relationship between ∀ and ∧ (∃ and ∨, resp.), which holds in the finite case, is preserved by the DFS. In particular, the above theorems show that in every DFS, the fuzzy existential and fuzzy universal quantifiers are uniquely determined by the induced fuzzy disjunction and conjunction. Theorem 4.62. = F(∨). Suppose F is a DFS, F its induced extension principle and ∨ , in the way described by Th. 4.39 and a. F is uniquely determined by ∨ Th. 4.61, i.e. µF (f )(X) (e ) = sup
m
µX (ai ) : A = {a1 , . . . , am } ∈ f −1 (e ) finite, ∨ ai = aj if i = j
i=1
and e ∈ E , where E, E = ∅. for all f : E −→ E , X ∈ P(E) viz. x1 ∨ is uniquely determined by F, x2 = ( π∅ ◦ F(!))(X) for all b. ∨ x1 , x2 ∈ I, where X ∈ P({1, 2}) is defined by µX (1) = x1 and µX (2) = x2 , and ! is the unique mapping ! : {1, 2} −→ {∅}. ˆ ˆ is the standard extension principle, then ∨ = max. In particular, if F = (•) = max to be a priori excluded Because I did not want QFMs in which ∨ from consideration, it was not possible to state (Z-6) in terms of the standard extension principle. Acknowledging this constraining role, it was rather necessary to introduce general extension principles along with the construction of induced extension principles, which selects an appropriate choice of such general extension principle for each given F. Only by developing the theory of fuzzy quantification in this way it was possible to leave open the chance for models based on different choices of fuzzy disjunction, like bounded sum or algebraic sum, and perhaps even the extreme case of drastic sum.
4.17 Fuzzy Inverse Images We shall now return to a more theoretically oriented construction which is nevertheless worthwhile investigating, because it contributes to a theorem (in
4.18 The Semantics of Fuzzy Multi-Place Quantification
141
the subsequent section) which elucidates the semantic relationship between unary (i.e. one-place) and general multi-place quantification which underlies all models of DFS theory. The required construction is that of fuzzy inverse images, a notion closely related to the extension principle. In the case of crisp sets, we assume the usual definition of inverse images which has already been stated in the previous chapter, see equation (3.5). Generalising this concept, every QFM F induces fuzzy inverse images by means of the following construction. Definition 4.63. Suppose F is a QFM and f : E −→ E is some mapping. F induces a fuzzy ) −→ P(E) ) which to each X ∈ P(E inverse image mapping F−1 (f ) : P(E −1 assigns the fuzzy subset F (f ) defined by µF −1 (f )(X) (e) = F(χf −1 (•) (e))(X) , for all e ∈ E. If F is a DFS, then its induced fuzzy inverse images coincide with the apparent ‘reasonable’ definition: Theorem 4.64. ). Then for all Suppose F is a DFS, f : E −→ E is a mapping and X ∈ P(E e ∈ E, µF −1 (f )(X) (e) = µX (f (e)) . Hence F not only induces a meaningful extension principle, but also induces a reasonable choice of the reverse construction.
4.18 The Semantics of Fuzzy Multi-Place Quantification In this section, we will uncover the internal structure of fuzzy multi-place quantification and elucidate its semantical grounding in one-place quantification. The reduction of an n-place quantifier to a corresponding unary quantifier will be the key tool for analysing multi-place quantification. In order to understand how n-place quantifiers can be reduced to one-place quantification, let us recall that for arbitrary sets A, B, C, it always holds that AB×C ∼ = (AB )C , a relationship commonly known as ‘currying’. We then have n P(E) ∼ = (2E )n ∼ = 2E×n ∼ = P(E × n) ∼ = P(E × {1, . . . , n}) ,
and similarly n∼ × n) ∼ × {1, . . . , n}) , P(E) = (IE )n ∼ = IE×n ∼ = P(E = P(E where n abbreviates {0, . . . , n − 1} as usual. For convenience, E × n has been replaced by E × {1, . . . , n}, which better suits the convention of numbering the arguments of an n-place quantifier from 1 to n (rather than
142
4 Semantic Properties of the Models
from 0 to n − 1). This suggests that by exploiting the bijection P(E) ∼ = n P(E × {1, . . . , n}), n-place quantification as expressed by some Q : P(E) −→ I can be replaced with one-place quantification using a unary quantifier n −→ I
Q : P(E × {1, . . . , n}) −→ I, and that conversely F(Q) : P(E) × {1, . . . , n}) −→ I. To establish can always be recovered from F( Q) : P(E this result, we need some formal machinery. For a given domain E and n ∈ N, we will abbreviate En = E × {1, . . . , n}. This will provide a concise notation for the base sets of the resulting unary quantifiers. For n = 0, we obtain the empty product E0 = ∅. n
Definition 4.65 (ι_i^{n,E}).
Let E be a given set, n ∈ N \ {0} and i ∈ {1, . . . , n}. By ι_i^{n,E} : E −→ En we denote the inclusion defined by

   ι_i^{n,E}(e) = (e, i) ,

for all e ∈ E.

Note 4.66. The crisp extension (powerset mapping, see Def. 3.19) of ι_i^{n,E} : E −→ En will be denoted by ι̂_i^{n,E} : P(E) −→ P(En). The inverse image mapping of ι_i^{n,E} will be denoted (ι̂_i^{n,E})⁻¹ : P(En) −→ P(E), see equation (3.5).

We can use these injections to define the unary quantifier ⟨Q⟩ of interest.

Definition 4.67.
Let Q : P(E)^n −→ I be an n-place semi-fuzzy quantifier, where n > 0. Then ⟨Q⟩ : P(En) −→ I is defined by

   ⟨Q⟩(Y) = Q((ι̂_1^{n,E})⁻¹(Y), . . . , (ι̂_n^{n,E})⁻¹(Y))

for all Y ∈ P(En). For fuzzy quantifiers, ⟨Q̃⟩ is defined similarly, using the fuzzy inverse image mapping (ι̂_i^{n,E})⁻¹ : P̃(En) −→ P̃(E) of ι_i^{n,E}.

Definition 4.68.
Let Q̃ : P̃(E)^n −→ I be a fuzzy quantifier, n > 0. The fuzzy quantifier ⟨Q̃⟩ : P̃(En) −→ I is defined by

   ⟨Q̃⟩(X) = Q̃((ι̂_1^{n,E})⁻¹(X), . . . , (ι̂_n^{n,E})⁻¹(X)) ,

for all X ∈ P̃(En).

Let us now establish the relationship between Q and ⟨Q⟩ (semi-fuzzy case) and between Q̃ and ⟨Q̃⟩ (fuzzy case). Let us first introduce a concise notation for iterated unions of a quantifier's arguments:
Definition 4.69.
Suppose that Q : P(E)^n −→ I is a semi-fuzzy quantifier, n > 0 and k ∈ N \ {0}. The semi-fuzzy quantifier Q∪k : P(E)^(n+k−1) −→ I is inductively defined as follows:
a. Q∪1 = Q;
b. Q∪k = (Q∪k−1)∪ if k > 1.
For fuzzy quantifiers Q̃ : P̃(E)^n −→ I, Q̃∪̃k : P̃(E)^(n+k−1) −→ I is defined analogously.

Theorem 4.70.
For every semi-fuzzy quantifier Q : P(E)^n −→ I where n > 0,

   Q = ⟨Q⟩∪n ◦ (ι̂_1^{n,E} × · · · × ι̂_n^{n,E}) .

Note 4.71. This demonstrates that n-place crisp (or 'semi-fuzzy') quantification, where n > 0, can always be reduced to one-place quantification. If we allowed for empty base sets, this would also go through for n = 0; we only had to exclude this case because E0 = ∅, i.e. in this case we would have ⟨Q⟩ : P(∅) −→ I, which does not qualify as a semi-fuzzy quantifier because the base set is empty.

A theorem analogous to Th. 4.70 can also be proven in the fuzzy case:

Theorem 4.72.
Suppose ∨̃ : I × I −→ I has x ∨̃ 0 = 0 ∨̃ x = x for all x ∈ I, and ∪̃ is the fuzzy union element-wise defined in terms of ∨̃. For every fuzzy quantifier Q̃ : P̃(E)^n −→ I, n > 0,

   Q̃ = ⟨Q̃⟩∪̃n ◦ (ι̂_1^{n,E} × · · · × ι̂_n^{n,E}) .
Note 4.73. The theorem shows that it is possible to reduce n-place quantification to one-place quantification in the fuzzy case as well. Nullary quantifiers (n = 0) had to be excluded for the same reason as in Th. 4.70.

The central fact which links these results to QFMs is the following:

Theorem 4.74.
Suppose F is a QFM with the following properties:
a. x ∨̃ 0 = 0 ∨̃ x = x for all x ∈ I;
b. F(Q∪) = F(Q)∪̃ for all semi-fuzzy quantifiers Q : P(E)^n −→ I where n > 0;
c. F satisfies (Z-6) (functional application);
d. If E, E' are nonempty sets and f : E −→ E' is an injective mapping, then F̂(f) is the fuzzy powerset mapping obtained from f by the standard extension principle, i.e. F̂ coincides with the standard extension principle on injections.

Then for all semi-fuzzy quantifiers Q : P(E)^n −→ I of arity n > 0,

   F(⟨Q⟩) = ⟨F(Q)⟩ .

In particular, every DFS commutes with ⟨•⟩, and hence permits the reduction of multi-place quantification to one-place quantification. In other words, the behaviour of F on general multi-place quantifiers of arbitrary arities n > 0 is already implicit in the definition of F for one-place quantifiers, because the quantification results of multi-place quantification are completely determined by the behaviour of F in the unary case. In fact, we can actually define F(Q) in terms of F(⟨Q⟩), by utilizing Th. 4.70 and Th. 4.72. The semantical relationship discovered here, and the associated constructions of ⟨Q⟩ and ⟨Q̃⟩, hence allow us to generate a full-fledged QFM from its core of an underlying 'unary QFM'. In perspective, this would permit us to restrict attention to the behaviour of F on one-place quantifiers, and to reformulate the DFS axioms (Z-1)–(Z-6) into axioms imposed on the unary base mechanism which underlies F. However, the resulting axioms are likely to become more abstract than the current DFS axioms, and we shall therefore not pursue this idea further here. From a practical point of view, the techniques developed in this section have already proven useful for establishing the equivalence of the alternative constructions of induced fuzzy truth functions in all models of the theory.
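The reduction of a two-place quantifier to a one-place quantifier on E × {1, 2} can be spelled out directly: tag each argument with its position, take the union, and recover the arguments through the inverse images of the inclusions. The sketch below is illustrative and covers only the crisp case.

```python
# Reducing a two-place (semi-fuzzy) quantifier Q on E to a unary quantifier
# <Q> on E x {1, 2}, and recovering Q from <Q> as in Th. 4.70 (crisp case).

def iota(i):
    """Inclusion E -> E x {1,...,n}, e |-> (e, i), as a powerset mapping."""
    return lambda Y: frozenset((e, i) for e in Y)

def iota_inv(i):
    """Inverse image of the i-th inclusion: keep the elements tagged with i."""
    return lambda Z: frozenset(e for (e, j) in Z if j == i)

def unary(Q):
    """<Q>(Z) = Q(iota_1^{-1}(Z), iota_2^{-1}(Z))."""
    return lambda Z: Q(iota_inv(1)(Z), iota_inv(2)(Z))

def recover(uQ):
    """Q(Y1, Y2) = <Q>(iota_1(Y1) ∪ iota_2(Y2))."""
    return lambda Y1, Y2: uQ(iota(1)(Y1) | iota(2)(Y2))

def q_all(Y1, Y2):
    return 1.0 if Y1 <= Y2 else 0.0

Y1, Y2 = frozenset({"x", "y"}), frozenset({"x", "y", "z"})

uq = unary(q_all)
assert recover(uq)(Y1, Y2) == q_all(Y1, Y2)      # the round trip is the identity
```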
4.19 Chapter Summary

This chapter has shown that the proposed axiomatization of plausible models in terms of the six basic requirements (Z-1)–(Z-6) accounts for a wide range of expectations on models of fuzzy quantification. We first considered the property of correct generalization, i.e. the fuzzy quantifier F(Q) must properly generalize Q in the sense that F(Q)(Y1, . . . , Yn) = Q(Y1, . . . , Yn) for crisp arguments. The corresponding condition (Z-1) in the axiom system was artificially restricted to quantifiers of arity n ≤ 1. However, as shown by Th. 4.1, the condition also holds unconditionally for quantifiers of arbitrary arities. We then considered the propositional fragment of the models, which is determined by the construction of induced truth functions. It was shown that the identity truth function maps to the corresponding identity on the unit interval, which is of course the only acceptable choice for the continuous-valued case. The models were further shown to translate the original crisp negation into a strong negation operator, the usual abstraction of negation operators in fuzzy set theory. Next we investigated the two-place connectives. First of all,
the induced conjunctions of the models belong to the familiar class of t-norms. As to disjunction, it is the notion of an s-norm which constitutes the class of 'or'-like connectives, and it was shown that in all considered models, the disjunction translates into the ¬̃-dual s-norm of the induced conjunction. The induced implication can then be expressed in terms of the induced disjunction and negation. In addition to discussing individual truth functions, we also reviewed the proposed construction of induced connectives. By comparing the results of two alternative constructions of induced truth functions, we gathered some evidence that the choice of induced connectives is indeed canonical, because it is shared by two independent constructions.

Following this discussion of their propositional structure, we then investigated the compliance of the models with certain operations on the quantifiers' arguments. Some of these operations are motivated by linguistic constructions and familiar from TGQ, while others are concerned with the coherence of interpretations. First of all, we considered a construction which permutes argument positions. The construction can be used, for example, to express symmetry properties of quantifiers. The proposed axiom system ensures that all conforming models are compatible with argument transpositions (and thus also with argument permutations). In particular, these models will preserve the symmetry pattern of a given quantifier, which reveals an important aspect of its meaning. Having treated the issue of argument symmetry, we turned to cylindrical extensions, a construction which augments a given quantifier by vacuous argument positions. This construction is mainly required to fit a number of given quantifiers to some construction on these quantifiers which requires that the involved quantifiers refer to a common argument list. After stating the formal definition of cylindrical extensions, it was shown that the considered models of fuzzy quantification comply with this construction. In other words, if an argument of a semi-fuzzy quantifier has no effect on the computed interpretations, then it will also have no effect on the quantification results of the corresponding fuzzy quantifier.

We then investigated the various types of negation in order to assess the degree to which the models preserve Boolean structure. This analysis revealed the full compliance of a DFS with the formation of antonyms (complementation of arguments) and the external negation of quantifiers (where the negation operator is applied to the quantifying expression as a whole). Because every model also complies with dualisation by (Z-3), this means that every DFS preserves Aristotelian squares of quantifiers. These visualize the interrelations between the various types of negation which can be applied to a quantifier. The relevance of the Aristotelian squares stems from their close relationship to the Piaget group of transformations, which captures some findings of developmental psychology. Apart from the block-wise type of negation/complementation considered so far, we also discussed a more fine-grained type of negation, which can be modelled by the symmetrical difference of arguments with given
crisp sets. In this way, only part of an argument can be complemented, while the remaining portion is left unchanged. This fine-structured type of negation does not pose problems for the proposed models, though, which can handle it unconditionally, just like the simple negation and complementation.

Having discussed the Boolean structure which is expressed in the various types of negation, attention was then shifted to the structure imposed by unions and intersections. As to the formation of unions, it is explicitly required by (Z-4) that all models comply with this construction when it occurs in the last argument. Due to the compatibility with argument permutations, it is also clear that every DFS conforms to unions in arbitrary argument positions. Recalling the compliance of the models with complementation, and applying De Morgan's law, we conclude that every model also conforms to intersections in arbitrary argument positions. In this sense, the models are fully compatible with Boolean argument structure. The models were further shown to comply with another construction of relevance to natural language, the insertion of arguments into a quantifier. The construction is linguistically significant because adjectival restriction can be analysed in terms of intersections and argument insertion. In order to investigate this construction, we focused on an elementary insertion step. The models were then shown to comply with the insertion operation. Thus every DFS is compatible with adjectival restriction by a crisp adjective (like "married"). Due to the simple fact that a fuzzy argument cannot be inserted into a semi-fuzzy quantifier, the related issues of fuzzy argument insertion and fuzzy adjectival restriction must be handled with a rather different approach. The discussion of fuzzy argument insertion has therefore been deferred to Sect. 6.8, which discusses more advanced topics.

Monotonicity properties are especially important because they constrain the valid patterns of reasoning. The monotonicity postulate (Z-5) requires that legal models preserve the nonincreasing monotonicity of a quantifier in its last argument. This artificial restriction of the criterion helped to keep the DFS axioms as succinct as possible. However, every reasonable model of fuzzy quantification should certainly preserve monotonicity of a quantifier in other arguments as well, regardless of its monotonicity type (nondecreasing/nonincreasing). Apart from this general monotonicity property, the models were further shown to preserve monotonicity properties that hold only locally, a notion which we defined in terms of closed ranges of crisp sets. The proposed relativization of monotonicity properties to any desired choice of argument ranges is not critical to the models. In fact, every DFS can be shown to preserve any type of monotonicity properties, even if they hold only locally. We were also concerned with the monotonicity of the fuzzification mechanism itself (as opposed to the preservation of monotonicity properties of a quantifier). In other words, we would expect that the models preserve inequalities between quantifiers, and indeed every DFS was shown to be monotonic in this sense. Again, the models will also preserve local inequalities between quantifiers which hold only in a given range of arguments.
The subsequent analysis of the induced extension principle also yielded some interesting findings. First of all, it revealed that the induced extension principle is functorial, i.e. compatible with the composition of powerset mappings. In addition, it was shown that the induced extension principle will always assign the intended interpretation to injective mappings. Most importantly, we discussed the relationship between the induced extension principle and existential quantification: It was shown that the induced powerset mappings can always be expressed in terms of existential quantification and, conversely, existential quantification can always be reduced to an application of the extension principle. We then considered several properties of linguistic relevance which make use of powerset mappings and thus depend on the induced extension principle. First of all, we generalized the well-known concept of quantitativity to semi-fuzzy and fuzzy quantifiers, adopting the usual modelling of quantitativity in terms of automorphism invariance. The subsequent analysis of the models revealed that quantitative semi-fuzzy quantifiers will be mapped to quantitative counterparts, while non-quantitative examples remain non-quantitative when applying a DFS. This demonstrates the suitability of the proposed models to express both types of quantifiers. Existing approaches, by contrast, only support a limited fragment of the quantitative type. These results on the interpretation of fuzzy powerset mappings made it possible to investigate another important issue from the linguistic point of view, viz. the property of 'having extension'. Natural language quantifiers are typically insensitive to the precise choice of the domain. Intuitively, the domain constitutes a background for quantification, but its total extent should be inessential as long as it is large enough to contain the considered choice of arguments. In the chapter, we introduced the notion of an extensional QFM, i.e. a model of fuzzy quantification which preserves the property of having extension. It was then shown that every DFS is indeed extensional. This is an important result from a linguistic perspective because (apart from a few possible exceptions) NL quantifiers are generally extensional. We also considered the novel concept of contextuality, which is concerned with our intuitive understanding of fuzziness and the relationship between fuzzy sets and corresponding crisp sets. Hence let us consider a fuzzy subset X ∈ P̃(E) involved in a quantification. It is then only the intermediate cases with gradual membership µX(e) ∈ (0, 1), but not the clear cases with crisp membership µX(e) ∈ {0, 1}, which are ambiguous with respect to the decision whether e ∈ X. Based on the familiar notions of core and support (envelope) of a fuzzy set, the collection of these unclear cases was resolved into a closed range of crisp sets, the ambiguity range cxt(X). The requirement of contextuality states that the model should only depend on the behaviour of a quantifier for crisp arguments in the ambiguity ranges. Thus, the fuzzy arguments create an interpretation context, and the behaviour of the quantifier outside the context range of the arguments is considered irrelevant to the quantification result. Every plausible model was then shown to comply with
this contextuality requirement. We shall experience later that contextuality conflicts with another desideratum, that of preserving general convexity properties of quantifiers. However, contextuality captures so elementary an aspect of fuzzy quantification that it clearly outweighs other desiderata. The chapter was also concerned with the interpretation of the universal and existential quantifiers in the models. For that purpose, we reviewed Thiele’s analysis of the standard quantifiers in terms of so-called T- and S-quantifiers, and the universal and existential quantifiers induced by a DFS were shown to belong to these classes. By applying Thiele’s decomposition theorems an explicit representation of these quantifiers was achieved, which can be expressed directly in terms of the induced t-norm/s-norm of the model. The decomposition reveals in particular that the important relationship between conjunction and universal quantification (disjunction and existential quantification), which holds in the crisp case, is preserved when applying a DFS. We also obtained a new representation of the induced extension principle which reveals the internal structure of the induced fuzzy powerset mappings. Finally, we considered two more theoretically oriented topics. We first reviewed the notion of inverse images (the converse of powerset mappings). Unlike extension principles, there is only one natural generalisation of inverse images to the fuzzy case. We should then expect that the model of fuzzy quantification induce this natural choice of fuzzy inverse images. The proposed models were shown to be plausible in this respect as well. This result contributed to the solution of the last problem discussed in the chapter, viz the reducibility of multi-place quantification to a special kind of unary quantification. By exploiting an apparent Currying relationship, we reduced multi-place semi-fuzzy and fuzzy quantifiers to ‘simple’ unary quantifiers. We then discussed the precise conditions that a QFM F must fulfill in order to be compatible with these reductions. This analysis revealed that in the proposed models, multi-place quantification can always be reduced to a special form of unary quantification.
5 Special Subclasses of Models
5.1 Motivation and Chapter Overview
The last chapter has shed some light on the semantical properties shared by all models of the theory. By contrast, we will now be concerned with the structuring of these models into natural subclasses with certain joint properties. Furthermore these subclasses provide a context for defining concepts which require a minimum degree of homogeneity on the models' side. For example, we will only be able to compare models which are sufficiently similar for a comparison to make sense. Finally, the research into subclasses also serves to identify a class of standard models for fuzzy quantification which best meet our intuitive expectations. In the chapter, the models are first grouped by their induced negation. A model transformation scheme is then proposed which is capable of fitting a given QFM to different choices of negation operators. In this way, we can switch from any choice of induced negation to any other choice of induced negation. Without loss of generality, we will therefore focus on those models which induce the standard negation, called ¬-DFSes. These models will be further grouped by their induced disjunction ∨̃, thus forming the classes of ∨̃-DFSes. These classes will be sufficiently homogeneous to allow the definition of a model aggregation scheme which combines a given collection of ∨̃-DFSes into a new model. In addition we can define a specificity order on the ∨̃-DFSes which judges the degree to which a model commits to crisp quantification results. We will also consider the aggregation of quantifiers by conjunctions and disjunctions and state some results concerning the compatibility of a restricted class of models with these novel constructions. Finally we will identify the class of standard models for fuzzy quantification within the proposed framework. Intuitively, the standard models should conform to the standard concepts of fuzzy set theory, like the standard negation 1 − x, the standard conjunction and disjunction min and max, the standard extension principle etc. The chapter presents an axiomatization of the standard models which accounts for all of these considerations.
5.2 Models Which Induce the Standard Negation
To begin with, let us classify the models according to their induced negation.
Definition 5.1. Let ¬̃ : I −→ I be a strong negation operator. A DFS F is called a ¬̃-DFS if its induced negation coincides with ¬̃, i.e. F̃(¬) = ¬̃. In particular, we will call F a ¬-DFS if it induces the standard negation ¬x = 1 − x.
In the following it will be shown that without loss of generality, one can restrict attention to ¬-DFSes. To achieve this, a mechanism is needed which allows us to transform a given DFS F into another. A suitable model transformation scheme is defined as follows.
Definition 5.2 (Model transformation scheme). Suppose F is a DFS and σ : I −→ I is a bijection. For every semi-fuzzy quantifier Q : P(E)^n −→ I and all X1, . . . , Xn ∈ P̃(E), we define
F^σ(Q)(X1, . . . , Xn) = σ^−1(F(σQ)(σX1, . . . , σXn)),
where σQ abbreviates σ ◦ Q, and σXi ∈ P̃(E) is the fuzzy subset with µ_σXi = σ ◦ µ_Xi.
Let us now establish that every proper instantiation of the transformation scheme indeed constructs a new model of fuzzy quantification:
Theorem 5.3. If F is a DFS and σ : I −→ I an increasing bijection, then F^σ is a DFS.
It is well-known [99, Th-3.7] that for every strong negation ¬̃ : I −→ I there is a monotonically increasing bijection σ : I −→ I such that ¬̃x = σ^−1(1 − σ(x)) for all x ∈ I. The mapping σ is called the generator of ¬̃.
Theorem 5.4. Suppose F is a ¬̃-DFS and σ : I −→ I is the generator of ¬̃. Then F′ = F^(σ^−1) is a ¬-DFS and F = F′^σ.
The theorem states that the model transformation accomplishes a bidirectional translation between the models which is capable of adapting the induced negation. The transformation scheme thus demonstrates a universal translation property with respect to strong negation operators: Every model can be fitted to any choice of negation operator and vice versa. Put differently, each class of ¬̃-DFSes is representative of the full class of models, because the remaining classes of ¬̃′-DFSes (based on different choices of the negation ¬̃′) can be generated from the given source class. In particular, no models of interest are lost if we focus on ¬-DFSes only, i.e. on those models which induce the standard negation. Due to the universal translation property, the definitions of novel concepts and constructions on the models can now be restricted to ¬-DFSes whenever convenient.
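To make the role of the generator concrete, the following Python sketch (not part of the original text) illustrates Definition 5.2: a generator σ induces a strong negation, and the transformation scheme conjugates the quantifier, the membership grades and the result by σ. The square-root generator, the induced Yager-type negation and the toy interface of the model F are illustrative assumptions only.

# Minimal sketch of Definition 5.2, assuming a model F given as a Python callable
# F(Q, fuzzy_args) -> value in [0,1]; Q is a semi-fuzzy quantifier on crisp sets.

def sigma(x):           # an increasing bijection of [0,1], used as generator
    return x ** 2

def sigma_inv(x):
    return x ** 0.5

def induced_negation(x):
    # strong negation generated by sigma: neg(x) = sigma^-1(1 - sigma(x))
    return sigma_inv(1.0 - sigma(x))

def transform(F, Q, fuzzy_args):
    """F^sigma(Q)(X1,...,Xn) = sigma^-1( F(sigma∘Q)(sigma∘X1, ..., sigma∘Xn) )."""
    sigmaQ = lambda *crisp_args: sigma(Q(*crisp_args))
    sigma_args = [{e: sigma(mu) for e, mu in X.items()} for X in fuzzy_args]
    return sigma_inv(F(sigmaQ, sigma_args))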
In the following, the models based on the standard negation are further grouped according to their induced disjunction.
Definition 5.5. A ¬-DFS F which induces a fuzzy disjunction ∨̃ is called a ∨̃-DFS.
The benefit of introducing these classes of models is that ∨̃-DFSes are sufficiently homogeneous for a straightforward definition of constructions that act on collections of these models. In particular, it now becomes possible to introduce the following model aggregation scheme, which combines a collection of ∨̃-DFSes into a new ∨̃-DFS, in accordance with a given aggregation operator Ψ.
Theorem 5.6 (Model aggregation scheme). Suppose J is a non-empty index set and (Fj)j∈J is a J-indexed collection of ∨̃-DFSes. Further suppose that Ψ : I^J −→ I satisfies the following conditions:
a. If f ∈ I^J is constant, i.e. if there is a c ∈ I such that f(j) = c for all j ∈ J, then Ψ(f) = c.
b. Ψ(1 − f) = 1 − Ψ(f), where 1 − f ∈ I^J is point-wise defined by (1 − f)(j) = 1 − f(j), for all j ∈ J.
c. Ψ is monotonically increasing, i.e. if f(j) ≤ g(j) for all j ∈ J, then Ψ(f) ≤ Ψ(g).
If we define Ψ[(Fj)j∈J] by
Ψ[(Fj)j∈J](Q)(X1, . . . , Xn) = Ψ((Fj(Q)(X1, . . . , Xn))j∈J)
for all semi-fuzzy quantifiers Q : P(E)^n −→ I and X1, . . . , Xn ∈ P̃(E), then Ψ[(Fj)j∈J] is a ∨̃-DFS.
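As a minimal sketch of the aggregation scheme (an illustration only, not taken from the original text), the arithmetic mean can serve as the operator Ψ, since it preserves constants, commutes with 1 − (·) and is monotone; the three example "model results" below are made-up numbers standing for Fj(Q)(X1, . . . , Xn).

# Sketch of Theorem 5.6: aggregating the outputs of several models pointwise.

def psi_mean(values):
    # arithmetic mean: satisfies conditions (a)-(c) of Theorem 5.6
    return sum(values) / len(values)

def aggregate(model_results, psi=psi_mean):
    """Combine the results F_j(Q)(X1,...,Xn), j in J, into one quantification result."""
    return psi(model_results)

print(aggregate([0.35, 0.50, 0.65]))   # -> 0.5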
In particular, convex combinations (e.g., arithmetic mean) and stable symmetric sums [150] of ∨̃-DFSes are again ∨̃-DFSes.
The ¬-DFSes can be partially ordered by 'specificity' or 'fuzziness', in the sense of closeness to 1/2. We define a partial order ⪯c ⊆ I × I by
x ⪯c y ⇔ y ≤ x ≤ 1/2 or 1/2 ≤ x ≤ y,   (5.1)
for all x, y ∈ I. ⪯c is Mukaidono's ambiguity relation, see [118]. This basic definition of ⪯c for scalars can be extended to the case of DFSes in the obvious way:
Definition 5.7. Let F, F′ be ¬-DFSes. We say that F is consistently less specific than F′, in symbols: F ⪯c F′, if for all semi-fuzzy quantifiers Q : P(E)^n −→ I and all X1, . . . , Xn ∈ P̃(E), F(Q)(X1, . . . , Xn) ⪯c F′(Q)(X1, . . . , Xn).
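The scalar relation (5.1) is easily made executable; the following sketch (an illustration added here, not part of the original text) shows its characteristic behaviour: values on opposite sides of 1/2 are incomparable.

# Sketch of the specificity order (5.1): x ⪯c y iff y ≤ x ≤ 1/2 or 1/2 ≤ x ≤ y.

def less_specific(x, y):
    """True iff x is consistently less specific than (i.e. closer to 1/2 than) y."""
    return (y <= x <= 0.5) or (0.5 <= x <= y)

# 0.4 is less specific than 0.1, but 0.4 and 0.7 lie on opposite sides of 1/2:
assert less_specific(0.4, 0.1)
assert not less_specific(0.4, 0.7) and not less_specific(0.7, 0.4)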
Let us now establish the existence of consistently least specific ∨̃-DFSes. As it turns out, the greatest lower specificity bound of a collection of ∨̃-DFSes can be expressed using the fuzzy median, defined as follows.
Definition 5.8. The fuzzy median med_1/2 : I × I −→ I is defined by
med_1/2(u1, u2) = min(u1, u2) if min(u1, u2) > 1/2; max(u1, u2) if max(u1, u2) < 1/2; 1/2 else.
The plot of med_1/2 is displayed in Fig. 5.1.
Fig. 5.1. The fuzzy median med_1/2
med_1/2 is an associative mean operator [16] and the only stable (i.e. idempotent) associative symmetric sum [150]. The fuzzy median can be generalised to an operator m_1/2 : P(I) −→ I which accepts arbitrary subsets of I as its arguments. Firstly, because it is associative, idempotent and commutative, med_1/2 can be generalized to arbitrary finite sets of arguments (just apply med_1/2 in any order). Noting that for all
finite X = {x1, . . . , xn} ⊆ I, n ≥ 2, it holds that m_1/2 X = med_1/2(min X, max X), the proper definition of m_1/2 X in the cases n = 0 and n = 1 becomes
m_1/2 ∅ = med_1/2(min ∅, max ∅) = med_1/2(1, 0) = 1/2,
m_1/2 {u} = med_1/2(min{u}, max{u}) = med_1/2(u, u) = u.
We can extend m_1/2 to arbitrary subsets X ⊆ I as follows.
Definition 5.9. The generalized fuzzy median m_1/2 : P(I) −→ I is defined by
m_1/2 X = med_1/2(inf X, sup X),
for all X ∈ P(I).
Note 5.10. This definition is obviously compatible with the above considerations on the proper definition of m_1/2 X for finite subsets of I.
Based on the generalized fuzzy median, we can now state the desired theorem on the existence and representation of lower specificity bounds on given collections of ∨̃-DFSes.
Theorem 5.11. Let ∨̃ be an s-norm, and F a non-empty collection of ∨̃-DFSes F ∈ F. Then there exists a greatest lower specificity bound on F, i.e. a ∨̃-DFS Fglb such that Fglb ⪯c F for all F ∈ F (i.e. Fglb is a lower specificity bound), and for all other lower specificity bounds F′, F′ ⪯c Fglb. Fglb is defined by
Fglb(Q)(X1, . . . , Xn) = m_1/2 {F(Q)(X1, . . . , Xn) : F ∈ F},
for all Q : P(E)^n −→ I and X1, . . . , Xn ∈ P̃(E).
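The fuzzy median of Definitions 5.8 and 5.9, which drives the construction of Fglb in Theorem 5.11, is short enough to state as a small Python sketch (added here for illustration; the example values are assumptions).

# Sketch of Definitions 5.8 and 5.9: the fuzzy median and its generalization.

def med_half(u1, u2):
    lo, hi = min(u1, u2), max(u1, u2)
    if lo > 0.5:
        return lo
    if hi < 0.5:
        return hi
    return 0.5

def m_half(xs):
    """Generalized fuzzy median of a finite (possibly empty) set of values."""
    if not xs:                        # inf ∅ = 1 and sup ∅ = 0 on the unit interval
        return med_half(1.0, 0.0)     # = 1/2
    return med_half(min(xs), max(xs))

print(med_half(0.7, 0.9))        # 0.7
print(med_half(0.2, 0.8))        # 0.5
print(m_half({0.6, 0.7, 0.9}))   # 0.6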
In particular, the theorem asserts the existence of a least specific ∨̃-DFS, i.e. whenever ∨̃ is an s-norm such that ∨̃-DFSes exist at all, then there exists a least specific ∨̃-DFS (just apply the above theorem to the collection of all ∨̃-DFSes). As concerns the converse issue of most specific models, i.e. least upper bounds with respect to ⪯c, the following definition of 'specificity consistency' turns out to provide the key concept:
Definition 5.12. Suppose ∨̃ is an s-norm and F is a non-empty collection of ∨̃-DFSes F ∈ F. F is called specificity consistent if for all Q : P(E)^n −→ I and X1, . . . , Xn ∈ P̃(E), either R_Q,X1,...,Xn ⊆ [0, 1/2] or R_Q,X1,...,Xn ⊆ [1/2, 1], where
R_Q,X1,...,Xn = {F(Q)(X1, . . . , Xn) : F ∈ F}.
Based on this definition of specificity consistency, we can express the exact conditions under which a collection of ∨̃-DFSes has a least upper specificity bound, and provide an explicit description of the resulting bound in those cases where it exists.
Theorem 5.13. Suppose ∨̃ is an s-norm and F is a non-empty collection of ∨̃-DFSes F ∈ F.
a. F has upper specificity bounds exactly if F is specificity consistent.
b. If F is specificity consistent, then its least upper specificity bound is the ∨̃-DFS Flub defined by
Flub(Q)(X1, . . . , Xn) = sup R_Q,X1,...,Xn if R_Q,X1,...,Xn ⊆ [1/2, 1], and inf R_Q,X1,...,Xn if R_Q,X1,...,Xn ⊆ [0, 1/2],
where R_Q,X1,...,Xn = {F(Q)(X1, . . . , Xn) : F ∈ F}.
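For a finite collection of models evaluated at a fixed quantifier and fixed arguments, both bounds can be read off directly from the finite result set R, as the following sketch illustrates (added for illustration; 'results' stands for R_Q,X1,...,Xn).

# Sketch of Theorems 5.11 and 5.13 on a finite, non-empty result set.

def greatest_lower_specificity_bound(results):
    lo, hi = min(results), max(results)        # m_1/2 of a finite non-empty set
    if lo > 0.5:
        return lo
    if hi < 0.5:
        return hi
    return 0.5

def specificity_consistent(results):
    return all(r <= 0.5 for r in results) or all(r >= 0.5 for r in results)

def least_upper_specificity_bound(results):
    if not specificity_consistent(results):
        raise ValueError("no upper specificity bound exists")
    return max(results) if all(r >= 0.5 for r in results) else min(results)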
5.3 Models Which Induce the Standard Disjunction
The models of fuzzy quantification can also be grouped according to their induced negation and disjunction:
Definition 5.14. A DFS F such that ¬̃ = F̃(¬) and ∨̃ = F̃(∨) is called a (¬̃, ∨̃)-DFS.
The (¬̃, max)-DFSes in particular comprise those models which induce the standard disjunction and an arbitrary strong negation operator. These are the models for which we will now state a theorem on the interpretation of conjunctions and disjunctions of quantifiers. To this end, let us first introduce the relevant constructions which build conjunctions and disjunctions from given quantifiers. In the Theory of Generalized Quantifiers there are constructions Q ∧ Q′ and Q ∨ Q′ of forming the conjunction (disjunction) of two-valued quantifiers Q, Q′ : P(E)^n −→ 2, see e.g. [7, p. 194], [50, p. 234]. These constructions are easily generalized to (semi-)fuzzy quantifiers.
Definition 5.15. Suppose ∧̃, ∨̃ : I × I −→ I are given (usually the connectives induced by an assumed QFM). For all semi-fuzzy quantifiers Q, Q′ : P(E)^n −→ I, the conjunction Q ∧̃ Q′ : P(E)^n −→ I and the disjunction Q ∨̃ Q′ : P(E)^n −→ I of Q and Q′ are defined by
(Q ∧̃ Q′)(Y1, . . . , Yn) = Q(Y1, . . . , Yn) ∧̃ Q′(Y1, . . . , Yn)
(Q ∨̃ Q′)(Y1, . . . , Yn) = Q(Y1, . . . , Yn) ∨̃ Q′(Y1, . . . , Yn)
for all Y1, . . . , Yn ∈ P(E). For fuzzy quantifiers, Q̃ ∧̃ Q̃′ and Q̃ ∨̃ Q̃′ are defined analogously.
Note 5.16. Conjunctions and disjunctions of (semi-)fuzzy quantifiers of different arities can be formed through cylindrical extensions, i.e. by adding vacuous arguments, see Sect. 4.6.
As to the interpretation of these conjunctions and disjunctions of quantifiers, the following can be shown for (¬̃, max)-DFSes.
Theorem 5.17. Suppose F is a (¬̃, max)-DFS. Then for all Q, Q′ : P(E)^n −→ I,
a. F(Q ∧ Q′) ≤ F(Q) ∧ F(Q′)
b. F(Q ∨ Q′) ≥ F(Q) ∨ F(Q′).
Notes 5.18.
• It should be remarked that the theorem does not state the compatibility of (¬̃, max)-DFSes with conjunction and disjunction; it simply establishes bounds on the quantification results. See Sect. 6.4 for a discussion why the inequalities in the above theorem cannot be replaced with equalities on formal grounds.
• Because the theorem refers to the standard fuzzy conjunction and disjunction, the constructions on quantifiers have been written Q ∧ Q′ and Q ∨ Q′, omitting the 'tilde' notation for fuzzy connectives. Similarly, the standard fuzzy intersection and standard fuzzy union will be written X ∩ Y and X ∪ Y, resp., where µ_X∩Y(e) = min(µX(e), µY(e)) and µ_X∪Y(e) = max(µX(e), µY(e)). The same conventions are stipulated for intersections Q∩ and unions Q∪ of the arguments of a fuzzy quantifier, as well as for duals of semi-fuzzy or fuzzy quantifiers, based on the standard negation.
We have not made any claims yet concerning the interpretation of ↔̃ = F̃(↔) (equivalence) and x̃or = F̃(xor) (exclusive or) in a given DFS F, and indeed, there are currently no results available for the full class of models. In the special case of (¬̃, max)-DFSes, though, these connectives are assigned the following interpretation.
Theorem 5.19. Suppose F is a (¬̃, max)-DFS. Then for all x1, x2 ∈ I,
a. x1 ↔̃ x2 = (x1 ∧ x2) ∨ (¬̃x1 ∧ ¬̃x2)
b. x1 x̃or x2 = (x1 ∧ ¬̃x2) ∨ (¬̃x1 ∧ x2).
5.4 The Standard Models of Fuzzy Quantification The most restricted subclass of models we will consider – and the best-behaved – is that of standard DFSes, i.e. the class of those models which comply
with the standard operations of fuzzy set theory (min, max etc.). Due to the excellent adequacy properties of these models, and their conformance to the established core of fuzzy set theory, it is suggested that this type of DFSes be considered the standard models of fuzzy quantification. Formally, we define standard DFSes as follows.
Definition 5.20. By a standard DFS we denote a (¬, max)-DFS.
Building on the earlier theorems, the fuzzy truth functions induced by a standard DFS can now be summarized as follows.
Theorem 5.21 (Truth functions in standard DFSes). In every standard DFS F,
¬̃x1 = 1 − x1
x1 ∨̃ x2 = max(x1, x2)
x1 ∧̃ x2 = min(x1, x2)
x1 →̃ x2 = max(1 − x1, x2)
x1 ↔̃ x2 = max(min(x1, x2), min(1 − x1, 1 − x2))
x1 x̃or x2 = max(min(x1, 1 − x2), min(1 − x1, x2))
The standard DFSes therefore induce the standard connectives of fuzzy logic, i.e. the propositional fragment of a standard model coincides with the well-known K-standard sequence logic of Dienes [37]. In particular, Kleene's three-valued logic [95, p. 344] is obtained when restricting to the three-valued fragment.1 By Th. 4.62, then, standard DFSes are also known to induce the standard extension principle. As concerns the standard quantifiers, we obtain the familiar choices as well. For example, it is apparent from Th. 4.55 and Th. 4.61 that
F(∃)(X) = sup{µX(e) : e ∈ E}
F(∀)(X) = inf{µX(e) : e ∈ E}
for all X ∈ P̃(E). This is quite satisfactory. The standard models represent a boundary case of DFSes because they induce the smallest fuzzy existential quantifiers, the smallest extension principle, and the largest fuzzy universal quantifiers. This is immediate from Th. 4.61, Th. 4.39 and Th. 4.55, respectively, keeping in mind that min is the largest t-norm and max the smallest s-norm.
1 Some readers might prefer a different choice of the implication operator, namely x1 →̃ x2 = min(1, 1 − x1 + x2). However, it is clear that every QFM with the highly desirable property of preserving Aristotelian squares will also preserve the interdefinability of the propositional connectives, and therefore differ from Łukasiewicz logic.
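The induced truth functions of Theorem 5.21 and the standard quantifiers can be written down directly; the following sketch (an added illustration, with the domain restricted to finite base sets so that sup and inf become max and min) is not part of the original text.

# Sketch of Theorem 5.21: truth functions induced by every standard DFS,
# plus the induced existential and universal quantifiers on a finite domain.

def neg(x):       return 1.0 - x
def disj(x, y):   return max(x, y)
def conj(x, y):   return min(x, y)
def impl(x, y):   return max(1.0 - x, y)                       # Dienes implication
def equiv(x, y):  return max(min(x, y), min(1.0 - x, 1.0 - y))
def xor(x, y):    return max(min(x, 1.0 - y), min(1.0 - x, y))

def exists(X):    return max(X.values(), default=0.0)   # sup of membership grades
def forall(X):    return min(X.values(), default=1.0)   # inf of membership grades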
Finally, we consider the interpretation of two-valued quantifiers in a standard DFS. Interestingly, there is absolutely no freedom concerning the interpretation assigned to these quantifiers, which is fully determined by the requirements imposed on the standard models:
Theorem 5.22. All standard DFSes coincide on two-valued quantifiers. Hence if F, F′ are standard DFSes and Q : P(E)^n −→ 2 is a two-valued quantifier, then F(Q) = F′(Q).
This result establishes a link between the standard models and other fuzzification mechanisms described in the literature. This is because the examples of standard models introduced in [53] are known to coincide with the fuzzification mechanism of Gaines [49] on two-valued quantifiers. We can then conclude from the above theorem that all standard models are indeed compatible with this mechanism. Standard DFSes are more powerful in scope, though, and consistently generalize the Gainesian mechanism to arbitrary semi-fuzzy truth functions and semi-fuzzy quantifiers (see Sect. 10.9 below for a more detailed discussion of the Gainesian fuzzification mechanism). In terms of the classification of quantification types introduced by Kerre and Liu [108, p. 2], the theorem asserts that all Type II quantifications in the sense of the proposed classification are assigned the very same interpretation in all standard models. In other words, the axioms for standard models of fuzzy quantification are strong enough to identify a unique admissible interpretation for arbitrary Type II quantifications.
5.5 Axiomatisation of the Standard Models
This section presents a system of conditions on the considered fuzzification mechanism F which precisely characterise the proposed class of standard models. It should be apparent from the previous remarks which changes to the base axioms (Z-1)–(Z-6) are necessary to effect a reduction to this class. These considerations are summarized in the following adapted system of conditions on F:
Definition 5.23. Let F be a QFM. For all semi-fuzzy quantifiers Q : P(E)^n −→ I, we stipulate the following conditions.
(S-1) Correct generalisation: U(F(Q)) = Q if n ≤ 1
(S-2) Projection quantifiers: F(Q) = π̃e if there exists e ∈ E s.th. Q = πe
(S-3) Dualisation: F(Q□) = F(Q)□, n > 0
(S-4) Internal joins: F(Q∪) = F(Q)∪, n > 0
(S-5) Preservation of monotonicity: If Q is nonincreasing in the n-th arg, then F(Q) is nonincreasing in the n-th arg, n > 0
(S-6) Functional application: F(Q ◦ ×_{i=1}^{n} f̂_i) = F(Q) ◦ ×_{i=1}^{n} ˆf̂_i, where f1, . . . , fn : E′ −→ E, E′ ≠ ∅.
5.6 Chapter Summary Building on the discussion of general properties in the last chapter, the models have now been grouped into natural subclasses and their specific properties and mutual relationships have been investigated. In particular, the goal was to develop those concepts that require a certain homogeneity on part of the models. For example, certain aggregating constructions, but also comparisons between DFSes, are only possible if the considered models show a similar structure, and thus sufficient compatibility. In the chapter, we first grouped the models according to their induced negation into the classes of ¬-DFSes. The proposed model transformation scheme then achieved a bidirectional transla tion between different classes of ¬-DFSes. By instantiating the transformation scheme, a given model of fuzzy quantification can be adapted to any desired choice of induced negation. In this sense, all negation operators are universal to DFSes. Due to this universal translation property, we can now restrict attention to a single representative choice, ¬x = 1 − x, and the corresponding
class of models which induce the standard negation. These ¬-DFSes form a representative class to which all other classes of models can be reduced. Essentially, this means that any new concepts of fuzzy quantification theory need only be defined for ¬-DFSes. By applying the model transformation scheme, these concepts can easily be generalized to other types of models if so desired. Keeping the standard negation fixed, we further grouped the ¬-DFSes by their induced disjunction. We then introduced a number of concepts for the resulting ∨̃-DFSes (i.e. models which induce the s-norm ∨̃ and the standard negation). The relative homogeneity of these models made it possible to define a construction of model aggregation; the precise conditions which constrain the admissible aggregation operators were also stated. This investigation revealed that the ∨̃-DFSes are closed under important operations like stable symmetric sums or convex combinations. A concrete application has also been described, viz. the construction of the greatest lower specificity bound from a given collection of models. The partitioning into ∨̃-DFSes is sufficiently discriminative that the members of a given subclass can now be compared in a meaningful way. Due to the symmetry of DFSes with respect to negation, it is not possible to relate the models by the linear order ≤, however. We therefore resorted to a specificity order ⪯c, which is basically Mukaidono's ambiguity relation. Roughly speaking, the models are compared by the degree to which they commit to the two-valued poles. The investigation of this aspect of the models is of particular importance because one is often interested in obtaining as specific results as possible. Unlike ≤, the proposed relation ⪯c is not a total order, i.e. there are choices of models which are incomparable under ⪯c. Such models will be considered incompatible, because there are situations in which they tend to opposite crisp outcomes. The chapter has presented the necessary formal machinery for discussing such specificity issues. For example, it was shown that by instantiating the model aggregation scheme with the generalized fuzzy median, we can effectively construct the greatest lower specificity bound for every given collection of ∨̃-DFSes. As concerns the converse issue of upper specificity bounds, we required the notion of 'specificity consistency' of a given collection of models. Such a collection is considered specificity consistent if all models are mutually consistent in the sense explained above, i.e. comparable under ⪯c. The criterion was then shown to decide upon the existence of least upper specificity bounds. The precise structure of such upper bounds has also been described. Following this discussion of specificity issues, we focused on two constructions not yet addressed in the previous chapter, viz. conjunctions and disjunctions of quantifiers. Some first results on the interpretation of conjunctions and disjunctions in (¬̃, max)-DFSes have been presented which also cover all standard models. Specifically, we have established upper and lower bounds on the possible outcome of the conjunction and disjunction. In addition, the interpretation of the equivalence and antivalence/xor truth functions was investigated.
It will become obvious in the next chapter why these truth functions pose special difficulties. Finally, we were concerned with those standard models of fuzzy quantification in which the fuzzy connectives, logical quantifiers, and the associated extension principle are all tied to their standard interpretation in fuzzy logic. The chosen definition of these models only makes the extra requirement that the given F induce the standard negation and standard disjunction. However, this condition is sufficient to ensure the standard interpretation of all constructions in the resulting standard DFSes. In particular, this means that the propositional part of each standard model coincides with the K-standard sequence logic of Dienes, and that the three-valued fragment coincides with the well-known logic of Kleene. The latter observation will be the starting point for a construction of concrete models described in Ch. 7. All standard DFSes coincide for two-valued quantifiers. Among other things, this means that all standard models consistently extend the fuzzification mechanism proposed by Gaines [49]. Unlike the Gainesian scheme, however, the standard DFSes are not limited to two-valued propositional functions but also cover arbitrary quantifiers.
6 Special Semantical Properties and Theoretical Limits
6.1 Motivation and Chapter Overview
In the previous chapters, the proposed class of models was checked against a catalog of formal requirements in order to elucidate the properties shared by every DFS. In addition, the models were partitioned into natural subgroups, which allowed the investigation of specificity issues, for example. In this chapter we shall be concerned with properties like continuity (smoothness) which are desirable in many, but not all, cases. Continuity, for example, will certainly be expected of every practical model. From a methodical perspective, however, it is important to retain the discontinuous models because these include many boundary cases of theoretical interest. In the chapter, we will discuss two continuity criteria which capture different aspects of the robustness of a model against small changes in its inputs. Both criteria will be important from an application perspective. In addition, we will pay attention to the way in which a model handles the unspecificity observed in the inputs. Intuitively, one would expect that the model's outputs cannot become more specific when there is less information in the inputs. These concerns regarding the propagation of fuzziness in the model will be formalized in terms of the specificity order ⪯c introduced in the last chapter. Like continuity, the propagation of fuzziness will be considered an important but optional requirement. We will discuss properties which are natural to require but in conflict with other basic desiderata. This situation is familiar in fuzzy set theory, due to the fact that it is impossible to define a Boolean algebra on the set of continuous truth values in [0, 1]. In particular, it is well-known that idempotence/distributivity and the law of contradiction exclude each other in the fuzzy framework. It is not surprising that these conflicts also manifest themselves in fuzzy quantification. The (partially) conflicting criteria that we will consider are the following. First we will be concerned with conjunctions and disjunctions of quantifiers and show that no standard models comply with these constructions on formal grounds. Similar results will be obtained for multiple occurrences of
variables in the argument list of a quantifier. Few models will be compatible with these constructions because every conforming model must fulfill the law of contradiction. We then turn to three properties of special linguistic relevance. First we discuss a generalized form of monotonicity which shows up in so-called convex quantifiers. The preservation of unimodal or bell-shaped monotonicity patterns will turn out pretty difficult for the proposed models. This will force us to restrict preservation of convexity to the most important cases. An important property of NL quantifiers which has attracted much interest in the literature on TGQ is that of conservativity. We will discuss two alternative generalizations of conservativity to fuzzy quantifiers, one of which is universally valid in the models, while the other cannot be required at all. In fact, the latter criterion will conflict with assumptions much weaker than the DFS axioms. Finally we review the construction of argument insertion which is important for modelling adjectival restriction. It will be shown how a compositional treatment of fuzzy argument insertion, and thus of adjectival restriction by fuzzy adjectives, can be achieved in the given framework. Acknowledging the importance of Frege’s compositionality principle to the analysis of natural language, the models which admit this construction will be optimally plausible from the point of view of NL semantics.
6.2 Continuity Conditions
Let us first consider two adequacy criteria concerned with distinct aspects of the 'smoothness' or 'continuity' of a DFS. These conditions are essential for the models to be practical because it is extremely important for applications that the results of a DFS be stable under slight changes in the inputs. These 'changes' can either occur in the fuzzy argument sets (e.g. due to noise), or they can affect the semi-fuzzy quantifier. For example, if a person A has a slightly different interpretation of quantifier Q compared to person B, then we still want them to understand each other, and the quantification results obtained from the two models of the target quantifier should be very similar in such cases. In order to express the robustness criterion with respect to slight changes in the fuzzy arguments, a metric on fuzzy subsets is needed, which serves as a numerical measure of the similarity of the arguments. For all base sets E ≠ ∅ and all n ∈ N, we define the metric d : P̃(E)^n × P̃(E)^n −→ I by
d((X1, . . . , Xn), (X1′, . . . , Xn′)) = max_{i=1,...,n} sup{|µ_Xi(e) − µ_Xi′(e)| : e ∈ E},   (6.1)
for all X1, . . . , Xn, X1′, . . . , Xn′ ∈ P̃(E). Based on this metric, we can now express the desired criterion for continuity in arguments.
Definition 6.1. We say that a QFM F is arg-continuous if and only if F maps all Q : P(E)^n −→ I to continuous fuzzy quantifiers F(Q), i.e. for all X1, . . . , Xn ∈ P̃(E) and ε > 0 there exists δ > 0 such that |F(Q)(X1, . . . , Xn) − F(Q)(X1′, . . . , Xn′)| < ε for all X1′, . . . , Xn′ ∈ P̃(E) with d((X1, . . . , Xn), (X1′, . . . , Xn′)) < δ.
Arg-continuity means that a small change in the membership grades µ_Xi(e) of the argument sets does not change F(Q)(X1, . . . , Xn) drastically; it thus expresses an important robustness condition with respect to noise. The second robustness criterion is intended to capture the idea that slight changes in a semi-fuzzy quantifier should not cause the quantification results to change drastically. To introduce this criterion, we must first define suitable distance measures for semi-fuzzy quantifiers and for fuzzy quantifiers. Hence for all semi-fuzzy quantifiers Q, Q′ : P(E)^n −→ I,
d(Q, Q′) = sup{|Q(Y1, . . . , Yn) − Q′(Y1, . . . , Yn)| : Yi ∈ P(E)},   (6.2)
and similarly for all fuzzy quantifiers Q̃, Q̃′ : P̃(E)^n −→ I,
d(Q̃, Q̃′) = sup{|Q̃(X1, . . . , Xn) − Q̃′(X1, . . . , Xn)| : Xi ∈ P̃(E)}.   (6.3)
Definition 6.2. We say that a QFM F is Q-continuous if and only if for each semi-fuzzy quantifier Q : P(E)^n −→ I and all ε > 0, there exists δ > 0 such that d(F(Q), F(Q′)) < ε whenever Q′ : P(E)^n −→ I satisfies d(Q, Q′) < δ.
Q-continuity captures an important aspect of robustness with respect to imperfect knowledge about the intended choice of a quantifier; i.e. slightly different definitions of Q will produce similar quantification results. Both conditions are crucial to the utility of a DFS and must be possessed by every practical model. They are not part of the DFS axioms because we would like to have models for general t-norms (including the discontinuous variety).
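On a finite base set, the distances (6.1) and (6.2) can be computed directly; the following sketch is an added illustration and, as a simplifying assumption, restricts (6.2) to unary quantifiers so that all crisp subsets can be enumerated.

# Sketch of the distance measures (6.1) and (6.2) on a finite base set E.
from itertools import chain, combinations

def d_args(Xs, Ys, E):
    """Distance (6.1) between two tuples of fuzzy subsets given as membership dicts."""
    return max(max(abs(X.get(e, 0.0) - Y.get(e, 0.0)) for e in E)
               for X, Y in zip(Xs, Ys))

def d_quantifiers(Q1, Q2, E):
    """Distance (6.2) between two unary semi-fuzzy quantifiers on P(E), E finite."""
    subsets = chain.from_iterable(combinations(E, k) for k in range(len(E) + 1))
    return max(abs(Q1(frozenset(Y)) - Q2(frozenset(Y))) for Y in subsets)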
6.3 Propagation of Fuzziness
The next two criteria are concerned with the 'propagation of fuzziness', i.e. the way in which the amount of imprecision in the model's inputs affects the model's outputs. To this end, let us recall the partial order ⪯c ⊆ I × I defined by equation (5.1). We can extend ⪯c to fuzzy sets X ∈ P̃(E), semi-fuzzy quantifiers Q : P(E)^n −→ I and fuzzy quantifiers Q̃ : P̃(E)^n −→ I as follows.
X ⪯c X′ ⟺ µX(e) ⪯c µX′(e) for all e ∈ E;
Q ⪯c Q′ ⟺ Q(Y1, . . . , Yn) ⪯c Q′(Y1, . . . , Yn) for all Y1, . . . , Yn ∈ P(E);
Q̃ ⪯c Q̃′ ⟺ Q̃(X1, . . . , Xn) ⪯c Q̃′(X1, . . . , Xn) for all X1, . . . , Xn ∈ P̃(E).
Intuitively, we expect that the quantification results become less specific whenever the quantifier or the argument sets become less specific: the fuzzier the input, the fuzzier the output. For example, consider a base set E = {Joan, Lucas, Mary} and fuzzy subsets lucky, lucky′ ∈ P̃(E) defined by
lucky = 1/Joan + 0.8/Lucas + 0.2/Mary,
lucky′ = 0.6/Joan + 0.5/Lucas + 0.4/Mary.
Then lucky′ ⪯c lucky, i.e. the interpretation of "lucky" as lucky′ is less committed to the possible crisp decisions. We should therefore expect that F(most)(rich, lucky′) ⪯c F(most)(rich, lucky) as well, i.e. the quantification results based on lucky′ should be less decided than those computed from the fuzzy subset lucky, which bears more specific information.
Definition 6.3. Let a QFM F be given.
a. We say that F propagates fuzziness in arguments if the following property is satisfied for all Q : P(E)^n −→ I and X1, . . . , Xn, X1′, . . . , Xn′ ∈ P̃(E): If Xi′ ⪯c Xi for all i = 1, . . . , n, then F(Q)(X1′, . . . , Xn′) ⪯c F(Q)(X1, . . . , Xn).
b. We say that F propagates fuzziness in quantifiers if F(Q′) ⪯c F(Q) whenever Q′ ⪯c Q.
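The pointwise extension of ⪯c makes the example easy to verify mechanically; the small sketch below (an added illustration; the membership dicts simply transcribe the example above) confirms that lucky′ is less specific than lucky in every element of the domain.

# Sketch of the example: lucky' is pointwise less specific than lucky.

def less_specific(x, y):                       # relation (5.1)
    return (y <= x <= 0.5) or (0.5 <= x <= y)

lucky  = {"Joan": 1.0, "Lucas": 0.8, "Mary": 0.2}
lucky2 = {"Joan": 0.6, "Lucas": 0.5, "Mary": 0.4}   # stands for lucky'

print(all(less_specific(lucky2[e], lucky[e]) for e in lucky))   # True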
Notes 6.4.
• Both conditions are certainly natural to require, and will therefore be considered desirable but optional. A more thorough discussion of propagation of fuzziness and its tradeoffs can be found in Ch. 8, p. 234.
• The intuitive expectation that the output cannot get more detailed when the input gets fuzzier is satisfied by the standard connectives ¬x = 1 − x, ∧ = min and ∨ = max. Thus the standard models of the DFS axioms make good candidates for propagation of fuzziness. We shall see below in Th. 8.38 that both conditions are possible but in fact optional for standard models, and also that the conditions are mutually independent. This indicates that propagation of fuzziness in quantifiers and propagation of fuzziness in arguments open distinct semantical dimensions in the full space of models for fuzzy quantification.
• It should be remarked that the standard conjunction and disjunction are not the only choice of t- and s-norms which propagate fuzziness in their arguments. In fact, the t-norm ∧̃m defined by equation (4.1) apparently also propagates fuzziness in its arguments, a property which transfers to the dual s-norm ∨̃m of ∧̃m under the standard negation. The corresponding class of ∨̃m-DFSes might then witness the existence of nonstandard models
which propagate fuzziness. However, it is currently not known whether models exist at all in this formal class.
6.4 Conjunctions and Disjunctions of Quantifiers
Let us now review the construction introduced in Def. 5.15 which builds a new quantifier from a conjunction or disjunction of given ones. In the last chapter, we presented a result on the interpretation of such conjunctions and disjunctions in (¬̃, max)-DFSes. However, we only established upper and lower bounds rather than an equality. Let us now show that an improved result stating the full compatibility of (¬̃, max)-DFSes with this construction cannot be achieved because the t-norm min induced by these models violates the law of contradiction.
Theorem 6.5. Suppose a DFS F has the property that
F(Q ∧̃ Q′) = F(Q) ∧̃ F(Q′)   (6.4)
for all semi-fuzzy quantifiers Q, Q′ : P(E)^n −→ I, where ∧̃ = F̃(∧). Then
¬ x∧ x=0 for all x ∈ I. A DFS F which is homomorphic with respect to conjunctions (or equivalently, disjunctions) of quantifiers therefore induces a t-norm which respects the law of contradiction. This requirement would exclude many interesting t-norms; in particular, the standard choice F(∧) = min. Consequently we have not required in general that a DFS be compatible with conjunctions/disjunctions of quantifiers.
6.5 Multiple Occurrences of Variables The discussion of the propositional connectives ↔ and xor has been separated from the discussion of the simple cases like conjunction and disjunction in Sect. 4.3. The difficulties in proving properties of the models for these connectives are due to the fact that the definition of ↔ and xor involves multiple occurrences of propositional variables, viz. x1 ↔ x2 = (x1 ∧ x2 ) ∨ (¬x1 ∧ ¬x2 ) = (¬x1 ∨ x2 ) ∧ (x1 ∨ ¬x2 ) x1 xor x2 = (x1 ∧ ¬x2 ) ∨ (¬x1 ∧ x2 ) = (x1 ∨ x2 ) ∧ (¬x1 ∨ ¬x2 ) ,
for all x1, x2 ∈ {0, 1}. A glance at the DFS axioms reveals that there is no axiom which describes this case of multiple occurrences of variables. In order to be able to discuss the issue on the formal level, let us introduce the following construction:
Definition 6.6. Suppose Q : P(E)^m −→ I is a semi-fuzzy quantifier and ξ : {1, . . . , n} −→ {1, . . . , m} is a mapping. By Qξ : P(E)^n −→ I we denote the semi-fuzzy quantifier defined by Qξ(Y1, . . . , Yn) = Q(Y_ξ(1), . . . , Y_ξ(n)), for all Y1, . . . , Yn ∈ P(E). We use an analogous definition for fuzzy quantifiers.
The interesting case is that of a non-injective ξ, which inserts the same variable in two (or more) argument positions of the original quantifier Q.
Theorem 6.7. Let F be a DFS compatible with the duplication of variables, i.e. for Q : P(E)^m −→ I and ξ : {1, . . . , n} −→ {1, . . . , m} for some n ∈ N, it always holds that F(Qξ) = F(Q)ξ. Then the induced conjunction satisfies ¬̃x ∧̃ x = 0 for all x ∈ I.
Again, this will be considered too restrictive because this type of conjunction excludes the standard models. Consequently, it was not required in general that F be homomorphic with respect to the duplication of variables.
6.6 Convex Quantifiers
Apart from the monotonicity type, TGQ has also developed more sophisticated concepts which describe the characteristic shape of a given quantifier. The following definition of convex quantifiers covers unimodal, bell-shaped, trapezoidal and similar cases.
Definition 6.8. Let Q : P(E)^n −→ I be an n-ary semi-fuzzy quantifier, n > 0. Q is said to be convex in its i-th argument, where i ∈ {1, . . . , n}, if
Q(Y1, . . . , Yi−1, Yi, Yi+1, . . . , Yn) ≥ min(Q(Y1, . . . , Yi−1, Yi′, Yi+1, . . . , Yn), Q(Y1, . . . , Yi−1, Yi′′, Yi+1, . . . , Yn))
whenever Y1, . . . , Yn, Yi′, Yi′′ ∈ P(E) and Yi′ ⊆ Yi ⊆ Yi′′. Convexity of a fuzzy quantifier Q̃ : P̃(E)^n −→ I in the i-th argument is defined analogously, based on arguments in P̃(E) and the fuzzy inclusion relation.
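For a quantitative unary quantifier on a finite domain, whose value depends only on the cardinality of its argument, the convexity condition reduces to a statement about the underlying cardinality function; the following sketch (an added illustration; the crisp quantifier "between 2 and 4" and the domain size are assumptions) checks this reduced condition by brute force.

# Sketch of Definition 6.8 for a quantitative unary quantifier: Q(Y) = q(|Y|),
# so convexity reduces to q(k) >= min(q(k1), q(k2)) whenever k1 <= k <= k2.

def q_between_2_and_4(k):                 # illustrative crisp "between 2 and 4"
    return 1.0 if 2 <= k <= 4 else 0.0

def is_convex(q, max_card):
    return all(q(k) >= min(q(k1), q(k2))
               for k1 in range(max_card + 1)
               for k2 in range(k1, max_card + 1)
               for k in range(k1, k2 + 1))

print(is_convex(q_between_2_and_4, 10))   # True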
Notes 6.9.
• To present an example, the absolute quantifier "between 10 and 20" is convex in both arguments, and the proportional quantifier "about 30 percent" is convex in the second argument.
• In TGQ, those quantifiers which we call 'convex' are usually dubbed 'continuous', see e.g. [160] and [50, Def. 16, p. 250]. The terminology has been changed here in order to avoid the possible confusion with 'continuous' in the sense of 'smooth'.
Some well-known properties of two-valued convex quantifiers also carry over to semi-fuzzy and fuzzy quantifiers.
Theorem 6.10 (Conjunctions of convex semi-fuzzy quantifiers). Let Q, Q′ : P(E)^n −→ I be semi-fuzzy quantifiers of arity n > 0 which are convex in the i-th argument, where i ∈ {1, . . . , n}. Then the semi-fuzzy quantifier Q ∧ Q′ : P(E)^n −→ I, defined by
(Q ∧ Q′)(Y1, . . . , Yn) = min(Q(Y1, . . . , Yn), Q′(Y1, . . . , Yn))
for all Y1, . . . , Yn ∈ P(E), is also convex in the i-th argument.
Note 6.11. The theorem states that conjunctions of convex semi-fuzzy quantifiers are convex (provided the standard fuzzy conjunction ∧ = min is chosen). A similar point can be made about fuzzy quantifiers.
Theorem 6.12 (Conjunctions of convex fuzzy quantifiers). Let Q̃, Q̃′ : P̃(E)^n −→ I be fuzzy quantifiers of arity n > 0 which are convex in the i-th argument, where i ∈ {1, . . . , n}. Then the fuzzy quantifier Q̃ ∧ Q̃′ : P̃(E)^n −→ I, defined by
(Q̃ ∧ Q̃′)(X1, . . . , Xn) = min(Q̃(X1, . . . , Xn), Q̃′(X1, . . . , Xn))
for all X1, . . . , Xn ∈ P̃(E), is also convex in the i-th argument.
Let us further observe that every convex semi-fuzzy quantifier can be decomposed into a conjunction of a nonincreasing and a nondecreasing semi-fuzzy quantifier:
Theorem 6.13 (Decomposition of convex semi-fuzzy quantifiers). A semi-fuzzy quantifier Q : P(E)^n −→ I is convex in its i-th argument, i ∈ {1, . . . , n}, if and only if Q is the conjunction of a nondecreasing and a nonincreasing semi-fuzzy quantifier, i.e. if there exist Q+, Q− : P(E)^n −→ I such that Q+ is nondecreasing in its i-th argument, Q− is nonincreasing in its i-th argument, and Q = Q+ ∧ Q−.
Again, a similar point can be made about fuzzy quantifiers.
Theorem 6.14 (Decomposition of convex fuzzy quantifiers). A fuzzy quantifier Q̃ : P̃(E)^n −→ I is convex in its i-th argument, i ∈ {1, . . . , n}, if and only if Q̃ is the conjunction of a nondecreasing and a nonincreasing fuzzy quantifier, i.e. if there exist Q̃+, Q̃− : P̃(E)^n −→ I such that Q̃+ is nondecreasing in its i-th argument, Q̃− is nonincreasing in its i-th argument, and Q̃ = Q̃+ ∧ Q̃−.
We say that a QFM F preserves convexity if the convexity of a quantifier in its arguments is preserved when applying F.
Definition 6.15. A QFM F is said to preserve convexity of n-ary quantifiers, where n ∈ N\{0}, if and only if every n-ary semi-fuzzy quantifier Q : P(E)^n −→ I which is convex in its i-th argument is mapped to a fuzzy quantifier F(Q) which is also convex in its i-th argument. F is said to preserve convexity if F preserves the convexity of n-ary quantifiers for all n > 0.
As we shall now see, preservation of convexity expresses a semantical criterion which in its strong form conflicts with other desirable properties. Let us first notice that contextuality of a QFM excludes preservation of convexity.
Theorem 6.16. Suppose that F is a contextual QFM with the following properties: for every base set E ≠ ∅,
a. the quantifier O : P(E) −→ I, defined by O(Y) = 0 for all Y ∈ P(E), is mapped to the fuzzy quantifier defined by F(O)(X) = 0 for all X ∈ P̃(E);
b. If X ∈ P̃(E) and there exists some e ∈ E such that µX(e) > 0, then F(∃)(X) > 0;
c. If X ∈ P̃(E) and there exists e ∈ E such that µX(e) < 1, then F(∼∀)(X) > 0, where ∼∀ : P(E) −→ 2 is the quantifier defined by (∼∀)(Y) = 1 if Y ≠ E and (∼∀)(Y) = 0 if Y = E.
Then F does not preserve convexity of unary quantifiers Q : P(E) −→ I on finite base sets E ≠ ∅. In particular, no DFS preserves convexity.
This means that even if we restrict to the simple case of unary quantifiers on finite base sets and restrict our semantic postulates to the very elementary assumptions of the theorem, there is still no possible choice of a QFM which will preserve convexity in this restricted sense. Because contextuality is one of the most fundamental conditions, it seems better to weaken the requirements on the preservation of convexity rather than compromising contextuality or the other elementary conditions.
Noticing that the common examples of convex NL quantifiers are typically of the quantitative kind, it is straightforward to weaken the requirement of preserving convexity to quantitative convex quantifiers (see Def. 4.41 and Def. 4.42 for the assumed definitions of quantitativity). Unfortunately, this weakening is insufficient yet and the targeted class of convex quantifiers still too broad, as witnessed by the following theorem.
Theorem 6.17. Suppose that F is a contextual QFM compatible with cylindrical extensions which satisfies the following properties: for all base sets E ≠ ∅,
a. the quantifier O : P(E) −→ I, defined by O(Y) = 0 for all Y ∈ P(E), is mapped to the fuzzy quantifier defined by F(O)(X) = 0 for all X ∈ P̃(E);
b. If X ∈ P̃(E) and there exists some e ∈ E such that µX(e) > 0, then F(∃)(X) > 0;
c. If X ∈ P̃(E) and there exists some e ∈ E such that µX(e′) = 0 for all e′ ∈ E \ {e} and µX(e) < 1, then F(∼∃)(X) > 0, where ∼∃ : P(E) −→ 2 is the quantifier defined by (∼∃)(Y) = 1 if Y = ∅ and (∼∃)(Y) = 0 if Y ≠ ∅.
Then F does not preserve the convexity of quantitative semi-fuzzy quantifiers of arity n > 1 even on finite base sets. In particular, no DFS preserves the convexity of quantitative semi-fuzzy quantifiers of arity n > 1.
There remains a chance that certain models will preserve the convexity of quantitative semi-fuzzy quantifiers of arity n = 1. For simplicity, we shall investigate this preservation property for the case of finite domains only.
Definition 6.18. A QFM F is said to weakly preserve convexity if F preserves the convexity of quantitative one-place quantifiers on finite domains.
In this case, we get positive results on the existence of models that satisfy the specified criterion. An example of a DFS which weakly preserves convexity, the model MCX, will be presented in Def. 7.56 on p. 201. Let us now return to a special case of multi-place quantification. In general, we have negative results concerning the preservation of convexity for quantifiers of arity n > 1, even if these are quantitative (see Th. 6.17). However, it is possible for a DFS to preserve convexity properties in the case of absolute two-place quantifiers, which are of obvious interest to natural language interpretation.
Theorem 6.19. Suppose Q : P(E)^2 −→ I is an absolute quantifier on a finite base set, i.e. there
exists a quantitative unary quantifier Q′ : P(E) −→ I such that Q = Q′∩. If a DFS F has the property of weakly preserving convexity and Q is convex in its arguments, then F(Q) is also convex in its arguments.
Although the proportional type is not covered, the theorem demonstrates that weak preservation of convexity is strong enough to ensure a proper interpretation of many NL quantifiers of interest, e.g. between 10 and 20, about 50 and others.
6.7 Conservativity
One of the pervasive properties of NL quantifiers is conservativity, see e.g. Keenan and Stavi [92, p. 275, eq. (40)], van Benthem [10, p. 445] and [9, p. 452] or Gamut [50, pp. 245-249].
Definition 6.20 (Conservativity). We shall call Q : P(E)^2 −→ I conservative if Q(Y1, Y2) = Q(Y1, Y1 ∩ Y2) for all Y1, Y2 ∈ P(E).
All two-valued or semi-fuzzy quantifiers introduced so far are conservative.1 In particular, all proportional quantifiers are conservative by definition (see Def. 11.51 below). To give an example, if E is a set of persons, married ∈ P(E) is the subset of married persons, and have children ∈ P(E) is the set of persons who have children, then the conservative semi-fuzzy quantifier almost all : P(E)^2 −→ I satisfies
almost all(married, have children) = almost all(married, married ∩ have children),
i.e. the meanings of "Almost all married persons have children" and "Almost all married persons are married persons who have children" coincide. Like having extension, conservativity expresses an aspect of context insensitivity: If an element of the domain is irrelevant to the restriction (first argument) of a two-place quantifier, then it does not affect the quantification result at all. For example, every conservative Q : P(E)^2 −→ I apparently satisfies Q(X1, X2 ∩ X1) = Q(X1, X2 ∪ ¬X1). Hence in the case that e ∉ X1 for a given element e of the base set, it does not matter whether e ∈ X2 or e ∉ X2. A corresponding fuzzy quantifier F(Q) : P̃(E)^2 −→ I should at least possess the following property of weak conservativity.
with the only exception of “only”.
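On a small finite domain, conservativity can be verified by brute force over all crisp argument pairs; the sketch below is an added illustration whose three-element domain and squared-proportion quantifier are assumptions standing in for "almost all".

# Sketch of Definition 6.20: brute-force conservativity check on a small domain.
from itertools import chain, combinations

E = frozenset({"a", "b", "c"})

def powerset(S):
    return [frozenset(c) for c in
            chain.from_iterable(combinations(S, k) for k in range(len(S) + 1))]

def almost_all(Y1, Y2):                    # illustrative proportional quantifier
    return 1.0 if not Y1 else (len(Y1 & Y2) / len(Y1)) ** 2

def is_conservative(Q):
    return all(Q(Y1, Y2) == Q(Y1, Y1 & Y2)
               for Y1 in powerset(E) for Y2 in powerset(E))

print(is_conservative(almost_all))         # True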
Definition 6.21 (Weak conservativity). A fuzzy quantifier Q̃ : P̃(E)^2 −→ I is said to be weakly conservative if
Q̃(X1, X2) = Q̃(X1, spp(X1) ∩ X2),
for all X1, X2 ∈ P̃(E), where spp(X1) is the support of X1, see (4.3).
This definition is sufficiently strong to capture the context insensitivity aspect of conservativity: an element e ∈ E which is irrelevant to the restriction of the quantifier, i.e. µ_X1(e) = 0, has no effect on the quantification result, which is independent of µ_X2(e).
Theorem 6.22. Every DFS F weakly preserves conservativity, i.e. if Q : P(E)^2 −→ I is conservative, then F(Q) is weakly conservative.
Definition 6.23 (Strong conservativity). Let us say that a fuzzy quantifier Q̃ : P̃(E)^2 −→ I is strongly conservative if
Q̃(X1, X2) = Q̃(X1, X1 ∩ X2)
for all X1, X2 ∈ P̃(E).
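The weak conservativity condition of Def. 6.21 can be checked on concrete fuzzy arguments once the support is computed; the sketch below is an added illustration in which the "fuzzy some" quantifier and the sample membership values are assumptions.

# Sketch of Definition 6.21: the support and the weak conservativity condition,
# checked on sample fuzzy arguments given as membership dicts over a common domain.

def support(X):
    return {e for e, mu in X.items() if mu > 0.0}

def weakly_conservative_on(Qf, X1, X2):
    restricted = {e: (mu if e in support(X1) else 0.0) for e, mu in X2.items()}
    return Qf(X1, X2) == Qf(X1, restricted)

def fuzzy_some(X1, X2):                    # illustrative: sup_e min(mu1(e), mu2(e))
    return max((min(X1.get(e, 0.0), X2.get(e, 0.0)) for e in X1), default=0.0)

X1 = {"a": 0.0, "b": 0.7}
X2 = {"a": 0.9, "b": 0.4}
print(weakly_conservative_on(fuzzy_some, X1, X2))    # True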
In addition to the context insensitivity aspect, strong conservativity also reflects the definition of crisp conservativity in terms of intersection with the first argument.
Theorem 6.24. Assume the QFM F satisfies the following conditions: (a) F̃(id2) = idI; (b) ¬̃ is a strong negation operator; (c) ∧̃ is a t-norm; (d) F is compatible with internal meets, see (4.2); (e) F is compatible with dualisation. Then F does not strongly preserve conservativity, i.e. there are conservative Q : P(E)^2 −→ I such that F(Q) is not strongly conservative. In particular, no DFS strongly preserves conservativity.
Hence strong preservation of conservativity cannot be ensured in a fuzzy framework, even under assumptions which are much weaker than the DFS axioms. However, the weak form of preserving conservativity already covers the most important aspect of context insensitivity, and it should be sufficient for most practical concerns that the considered models comply with conservativity in this sense. It is instructive to see how conservativity interacts with the property of having extension, in case a quantifier satisfies both. Then for all crisp Y1, Y2 ∈ P(E), Y1 ≠ ∅,
Q_E(Y1, Y2) = Q_E(Y1, Y1 ∩ Y2) = Q_E′(E′, Y1 ∩ Y2) = Q′_E′(Y1 ∩ Y2),
where E′ = Y1, and Q′_E′ : P(E′) −→ I is the unrestricted form of Q_E′ defined by Q′_E′(Z) = Q_E′(E′, Z) for all Z ∈ P(E′). This example demonstrates that in the crisp case, restricted quantification based on a quantifier which is conservative and has extension can be reduced to unrestricted (unary) quantification on another domain (supplied by the first argument). The example thus explains why one-place, unrestricted quantification is important although natural language quantifiers are typically at least two-place. Of course, such a reduction is not possible in the fuzzy case because a fuzzy subset Y1 ∈ P̃(E) cannot serve as a domain (we have only admitted crisp base sets E).
6.8 Fuzzy Argument Insertion
When discussing crisp argument insertion (see p. 126), it has been remarked that adjectival restriction with fuzzy adjectives cannot be modelled directly: If A ∈ P̃(E) is a fuzzy subset of E, then only F(Q)◁A is defined, but not Q◁A. However, one can ask if F(Q)◁A can be represented by a semi-fuzzy quantifier Q′, i.e. if there is a Q′ such that
F(Q)◁A = F(Q′).   (6.5)
The obvious choice of Q is the following. Definition 6.25. n+1 −→ I a semi-fuzzy quantifier and A ∈ P(E). Let F be a QFM, Q : P(E) n Then Q A : P(E) −→ I is defined by Q A = U(F(Q)A) , i.e. Q A(Y1 , . . . , Yn ) = F(Q)(Y1 , . . . , Yn , A) for all crisp Y1 , . . . , Yn ∈ P(E). Notes 6.26. • Q A is written with the ‘tilde’ notation in order to distinguish it from QA and emphasize that it depend on the chosen QFM F. • As already noted in [53, p. 54], Q = Q A is the only choice of Q which possibly satisfies (6.5), because any Q which satisfies F(Q ) = F(Q)A also satisfies Q = U(F(Q )) = U(F(Q)A) = Q A, which is apparent from Th. 4.1 Unfortunately, Q A is not guaranteed to fulfill (6.5) in a QFM (not even in a DFS). Let us therefore reformulate this equation as a postulate which ensures that Q A convey the intended meaning in a given QFM F.
Definition 6.27. Let F be a QFM. We say that F is compatible with fuzzy argument insertion if for every semi-fuzzy quantifier Q : P(E)ⁿ −→ I of arity n > 0 and every A ∈ P̃(E),

F(Q ◁̃ A) = F(Q)◁A .

The main application of this property in natural language is that of adjectival restriction of a quantifier by means of a fuzzy adjective. For example, suppose E is a set of people, and lucky ∈ P̃(E) is the fuzzy subset of those people in E who are lucky. Further suppose that almost all : P(E)² −→ I is a semi-fuzzy quantifier which models "almost all". Finally, suppose the DFS F is chosen as the model of fuzzy quantification. We can then construct a semi-fuzzy quantifier Q′ which restricts the second argument of almost all by the fuzzy adjective lucky, using the induced intersection ∩̃ and fuzzy argument insertion. If F is compatible with fuzzy argument insertion, then the semi-fuzzy quantifier Q′ is guaranteed to adequately model the composite expression "Almost all X1's are lucky X2's", because

F(Q′)(X1, X2) = F(almost all)(X1, X2 ∩̃ lucky)

for all fuzzy arguments X1, X2 ∈ P̃(E), which (relative to F) is the proper expression for interpreting "Almost all X1's are lucky X2's" in the fuzzy case. Compatibility with fuzzy argument insertion is a very restrictive adequacy condition. We shall present the unique standard DFS which fulfills this condition on p. 201.
6.9 Chapter Summary This chapter completes the investigations into the semantical properties of the proposed models of fuzzy quantification, which ultimately decide on the suitability of the proposal for linguistic modelling. While Chap. 4 was devoted to properties shared by every DFS and Chap.5 to specific constructions and concepts defined on natural subgroups of the models, the present chapter finally discussed those properties which cannot be assumed of all prototypical models. •
Some of the properties are justified from the perspective of applications, but they exclude important models of theoretical interest. Requiring that all models be continuous in arguments, for example, will exclude the boundary case of those models which induce the smallest t-norm, because the drastic product is known to be discontinuous. Hence there is good reason for not including this condition into the list of absolute requirements which must be obeyed by all admissible models. Nevertheless, it is necessary to formalize the continuity requirement in order to permit the selection of robust models suited for practical applications. • The second type of requirements comprises those problematic cases which either result in a partial inconsistency with some of the intended models
(e.g. with the standard DFSes), or even in a total inconsistency. The strategy then suggests itself of weakening the conflicting condition and reducing it to a core requirement still compatible with the basic axiom system. In the chapter, we first considered the practical concern of robustness or stability against small changes in the inputs. Every practical model should absorb slight variations due to noise, quantization errors etc., which are typical of real-world applications. There are two kinds of input to the fuzzification mechanism, viz. the fuzzy quantifier and its arguments. Consequently, two facets of robustness must be distinguished, viz (a) robustness against variation in the quantifier, and (b), robustness against variation in the arguments. In both cases, it is necessary for robust system behaviour that the model at least satisfy the minimal requirement of continuity (i.e. small changes in the inputs will result in small changes of the quantification results). We therefore need models which are •
Q-continuous, i.e. continuous in the base quantifiers, in order to ensure the robustness of the quantification results under slight changes in the quantifier; and • arg-continuous, i.e. the quantification results must be stable under slight changes in the membership functions of the quantifier’s arguments. The variation observed can either be non-systematic (random or resulting from imprecision) or systematic, in which case it reflects the varying interpretations of quantifiers and NL concepts across language users. In order to absorb these sources of variation in the inputs, it is essential for every practical model to obey these continuity conditions. Nonetheless, the robustness criteria should only be considered optional. As explained above, this will keep a number of interesting but discontinuous examples in the full class of models, like upper and lower specificity bounds. Following the discussion of these robustness issues, we then investigated the compatibility of the models with the specificity order c . In general, it seems reasonable to assume that whenever either the quantifier or the argument sets become fuzzier, the computed quantification results should become fuzzier as well. In order to express this condition in terms of the specificity order, the basic definition of c as a relation on scalars was extended to fuzzy arguments as well as semi-fuzzy and fuzzy quantifiers in the apparent way. This made it possible to state precisely what it means for a QFM to propagate fuzziness in its inputs, i.e. in the quantifiers and their arguments. The standard choice of fuzzy connectives min, max and 1 − x are known to be compatible with c . Thus the standard DFSes possibly include models which propagate fuzziness (although other choices beyond standard models are also conceivable). As we shall see in the next chapter, there are indeed many standard DFSes which share this property. Not all standard models propagate fuzziness, though, as shown by the counterexamples described in Chap. 8. This chapter will also demonstrate that other demands on the models can outweigh the desideratum of propagating fuzziness. Thus, the propagation of
fuzziness in arguments and quantifiers should not be considered for arbitrary models. Next the compatibility requirement with respect to conjunctions and disjunctions of quantifiers was reconsidered. The earlier Th. 5.17 only established lower and upper bounds on the resulting quantifications. In the present chapter, it was shown that the compliance of the models with conjunctions and disjunctions cannot be assumed in the general case: this is only possible if the induced t-norm satisfies the law of contradiction. The bounded product x1 ∧̃b x2 = max(0, x1 + x2 − 1) is a possible example, but the standard conjunction x1 ∧ x2 = min(x1, x2) is not. Thus, in order to retain the standard models, the compositionality with conjunction and disjunction must be considered an optional requirement on the models. Similar considerations apply in the case of another construction, which builds a new quantifier from a given one by unifying variables in different argument positions. This construction was motivated by the equivalence (biimplication) and antivalence (xor) connectives, whose definition involves multiple occurrences of variables. In order to describe such duplications of variables, we proposed a formal definition which generalizes the known constructions of argument permutations and cylindrical extensions. Apart from these unproblematic cases, the new construction also covers the desired multiple occurrences of variables. Based on this formalization, we were able to show that every model compatible with multiple occurrences of variables induces a t-norm which fulfills the law of contradiction. Again in order to retain the standard models, the compatibility with multiple occurrences of variables will be considered an optional requirement on the models. Having considered these rather abstract constructions, the models were evaluated for their compliance with three linguistically motivated criteria. First of all, there is a more general form of monotonicity condition which no longer requires that the dependency of the quantifier on the argument of interest be uniformly increasing or decreasing (as in the unproblematic case of monotonicity properties), but also allows more general dependencies which are bell-shaped, trapezoidal etc. These quantifiers are called continuous in TGQ, but we are using a different terminology here and call these quantifiers convex in the considered argument, in order to avoid a possible confusion with continuity in its smoothness sense. A special case is that of unimodal quantifiers [185, p. 131]; these have a unique peak of full membership. For example, "about ten" can be modelled as a unimodal quantifier. In the chapter, we first discussed the convexity property for semi-fuzzy and fuzzy quantifiers. Abstracting from well-known properties of two-valued convex quantifiers, we established the closure property of convex quantifiers under the standard conjunction. It was further shown that every convex quantifier can be decomposed into a conjunction of a nonincreasing and a nondecreasing quantifier. Both results apply to semi-fuzzy as well as fuzzy quantifiers. Next we considered the intuitive expectation that the convex shape of quantifiers be preserved when applying a QFM. This investigation revealed that the preservation of
convexity is difficult to achieve for models of fuzzy quantification. It cannot be guaranteed in the general case, even if a weaker system of basic axioms is assumed. In practice, it will be acceptable if a model of fuzzy quantification deforms the convex shape of exotic quantifiers not expressed in NL, as long as it behaves as expected for the cases of linguistic relevance. The requirement was therefore weakened to convex quantifiers of the quantitative variety. In addition, a restriction to unary quantifiers or convex absolute quantifiers like "about 50" was necessary to achieve compatibility with the basic framework. The resulting criterion of weakly preserving convexity is still very demanding. The model MCX presented in Chap. 7 demonstrates that the requirement can indeed be fulfilled, but this property is confined to a small fragment of the models. After this discussion of convexity properties, we turned to the definition and analysis of conservative quantifiers. Conservativity expresses an important aspect of context insensitivity, which is very common for simple quantifiers. Basically, a quantifier is considered conservative if only the restriction decides on its interpretation, and not the chosen domain. For example, the truth of "All men are mortal" depends on the set of considered men only. The remaining elements of the base set, which fall outside the considered subdomain of 'men', have no effect at all on the quantification results. The original notion of conservativity familiar from TGQ is readily extended to a definition for semi-fuzzy quantifiers, see Def. 6.20. The proper way of generalizing conservativity to fuzzy quantifiers is less obvious, though. In the chapter, we considered two alternative definitions which extend the original concept to the case of fuzzy arguments.
a. The proposed definition of weak conservativity merely targets the domain insensitivity aspect of conservativity. The quantification results of weakly conservative quantifiers should thus be independent of those elements which are totally irrelevant to the restriction of the quantifier. This conception of conservativity combines easily with the proposed framework, i.e. all models will map conservative quantifiers to fuzzy quantifiers which are at least conservative in this weak sense.
b. The alternative proposal of strong conservativity is obtained when the crisp intersection in the original definition of conservativity is replaced with the induced fuzzy intersection ∩̃, which will be used to directly combine the fuzzy arguments. Apart from the context insensitivity aspect, the resulting strong form of conservativity also parallels the definition of crisp conservativity in terms of an intersection with the first argument. There is negative evidence concerning the compliance of this alternative notion with the models of fuzzy quantification. As witnessed by Th. 6.24, the requirement of strong conservativity conflicts with fundamental assumptions on plausible choices of models, so we cannot assume it in the fuzzy framework. The weak form of conservativity will therefore be considered more appropriate when working with fuzzy arguments.
Finally we discussed the construction of fuzzy argument insertion. In order to motivate this criterion, we briefly reviewed adjectival restriction based on crisp adjectives like "married", which can be modelled by crisp argument insertion. From a linguistic point of view, it is desirable to achieve a compositional interpretation of adjectival restriction in accordance with Frege's principle of compositionality. Thus the interpretation of a composite expression should be a function of the meanings of its subexpressions. Applied to the current situation of adjectival restriction, this means that the target quantifier Q̃′ which results from adjectival restriction of Q by A should be representable in terms of a semi-fuzzy quantifier Q′, i.e. Q̃′ = F(Q′), and Q′ should show a functional dependency on the original representation Q and the interpretation of the adjective A, i.e. Q′ = c(Q, A), where c is the construction for adjectival restriction. In the crisp case, these goals were accomplished by a construction c(Q, A) built from argument transpositions and intersection with A; this restricts the i-th argument of Q to the crisp denotation A of the given adjective. In the important case of fuzzy adjectives, the strategy suggests itself to decompose fuzzy adjectival restriction in an analogous way, thus resolving it into an intersection, argument transpositions, and the crucial step of fuzzy argument insertion. However, fuzzy arguments are not admissible for semi-fuzzy quantifiers. Consequently a different approach is necessary to model the target construction of fuzzy argument insertion. To this end, we observed from Th. 4.1 that there is at most one conceivable choice of semi-fuzzy quantifier Q′ = Q ◁̃ A which possibly satisfies F(Q′) = F(Q)◁A, viz Q ◁̃ A = U(F(Q)◁A). However, it is not a matter of course that this choice of the semi-fuzzy quantifier Q′ = Q ◁̃ A will properly represent F(Q)◁A in the models. By explicitly requiring that the equality F(Q ◁̃ A) = F(Q)◁A be valid, we obtain the criterion of compatibility with fuzzy argument insertion. A QFM compatible with this construction permits an explicit representation of the intermediate quantifiers which result from the insertion of a fuzzy argument into one of the argument slots, because it offers corresponding base descriptions Q′ = Q ◁̃ A on the level of semi-fuzzy quantifiers. The requirement of compatibility with fuzzy argument insertion is very restrictive. As shown in the next chapter, the DFS MCX is in fact the only standard model with this property.
7 Models Defined in Terms of Three-Valued Cuts and Fuzzy-Median Aggregation
7.1 Motivation and Chapter Overview

By stating the DFS axioms, we have made explicit the linguistic expectations on models of fuzzy quantification. In the following chapters, we will turn attention from the discussion of semantic properties to the presentation of concrete models which instantiate the axiomatic framework. The existence of such concrete examples will prove the consistency of the proposed axioms. In addition, the constructions on which these examples are based and their internal structure might provide new insights into fuzzy quantification. Most importantly, the development of practical models and corresponding algorithms is absolutely mandatory to make the theory useful for applications. The following chapters are devoted to the investigation of increasingly broader classes of models, each of which will have its special characteristics. These classes will be defined in terms of parametrized mechanisms which build all models in the class from a generic constructive principle. For each considered construction, we will start by introducing the parametrized base mechanism, which spans a 'raw' class of totally unrestricted fuzzification mechanisms. By imposing conditions on the allowable choices of parameters, the unrestricted class of QFMs will then be pruned to the subclass of well-behaved models definable from the base mechanism. In order to implement this basic strategy, we need a suitable starting point from which to abstract the required constructive principle. As we shall see below, some standard methods of fuzzy logic, and in particular α-cuts and the resolution principle, are not suited to define models of the theory. The problems with using α-cuts are caused by their lack of symmetry with respect to complementation. This observation will point attention to richer descriptions offered by three-valued sets. The definition of a three-valued cutting mechanism will overcome the asymmetry problem. In order to ensure a consistent processing of the resulting three-valued sets, we present a constructive principle for Kleene's three-valued logic, which is known to underlie all standard models. The basic mechanism will then be fitted to the case of
semi-fuzzy quantifiers; this will be accomplished by fuzzy median aggregation. In this way, we obtain a fuzzy quantification result Qγ(X1, . . . , Xn) ∈ I for each choice of the cutting parameter γ. The results spread over the cut levels must then be aggregated by applying an aggregation operator B. We thus obtain a construction of fuzzification mechanisms in terms of three-valued cuts and fuzzy-median aggregation. By varying the aggregation operator B, we obtain the class of MB-QFMs. Next we will characterize the (well-behaved) MB-DFSes in terms of necessary and sufficient conditions on the aggregation mapping B. Some prototypical examples (including the most important cases) will also be presented. We will then assess the characteristics of the individual models. It will be shown that some of the considered properties are shared by all MB-DFSes. In addition, the extreme cases of models in terms of specificity will be identified. Finally, there will be a thorough discussion of a special MB-model with unique semantic properties. (The proofs of the theorems cited in this chapter are presented in [55]; the main results have been published in [66].)
7.2 Alpha-Cuts are not Suited to Define Models

In the proposed framework, an instance of fuzzy quantification can be described in terms of the following data: (a) the given semi-fuzzy quantifier Q : P(E)ⁿ −→ I, and (b) a choice of fuzzy arguments X1, . . . , Xn ∈ P̃(E). The problem to be solved is that of making the quantifier applicable to the fuzzy arguments, although it is (directly) defined for crisp arguments only. One idea that comes to mind is that of resorting to the familiar concept of α-cuts (or strict α-cuts), which resolve the fuzzy arguments into layers of crisp arguments organized by the cutting parameter α. For each considered layer, the resulting crisp arguments can then be supplied to the semi-fuzzy quantifier, which determines a corresponding quantification result for each cut level. In order to formalize this basic approach, let me first recall the usual definition of α-cuts and strict α-cuts:

Definition 7.1. Let E be a given set, X ∈ P̃(E) a fuzzy subset of E and α ∈ I. By X≥α ∈ P(E) we denote the α-cut

X≥α = {e ∈ E : µX(e) ≥ α} .

Definition 7.2. Let X ∈ P̃(E) be given and α ∈ I. By X>α ∈ P(E) we denote the strict α-cut

X>α = {e ∈ E : µX(e) > α} .

The effect of α-cutting relative to µX(e) is displayed in Fig. 7.1.
Fig. 7.1. α-cut as a function of α and µX (v)
α-cuts and strict α-cuts are linked by negation, as witnessed by the following equalities. For all fuzzy subsets X ∈ P(E) and α ∈ I, (¬X)≥α = ¬(X>¬α )
(7.1)
(¬X)>α = ¬(X≥¬α ) .
(7.2)
It is apparent from these equalities that neither α-cuts nor strict α-cuts are compatible with complementation. We can now seize the above suggestion of evaluating the quantifier Q for the crisp arguments obtained at each cut level α ∈ I. The quantification results f(α) = Q((X1)≥α, . . . , (Xn)≥α) obtained at the cut levels α ∈ I must then be combined into a single scalar result. This process is delegated to a subsequent aggregation step. Probably the first choice of aggregation operation which comes to mind is that of integration. Let us hence consider the following attempt to define a QFM based on α-cuts and integration:

A(Q)(X1, . . . , Xn) = ∫₀¹ Q((X1)≥α, . . . , (Xn)≥α) dα ,

for all semi-fuzzy quantifiers Q : P(E)ⁿ −→ I and X1, . . . , Xn ∈ P̃(E). We should first notice that this is really only an 'attempt' to define a QFM, because the above integral may not exist (I have not made any assumptions on the 'well-behavedness' of Q, like measurability). Hence A is only partially
defined. (To see this, consider a non-measurable function f : I −→ I and let E = I. Define Q : P(I) −→ I by Q(X) = f(inf X) for all X ∈ P(I). Then A is undefined on the fuzzy subset X* ∈ P̃(I) defined by µX*(e) = e for all e ∈ I, because Q(X*≥α) = f(α).) However, even if we allow an arbitrary completion of A into a fully defined fuzzification mechanism, the resulting QFM A still fails to be a DFS. This is because A is subject to an additional flaw, apart from its 'definition gaps'. It is easily observed that A does not comply with the construction of internal complementation and thus violates a semantical postulate which is known from Th. 4.19 to be mandatory for all intended models. From a broader perspective, the failure of the above attempt to define a model by integration over α-cuts is just an instance of a more general fact, and cannot be attributed to the choice of the integral which served as the aggregation operator. By contrast, it can be shown that it is impossible in general to define models of the DFS axioms from layers of α-cuts. This is because there exists no choice of aggregation operator which makes a DFS of the proposed base construction:

Theorem 7.3 (Nonexistence of alpha-cut based DFSes). Define C : (Q : P(E)ⁿ −→ I) ↦ C(Q) : P̃(E)ⁿ −→ P̃(I) by

µC(Q)(X1,...,Xn)(α) = Q((X1)≥α, . . . , (Xn)≥α)

for α ∈ I. There is no S : P̃(I) −→ I s.th. F = S ◦ C satisfies the following conditions:
• F(id2) = idI, i.e. the identity truth function is mapped to its proper fuzzy analogue;
• ¬̃ = F(¬) is a strong negation operator;
• F is compatible with internal complementation.
In particular, there exists no S : P̃(I) −→ I such that F = S ◦ C is a DFS.

Note 7.4. The second part of the theorem is again apparent from Th. 4.19.

This failure of the α-cut based approach to define plausible models of fuzzy quantification is caused by the lack of symmetry of α-cuts with respect to complementation, see (7.1) and (7.2). Due to this negative evidence concerning the potential utility of α-cuts to decompose fuzzy quantification into several layers of quantification involving crisp arguments, I will now consider a different but related mechanism. The well-known resolution principle has successfully been applied in other contexts for the transfer of crisp concepts into corresponding concepts for fuzzy sets. Adapted to the present situation of semi-fuzzy quantifiers supplied with fuzzy arguments, the resolution principle becomes:

Definition 7.5 (Resolution principle). Let f : P(E) −→ V be a mapping. By R(f) we denote the mapping
R(f) : P̃(E) −→ P̃(V) which to each X ∈ P̃(E) associates the fuzzy set R(f)(X) defined by

µR(f)(X)(v) = sup{α ∈ I : f(X≥α) = v}

for v ∈ V. Then R(f) is said to result from f by applying the resolution principle.

Note 7.6. I have stated the resolution principle for one-place functions only. Because P(E)ⁿ ≅ P(E × n) and P̃(E)ⁿ ≅ P̃(E × n), this generalises to n-place functions f : P(E)ⁿ −→ V in the obvious way, i.e. by component-wise α-cutting.
By applying the resolution principle, then, a semi-fuzzy quantifier Q : P(E)ⁿ −→ I is mapped to the mapping R(Q) : P̃(E)ⁿ −→ P̃(I) which to each choice of X1, . . . , Xn ∈ P̃(E) assigns the fuzzy subset R(Q)(X1, . . . , Xn) ∈ P̃(I) defined by

µR(Q)(X1,...,Xn)(v) = sup{α ∈ I : Q((X1)≥α, . . . , (Xn)≥α) = v}

for all v ∈ I. Upon closer inspection, the resolution principle comes out as a special but important case of the general α-cut based approach discussed first, which has already been shown to be doomed to failure. As a corollary to the above theorem on α-cut based attempts to define DFSes, we hence obtain:

Theorem 7.7 (Nonexistence of models based on the resolution principle). It is not possible to define models of the theory based on the resolution principle, i.e. no S : P̃(I) −→ I exists which makes F = S ◦ R a DFS.

By the α-cut representation of a choice of fuzzy arguments (X1, . . . , Xn) ∈ P̃(E)ⁿ, I denote the fuzzy set RES(X1, . . . , Xn) ∈ P̃(P(E)ⁿ) defined by

µRES(X1,...,Xn)((Y1, . . . , Yn)) = sup{α ∈ I : (X1)≥α = Y1, . . . , (Xn)≥α = Yn} ,

for all Y1, . . . , Yn ∈ P(E). Of course, DFSes can be defined based on this α-cut representation of X1, . . . , Xn (at least if DFSes exist at all, which will be proven below). But this is a trivial statement, since (X1, . . . , Xn) can be recovered from its α-cut representation.
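The asymmetry of α-cuts under complementation, expressed by (7.1) and (7.2), is easy to observe numerically. A minimal sketch (illustrative only; the membership values and the choice α = 0.5 are arbitrary assumptions) confirms (7.1) and shows that ¬(X≥α) differs from (¬X)≥α in general:

```python
E = ["a", "b", "c"]
mu = {"a": 0.2, "b": 0.5, "c": 0.8}        # membership function of X (arbitrary example)

def cut(mu, alpha):                        # alpha-cut X>=alpha
    return {e for e in mu if mu[e] >= alpha}

def strict_cut(mu, alpha):                 # strict alpha-cut X>alpha
    return {e for e in mu if mu[e] > alpha}

neg_mu = {e: 1.0 - m for e, m in mu.items()}   # standard negation of X

alpha = 0.5
lhs = cut(neg_mu, alpha)                        # (¬X)≥α
rhs = set(E) - strict_cut(mu, 1.0 - alpha)      # ¬(X>¬α), cf. (7.1)
print(lhs == rhs)                               # True: equality (7.1) holds
print(lhs == set(E) - cut(mu, alpha))           # False in general: alpha-cuts do not
                                                # commute with complementation
```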
7.3 From Two-Valued Logic to Kleene’s Logic, and Beyond As witnessed by Th. 5.21, the {0, 12 , 1}-valued portion of the logic induced by a standard model coincides with Kleene’s three-valued logic (see [95, p. 344];
also described e.g. in [97, p. 29]). This observation provides a suitable starting point for defining models of the theory. Kleene's logic can be constructed from two-valued logic by the following mechanism of generalizing a propositional function f : 2ⁿ −→ 2 to the three-valued f̆ : {0, ½, 1}ⁿ −→ {0, ½, 1}. Suppose that x1, . . . , xn ∈ {0, ½, 1} are given. Associate to each xi a set T(xi) ∈ P(2) as follows:

T(xi) = {0}      if xi = 0
T(xi) = {0, 1}   if xi = ½
T(xi) = {1}      if xi = 1

The set T(xi) collects the alternative interpretations of xi as two-valued truth values: 0 and 1 are non-ambiguous and correspond to the unique interpretations {0} and {1}, respectively, while ½ represents absence of knowledge, and is hence ambiguous between all possible choices in {0, 1}. In terms of these sets of alternatives, we then define f̆ : {0, ½, 1}ⁿ −→ {0, ½, 1} by

f̆(x1, . . . , xn) = 1   if f(y1, . . . , yn) = 1 for all yi ∈ T(xi), i = 1, . . . , n
f̆(x1, . . . , xn) = 0   if f(y1, . . . , yn) = 0 for all yi ∈ T(xi), i = 1, . . . , n
f̆(x1, . . . , xn) = ½   otherwise

for all x1, . . . , xn ∈ {0, ½, 1}. In this definition, the 'indeterminate' value ½ is treated by considering both alternatives 0, 1. The truth values 0 and 1 do not induce any indeterminacy. Let us now recall the definition of the generalized fuzzy median, see Def. 5.9. By applying m½ : P(I) −→ I to the supervaluation results, we can express the above definition more compactly, viz

f̆(x1, . . . , xn) = m½ {f(y1, . . . , yn) : yi ∈ T(xi), i = 1, . . . , n} .
(7.3)
This median-based definition has the advantage that it can also be applied to functions f : 2ⁿ −→ I, which it maps to f̆ : {0, ½, 1}ⁿ −→ I, and thus takes us beyond Kleene's logic.
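A small sketch of this construction, under the assumption that the generalized fuzzy median m½ of a finite set of scalars returns their minimum if all of them exceed ½, their maximum if all of them stay below ½, and ½ otherwise (cf. Def. 5.9); the propositional function and test values are illustrative:

```python
from itertools import product

def m_half(values):
    # generalized fuzzy median of a finite collection of scalars in [0, 1]
    lo, hi = min(values), max(values)
    return lo if lo > 0.5 else hi if hi < 0.5 else 0.5

def T(x):
    # ambiguity range of a three-valued truth value (0, 1/2 or 1)
    return {0.0: [0], 0.5: [0, 1], 1.0: [1]}[x]

def kleene_extend(f):
    # lift f : {0,1}^n -> [0,1] to three-valued arguments, as in eq. (7.3)
    def f_breve(*xs):
        return m_half([f(*ys) for ys in product(*(T(x) for x in xs))])
    return f_breve

conj = kleene_extend(lambda a, b: float(a and b))
print(conj(1.0, 0.5))   # 0.5: 'true and unknown' is unknown
print(conj(0.0, 0.5))   # 0.0: 'false and unknown' is false
```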
7.4 Adaptation of the Mechanism to Quantifiers In this book, we are more interested in quantifiers rather than propositional functions. Hence let us adapt the above mechanism to the case of three-valued quantifiers, which are then translated into quantifiers that accept three-valued arguments. To this end, I must first introduce some concepts related to threevalued subsets, which are necessary to express the definition of the target mechanism. Three-valued subsets of a given base set E will be modelled in a way analogous to fuzzy subsets, i.e. we shall assume that each three-valued subset X of E
is uniquely characterised by its membership function νX : E −→ {0, 12 , 1} (I use the symbol νX rather than µX in order make unambiguous that the membership function is three-valued). The collection of three-valued subsets of a given E will be denoted P˘ (E); we shall assume that P˘ (E) is a set, and E clearly we have P˘ (E) ∼ = {0, 12 , 1} . As in the case of fuzzy subsets, it might be convenient to identify three-valued subsets and their membership functions, E i.e. to stipulate P˘ (E) = {0, 12 , 1} . However, I will again not enforce this identification. I will assume that each crisp subset X ∈ P(E) can be viewed as a threevalued subset of E, and that each three-valued subset X of E can be viewed as a fuzzy subset of E. Hence at times, the same symbol X will be used to denote a particular crisp subset of E, as well as the corresponding threevalued and fuzzy subsets. If one chooses to identify membership functions and three-valued/fuzzy subsets, then the crisp subset X is distinct from its representation as a three-valued or fuzzy subset, which corresponds to its characteristic function χX . In this case, it is understood that the appropriate transformations (i.e., using characteristic function) are performed whenever X is substituted for a three-valued or fuzzy subset. Let us now recall the construction of Kleene’s logic presented in the previous section, which was based on the representation of three-valued x ∈ {0, 12 , 1} by corresponding ambiguity ranges T (x) ∈ P(2). This basic construction is easily adapted to three-valued subsets. In order to express the ‘ambiguity’ or ‘indeterminacy’ which originates from occurrences of the membership grade 12 , a closed range of crisp subsets of E will then be associated with each three-valued subset X ∈ P˘ (E). This range, denoted T (X), is intended to capture the alternative interpretations of the three-valued set X in terms of compatible two-valued sets. Definition 7.8. Suppose E is some set and X ∈ P˘ (E) is a three-valued subset of E. We associate with X crisp subsets X min , X max ∈ P(E), defined by X min = {e ∈ E : νX (e) = 1} X max = {e ∈ E : νX (e) ≥ 12 } . Based on X min and X max , we associate with X the range of crisp sets T (X) ⊆ P(E) defined by T (X) = {Y ∈ P(E) : X min ⊆ Y ⊆ X max } . Notes 7.9. •
If e ∈ E is such that νX(e) = 1, then all Y ∈ T(X) contain e; if e ∈ E is such that νX(e) = 0, then no Y ∈ T(X) contains e; and for those e ∈ E with νX(e) = ½, T(X) contains all combinations of the alternatives e ∈ Y, e ∉ Y.
•
The pairs (Xmin, Xmax), Xmin ⊆ Xmax, form a representation of the three-valued sets X because every X can be recovered from (Xmin, Xmax) in the apparent way, viz

νX(e) = 1   if e ∈ Xmin
νX(e) = 0   if e ∉ Xmax
νX(e) = ½   otherwise

for all e ∈ E.
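A minimal sketch of Def. 7.8 for a finite base set (the three-valued membership values are an illustrative assumption): it computes Xmin and Xmax and enumerates the ambiguity range T(X).

```python
from itertools import chain, combinations

E = ["a", "b", "c"]
nu = {"a": 1.0, "b": 0.5, "c": 0.0}      # three-valued membership function of X

X_min = {e for e in E if nu[e] == 1.0}
X_max = {e for e in E if nu[e] >= 0.5}

def subsets(S):
    S = list(S)
    return [set(c) for c in chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

# ambiguity range T(X): all crisp sets Y with X_min ⊆ Y ⊆ X_max
T_X = [X_min | D for D in subsets(X_max - X_min)]
print(X_min, X_max)      # {'a'} {'a', 'b'}
print(T_X)               # [{'a'}, {'a', 'b'}]
```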
Based on this concept, which resolves each three-valued subset into its ambiguity range of compatible crisp subsets, I can now modify the above mechanism for extending two-valued propositional functions to three-valued arguments, and fit its core idea to the case of three-valued quantifiers. It is then convenient to abbreviate

T(X1, . . . , Xn) = T(X1) × · · · × T(Xn) .

I can then define

Q̆(X1, . . . , Xn) = 1   if Q(Y1, . . . , Yn) = 1 for all (Y1, . . . , Yn) ∈ T(X1, . . . , Xn)
Q̆(X1, . . . , Xn) = 0   if Q(Y1, . . . , Yn) = 0 for all (Y1, . . . , Yn) ∈ T(X1, . . . , Xn)
Q̆(X1, . . . , Xn) = ½   otherwise

for all X1, . . . , Xn ∈ P̆(E). This construction extends the three-valued quantifier Q : P(E)ⁿ −→ {0, ½, 1} to a quantifier Q̆ : P̆(E)ⁿ −→ {0, ½, 1}, which now accepts three-valued arguments. Again, we can profit from the generalized fuzzy median m½ and rephrase this according to the same idea which underlies equation (7.3). Stated this way, the mechanism can be applied to arbitrary semi-fuzzy quantifiers as well, which produce continuous rather than three-valued outputs. A given semi-fuzzy quantifier Q : P(E)ⁿ −→ I is then mapped to a quantifier Q̆ : P̆(E)ⁿ −→ I, which generalizes its base quantifier Q to the case of three-valued arguments:

Definition 7.10. To all semi-fuzzy quantifiers Q : P(E)ⁿ −→ I, we associate a corresponding Q̆ : P̆(E)ⁿ −→ I defined by

Q̆(X1, . . . , Xn) = m½ {Q(Y1, . . . , Yn) : Y1 ∈ T(X1), . . . , Yn ∈ T(Xn)}

for all X1, . . . , Xn ∈ P̆(E).

The resulting Q̆ now accepts three-valued arguments. In the general case, though, we must deal with unrestricted fuzzy arguments which are not tied to membership grades in {0, ½, 1}. In the following, I therefore suggest a reduction of fuzzy quantification to the case of three-valued arguments.
7.5 Three-Valued Cuts of Fuzzy Subsets

We are now able to evaluate quantifying statements which involve three-valued arguments. In order to further generalize this construction and support arbitrary fuzzy arguments, I propose the use of a cutting mechanism which reduces the fuzzy arguments of the quantifier to corresponding three-valued arguments, and control this reduction by a cutting parameter γ ∈ I. We know from Th. 7.3 that α-cuts (i.e. simple two-valued cuts) are not suited to effect the reduction, due to their lack of symmetry with respect to complementation. However, it is apparent how a reduction to three-valued sets (rather than the crisp sets of α-cuts) can be performed in a symmetrical fashion, and hence achieves the desired compatibility with negation. Let me first state the definition for individual membership grades in the unit interval, which are cut to membership grades in {0, ½, 1} corresponding to 'false', 'unknown' and 'true', respectively.

Definition 7.11. For every x ∈ I and γ ∈ I, the three-valued cut of x at γ is defined by

tγ(x) = 1   if x ≥ ½ + ½γ
tγ(x) = ½   if ½ − ½γ < x < ½ + ½γ
tγ(x) = 0   if x ≤ ½ − ½γ

if γ > 0, and

t0(x) = 1   if x > ½
t0(x) = ½   if x = ½
t0(x) = 0   if x < ½

in the case that γ = 0.

Note 7.12. The cutting parameter γ can be conceived of as a degree of 'cautiousness' because the larger γ becomes, the closer tγ(x) will approach the 'undecided' result of ½. Hence tγ′(x) ⪯c tγ(x) whenever γ ≤ γ′.

The three-valued cut mechanism for scalars can be extended to three-valued cuts of fuzzy subsets, by applying it element-wise to the membership functions.

Definition 7.13. Suppose E is some set and X ∈ P̃(E) a fuzzy subset of E. The three-valued cut of X at γ ∈ I is the three-valued subset Tγ(X) ∈ P̆(E) defined by

νTγ(X)(e) = tγ(µX(e)) ,

for all e ∈ E.
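Definitions 7.11 and 7.13 translate directly into code. The following sketch (illustrative membership values; representing fuzzy sets as membership dictionaries is an assumption of the sketch) computes tγ and applies it element-wise:

```python
def t_gamma(x, gamma):
    # three-valued cut of a membership grade x at cautiousness level gamma (Def. 7.11)
    if gamma > 0:
        if x >= 0.5 + 0.5 * gamma:
            return 1.0
        if x <= 0.5 - 0.5 * gamma:
            return 0.0
        return 0.5
    # gamma == 0: strict comparison with 1/2
    if x > 0.5:
        return 1.0
    if x < 0.5:
        return 0.0
    return 0.5

def T_gamma(mu, gamma):
    # element-wise three-valued cut of a fuzzy subset (Def. 7.13)
    return {e: t_gamma(m, gamma) for e, m in mu.items()}

mu = {"a": 0.9, "b": 0.6, "c": 0.5, "d": 0.1}    # illustrative fuzzy subset
print(T_gamma(mu, 0.0))   # {'a': 1.0, 'b': 1.0, 'c': 0.5, 'd': 0.0}
print(T_gamma(mu, 0.5))   # {'a': 1.0, 'b': 0.5, 'c': 0.5, 'd': 0.0}
```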
Fig. 7.2. Three-valued cut as a function of γ and µX (v)
Note 7.14. The effect of three-valued cutting is displayed in Fig. 7.2. Usually the three-valued cut will not be applied in isolation. By contrast, it will be composed with the construction introduced in Def. 7.8, and hence resolved into a pair of crisp sets ((Tγ(X))min, (Tγ(X))max), or alternatively into the crisp range T(Tγ(X)) determined by the three-valued set which results from the cut operation. Due to this practice, which generally combines the cut operation with the subsequent resolution, it is convenient to introduce the succinct notation Xγmin and Xγmax for the above pair of crisp sets, and also introduce an abbreviation Tγ(X) for the cut range T(Tγ(X)) which represents the three-valued cut at the 'cautiousness level' γ ∈ I by a set of crisp alternatives.

Definition 7.15. Let E be some set, X ∈ P̃(E) a fuzzy subset of E and γ ∈ I. Xγmin, Xγmax ∈ P(E) and Tγ(X) ⊆ P(E) are defined by

Xγmin = (Tγ(X))min
Xγmax = (Tγ(X))max
Tγ(X) = T(Tγ(X)) = {Y : Xγmin ⊆ Y ⊆ Xγmax} .

Hence if γ > 0, then

Xγmin = X≥½+½γ   and   Xγmax = X>½−½γ ,

and in the case that γ = 0,

X0min = X>½   and   X0max = X≥½ .
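For finite domains, Xγmin, Xγmax and the cut range Tγ(X) of Def. 7.15 can be computed directly from the corresponding α-cuts. A sketch (function and variable names are my own; the example fuzzy set is arbitrary):

```python
from itertools import chain, combinations

def cut_range(mu, gamma):
    # returns (X_gamma_min, X_gamma_max, T_gamma(X)) for a membership dictionary mu
    if gamma > 0:
        x_min = {e for e, m in mu.items() if m >= 0.5 + 0.5 * gamma}
        x_max = {e for e, m in mu.items() if m > 0.5 - 0.5 * gamma}
    else:
        x_min = {e for e, m in mu.items() if m > 0.5}
        x_max = {e for e, m in mu.items() if m >= 0.5}
    free = list(x_max - x_min)
    alts = [x_min | set(c)
            for c in chain.from_iterable(combinations(free, r) for r in range(len(free) + 1))]
    return x_min, x_max, alts

mu = {"a": 0.9, "b": 0.6, "c": 0.5, "d": 0.1}
print(cut_range(mu, 0.5)[:2])   # ({'a'}, {'a', 'b', 'c'}): 'b' and 'c' are undecided at this level
```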
Notes 7.16. •
Let us recall that the three-valued set which results from the three-valued cut of X at γ ∈ I can be represented by the pair (Xγmin, Xγmax), and further notice that both Xγmin and Xγmax are defined by α-cuts. Hence every three-valued cut can be represented by a pair of α-cuts, which in turn also determines the corresponding range of crisp sets, Tγ(X).
• The γ-indexed family (Xγmin, Xγmax)γ∈I faithfully represents X because X can be recovered from this representation by means of

µX(e) = ½ + ½ sup{γ ∈ I : e ∈ Xγmin}   if e ∈ X0min
µX(e) = ½ − ½ sup{γ ∈ I : e ∉ Xγmax}   if e ∉ X0min

for all e ∈ E. Hence every fuzzy subset X can be resolved into a family of pairs (Xγmin, Xγmax)γ∈I which represent the three-valued cuts of X at all choices of γ ∈ I.
• In turn, X is also faithfully represented by the family of cut ranges (Tγ(X))γ∈I, because Xγmin and Xγmax can be recovered from these ranges by forming the intersection Xγmin = ∩ Tγ(X) and the union Xγmax = ∪ Tγ(X). Hence when resorting to the cut ranges, we also achieve the resolution property for three-valued cuts.
• Abusing language, I will usually identify the pair of crisp sets (Xγmin, Xγmax) and the cut range Tγ(X) with the three-valued cut Tγ(X), and refer to both as 'the three-valued cut of X at γ', although in a precise sense the three-valued cut is only represented by these modelling devices.

It is also instructive to observe how the choice of the 'cautiousness parameter' γ affects the results of the cutting operation on a given fuzzy subset X. If γ = 0, the set of indeterminates (i.e. of those e ∈ E such that e ∈ Xγmax \ Xγmin) contains only those e ∈ E with µX(e) = ½; all other elements of E are mapped to the closest truth value in {0, 1}. As γ increases, the set of indeterminates is increasing. For γ = 1, then, the level of maximal cautiousness is reached where all elements of E except those with µX(e) ∈ {0, 1} are interpreted as indeterminates (see also Fig. 7.2). Some obvious properties of Xγmin and Xγmax are summarized in the following theorem.

Theorem 7.17. Suppose that E is a given set and X, X′ ∈ P̃(E) are fuzzy subsets of E. Then for all γ ∈ I,
a. (¬X)γmin = ¬(Xγmax) and (¬X)γmax = ¬(Xγmin);
b. (X ∩ X′)γmin = Xγmin ∩ X′γmin and (X ∩ X′)γmax = Xγmax ∩ X′γmax;
c. (X ∪ X′)γmin = Xγmin ∪ X′γmin and (X ∪ X′)γmax = Xγmax ∪ X′γmax.
This demonstrates that the three-valued cutting mechanism is compatible with all Boolean operations on the arguments of a quantifier, i.e. Tγ ((¬X)) = {¬Y : Y ∈ Tγ (X)} Tγ ((X ∩ X )) = {Y ∩ Y : Y ∈ Tγ (X), Y ∈ Tγ (X )} Tγ ((X ∪ X )) = {Y ∪ Y : Y ∈ Tγ (X), Y ∈ Tγ (X )} , for all X, X ∈ P(E) and γ ∈ I.
7.6 The Fuzzy-Median Based Aggregation Mechanism

In this section we can put the pieces together and combine the interpretation mechanism for three-valued arguments, which takes Q to Q̆, and the three-valued cutting operation. The three-valued cuts at a given level γ will then be applied to reduce given choices of fuzzy arguments to the simpler situation of three-valued arguments, which can already be handled by Q̆. Hence in order to evaluate a quantifying statement based on a semi-fuzzy quantifier Q : P(E)ⁿ −→ I at a given level of cautiousness γ, we apply the above mechanism to Q, yielding Q̆ : P̆(E)ⁿ −→ I, and then insert the three-valued cuts of the fuzzy argument sets at γ. In fact, both mechanisms, i.e. quantifier generalisation and three-valued cut, are exclusively used in this specific combination. It is therefore beneficial to introduce a shorthand notation for the composition of the two mechanisms, and subsequently treat the resulting combination as an atomic construction which can be used to define models. This approach is summarized in the following definition.

Definition 7.18. For every γ ∈ I, we denote by (•)γ the QFM defined by

Qγ(X1, . . . , Xn) = m½ {Q(Y1, . . . , Yn) : Yi ∈ Tγ(Xi)} ,

for all semi-fuzzy quantifiers Q : P(E)ⁿ −→ I.

Notes 7.19.
•
It should be apparent how the definition of Qγ(X1, . . . , Xn) evolves from the composition of Q̆ introduced in Def. 7.10 and the three-valued cuts defined by Def. 7.13.
• Considering the resulting construction in total, the basic idea that can be identified is that of treating the crisp ranges which correspond to the three-valued cuts as providing a number of alternatives to be checked.
Hence in order to evaluate a semi-fuzzy quantifier Q at a certain cut level γ, we have to consider all choices of Q(Y1, . . . , Yn), where Yi ∈ Tγ(Xi). The collection of supervaluation results obtained in this way must then be aggregated to a single result in the unit interval, which is achieved by applying the generalized fuzzy median.
• If the base set E is finite, then the number of distinct three-valued cuts of X1, . . . , Xn ∈ P̃(E) is finite as well. In this case, f(γ) = Qγ(X1, . . . , Xn) is a step function, i.e. piece-wise constant in γ, except for a finite number of discontinuities. It is this property which makes the concept of three-valued cuts and the definition of (•)γ still suited for efficient implementation: results need to be computed only for the (usually small) number of distinct three-valued cuts. More details on the computational aspect can be found in Chap. 11.

The next theorem presents a result concerning the monotonicity properties of f(γ) = Qγ(X1, . . . , Xn), when viewed as a function of γ ∈ I.

Theorem 7.20. Let Q : P(E)ⁿ −→ I be a semi-fuzzy quantifier and X1, . . . , Xn ∈ P̃(E).
a. If Q0(X1, . . . , Xn) > ½, then Qγ(X1, . . . , Xn) is monotonically nonincreasing in γ and Qγ(X1, . . . , Xn) ≥ ½ for all γ ∈ I.
b. If Q0(X1, . . . , Xn) = ½, then Qγ(X1, . . . , Xn) = ½ for all γ ∈ I.
c. If Q0(X1, . . . , Xn) < ½, then Qγ(X1, . . . , Xn) is monotonically nondecreasing in γ and Qγ(X1, . . . , Xn) ≤ ½ for all γ ∈ I.

Note 7.21. These monotonicity properties reflect the intuition that as soon as one becomes more cautious (i.e. the 'level of cautiousness' γ increases), the results obtained will become less specific, in the sense of being closer to ½.

Owing to this fuzzy-median based aggregation mechanism, we are now able to interpret fuzzy quantifiers for any fixed choice of the cutting parameter γ ∈ I. However, none of the QFMs (•)γ is a DFS yet, because the required information is spread over various cut levels, and the fuzzy median suppresses too much structure at each individual cut level. Nonetheless, these QFMs prove useful in defining models of fuzzy quantification. We simply need to abstract from individual cut levels, and simultaneously consider the results obtained at all levels of cautiousness γ.
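For finite base sets, Def. 7.18 can be evaluated by brute force: enumerate the crisp alternatives in each cut range and aggregate the supervaluation results with the generalized fuzzy median. The following self-contained sketch does exactly this; the one-place quantifier "at least two" and the membership values are illustrative assumptions, and the exhaustive enumeration is exponential, so it only mirrors the definition rather than the efficient evaluation discussed in Chap. 11.

```python
from itertools import chain, combinations, product

def m_half(values):
    lo, hi = min(values), max(values)
    return lo if lo > 0.5 else hi if hi < 0.5 else 0.5

def cut_range(mu, gamma):
    # crisp alternatives T_gamma(X) of the three-valued cut of X at gamma
    if gamma > 0:
        x_min = {e for e, m in mu.items() if m >= 0.5 + 0.5 * gamma}
        x_max = {e for e, m in mu.items() if m > 0.5 - 0.5 * gamma}
    else:
        x_min = {e for e, m in mu.items() if m > 0.5}
        x_max = {e for e, m in mu.items() if m >= 0.5}
    free = list(x_max - x_min)
    return [x_min | set(c)
            for c in chain.from_iterable(combinations(free, r) for r in range(len(free) + 1))]

def Q_gamma(Q, args, gamma):
    # Def. 7.18: fuzzy-median aggregation over all crisp choices in the cut ranges
    ranges = [cut_range(mu, gamma) for mu in args]
    return m_half([Q(*choice) for choice in product(*ranges)])

def at_least_two(Y):                     # illustrative one-place semi-fuzzy quantifier
    return 1.0 if len(Y) >= 2 else 0.0

X = {"a": 0.9, "b": 0.6, "c": 0.2}
for g in (0.0, 0.3, 0.9):
    print(g, Q_gamma(at_least_two, [X], g))   # 1.0, then 0.5, then 0.5
```

As expected from Th. 7.20, the result moves towards the unspecific value ½ as the cautiousness level γ grows.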
7.7 The Unrestricted Class of MB-QFMs In order to implement this basic idea of supplying the assumed QFM with the total amount of information, which is spread over cautiousness levels, we consider the γ-indexed family (Qγ (X1 , . . . , Xn ))γ∈I . Unlike the individual QFMs (•)γ , which fail to be a DFS because they only see one choice of γ at a
time, the collection of their results Qγ(X1, . . . , Xn), obtained for all choices of γ ∈ I, contains the desired information across cut levels. In order to construct fuzzification mechanisms which have a chance of being a DFS, it is therefore suggested to apply an aggregation operator to these γ-indexed results. This can be done e.g. by means of integration:

Definition 7.22. By M we denote the QFM defined by

M(Q)(X1, . . . , Xn) = ∫₀¹ Qγ(X1, . . . , Xn) dγ
for all semi-fuzzy quantifiers Q : P(E) X1 , . . . , Xn ∈ P(E).
−→ I and fuzzy arguments
Note 7.23. The integral is known to exist because the integrands f (γ) = Qγ (X1 , . . . , Xn ) are bounded and monotonic by Th. 7.20. Let me now state that the fuzzification mechanism so defined indeed qualifies as a model of fuzzy quantification. Theorem 7.24. M is a standard DFS. Notes 7.25. •
M is the first DFS that I discovered, and subsequently implemented in order to support quantifying queries in an experimental retrieval system for multimedia weather documents [61]. • As we shall see below, M is also a ‘practical’ model because it satisfies both continuity requirements that have been defined for QFMs. Consequently, the quantification results obtained from M are robust against variation or noise in the fuzzy arguments X1 , . . . , Xn and in the chosen quantifier Q. • For the implementation of quantifiers in M see Chap. 11. The integral, which was used in the definition of M, is not the only possible way of abstracting from the cutting parameter γ. In the following, I will generalize this example and consider a broader range of aggregation operators. The symbol B will be used for the aggregation operator, and the resulting QFM will be denoted MB . Before introducing the desired construction of QFMs, let us first identify the precise domain on which these aggregation operators can act. Definition 7.26. 1 B+ , B 2 , B− and B ⊆ II are defined by
7.7 The Unrestricted Class of MB -QFMs
B+ = {f ∈ II : f (0) > 1
B 2 = {f ∈ II : f (x) = B− = {f ∈ II : f (0) <
1 2
and f (I) ⊆ [ 12 , 1] and f nonincreasing }
1 2 1 2
for all x ∈ I }
193
and f (I) ⊆ [0, 12 ] and f nondecreasing }
1
B = B+ ∪ B 2 ∪ B− . Note 7.27. In the following, we shall denote by ca : I −→ I the constant mapping ca (x) = a ,
(7.4) 1
for all a, x ∈ I. Using this notation, apparently B 2 = {c 1 }. 2
In terms of these abbreviations, we can now express the following theorem, which is actually a corollary to Th. 7.20: Theorem 7.28. n n. Suppose Q : P(E) −→ I is a semi-fuzzy quantifier and (X1 , . . . , Xn ) ∈ P(E) a. If Q0 (X1 , . . . , Xn ) > 12 , then (Qγ (X1 , . . . , Xn ))γ∈I ∈ B+ ;
1
b. If Q0 (X1 , . . . , Xn ) = 12 , then (Qγ (X1 , . . . , Xn ))γ∈I ∈ B 2 (i.e. constantly 1 2 ); c. If Q0 (X1 , . . . , Xn ) < 12 , then (Qγ (X1 , . . . , Xn ))γ∈I ∈ B− . n
In particular, the theorem substantiates that regardless of Q : P(E) −→ I and X1 , . . . , Xn ∈ P(E), it always holds that (Qγ (X1 , . . . , Xn ))γ∈I ∈ B . Hence B is large enough to embed all mappings f (γ) = Qγ (X1 , . . . , Xn ) which can arise from a possible quantification instance, as given by Q and a choice of X1 , . . . , Xn . In other words, it contains the full range of operands that must possibly be accepted by the assumed aggregation operator B. Conversely, the set B is also exhausted by the possible range of (Qγ (X1 , . . . , Xn ))γ∈I . Hence for each f ∈ B there exist choices of Q and X1 , . . . , Xn such that f = (Qγ (X1 , . . . , Xn ))γ∈I , as I will now state. Theorem 7.29. Suppose f is some mapping f ∈ B. Define Q : P(I) −→ I by Q(Y ) = f (inf Y )
(Th. 7.29.a.i)
be the fuzzy subset with membership function for all Y ∈ P(I) and let X ∈ P(I) µX (z) =
1 2
+ 12 z
(Th. 7.29.a.ii)
for all z ∈ I. Then Qγ (X) = f (γ) for all γ ∈ I. Note 7.30. Combining this with the above observations, we now have evidence that B is the minimal set of mappings f : I −→ I which embeds all possible operands of the assumed aggregation mapping. Hence B precisely characterises the domain of suitable aggregation operators. Let us now return to the original idea of abstracting from the above definition of M, and formulating a generic construction of QFMs by aggregating the results of Qγ (X1 , . . . , Xn ) obtained for all choices of the cautiousness parameter. We know from the last theorem that a corresponding aggregation operator must be defined on B, because (Qγ (X1 , . . . , Xn ))γ∈I ∈ B, and no smaller set A ⊆ II will suffice. Hence consider an aggregation operator B : B −→ I. By composing B with the γ-indexed cut mechanism, we can now construct a QFM, denoted MB , which corresponds to the operator B. This basic suggestion can be formalized as follows. Definition 7.31. Let B : B −→ I be given. The QFM MB is defined by MB (Q)(X1 , . . . , Xn ) = B((Qγ (X1 , . . . , Xn ))γ∈I ) , for all semi-fuzzy quantifiers Q : P(E) −→ I and X1 , . . . , Xn ∈ P(E). n
By the class of MB -QFMs I mean the class of all QFMs MB defined in this way. Because no conditions whatsoever were imposed on the aggregation mapping B, it is apparent that the resulting fuzzification mechanisms constitute a ‘raw’, totally unrestricted class of MB -QFMs, which apart from the well-behaved models, also contains many QFMs which violate the axiomatic requirements. In order to shrink down this ‘raw’ class to the reasonable cases of MB -DFSes, thus identifying its plausible models, I will now address the problem of formalizing the required conditions on admissible choices of the aggregation mapping.
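Since f(γ) = Qγ(X1, . . . , Xn) is a bounded step function on finite domains (cf. Notes 7.19 and Th. 7.20), the integral defining M can be approximated arbitrarily well by sampling γ. A sketch restricted to one-place quantifiers (names and the example quantifier are illustrative assumptions; midpoint sampling is used instead of the exact evaluation over the finitely many distinct cuts):

```python
from itertools import chain, combinations

def m_half(values):
    lo, hi = min(values), max(values)
    return lo if lo > 0.5 else hi if hi < 0.5 else 0.5

def Q_gamma_1(Q, mu, gamma):
    # Q_gamma for a one-place semi-fuzzy quantifier over a finite domain
    if gamma > 0:
        x_min = {e for e, m in mu.items() if m >= 0.5 + 0.5 * gamma}
        x_max = {e for e, m in mu.items() if m > 0.5 - 0.5 * gamma}
    else:
        x_min = {e for e, m in mu.items() if m > 0.5}
        x_max = {e for e, m in mu.items() if m >= 0.5}
    free = list(x_max - x_min)
    alts = [x_min | set(c)
            for c in chain.from_iterable(combinations(free, r) for r in range(len(free) + 1))]
    return m_half([Q(Y) for Y in alts])

def M_1(Q, mu, steps=1000):
    # Def. 7.22, approximated by midpoint sampling of the integral over gamma
    return sum(Q_gamma_1(Q, mu, (k + 0.5) / steps) for k in range(steps)) / steps

def at_least_two(Y):
    return 1.0 if len(Y) >= 2 else 0.0

X = {"a": 0.9, "b": 0.6, "c": 0.2}
print(round(M_1(at_least_two, X), 3))   # approx. 0.6 for this example
```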
7.8 Characterisation of MB-DFSes In this section, I will identify the class of well-behaved models among the ‘raw’ MB -QFMs, based on properties that are visible in the aggregation mapping. To this end, I first introduce some constructions on B. These will permit us to capture the relevant behaviour of the aggregation mapping in a system of conditions imposed on B. In turn, the proposed conditions will then be
shown to be necessary and sufficient for MB to be a DFS, i.e. they precisely characterise the admissible choices of B. To begin with, let me define the required constructions on the domain of the aggregation operator. Definition 7.32. Suppose f : I −→ I is a monotonic mapping (i.e., either nondecreasing or nonincreasing). The mappings f , f : I −→ I are defined by: lim f (y) : x < 1
f = y x f (1) : x=1 lim f (y) : x > 0 f = y x f (0) : x=0 for all x ∈ I. Notes 7.33. •
Let me remark that f and f are well-defined, i.e. the limits in the above expressions exist, regardless of f . This is because f is known to be bounded and monotonic. • It is further apparent that if f ∈ B, then f ∈ B and f ∈ B. I will now introduce several coefficients which describe certain aspects of a mapping f : I −→ I. Definition 7.34. Suppose that f : I −→ I is a monotonic mapping (i.e., either nondecreasing or nonincreasing). We define f0∗ = lim f (γ)
(7.5)
f∗0 = inf{γ ∈ I : f (γ) = 0}
(7.6)
γ 0
1 f∗2 f1∗
= inf{γ ∈ I : f (γ) = 12 } = lim f (γ) γ 1
f∗1↑ = sup{γ ∈ I : f (γ) = 1} .
(7.7) (7.8) (7.9)
As usual, let us stipulate that sup ∅ = 0 and inf ∅ = 1. Notes 7.35. •
All limits in the definition of these coefficients exist due to the assumed monotonicity of f : I −→ I. • The conditions imposed on legal choices of B will only utilize one of these 1
coefficients, f∗2 . The remaining coefficients will permit the subsequent definition of example models.
Based on these concepts, I can now state several axioms governing the behaviour of reasonable choices of B. Definition 7.36. Let an aggregation mapping B : B −→ I be given. For all f, g ∈ B, we define the following conditions on B: B(f ) = f (0)
if f is constant, i.e. f (x) = f (0) for all x ∈ I
B(1 − f ) = 1 − B(f ) If f(I) ⊆ {0, 12 , 1}, then 1 1 1 2 + f : f ∈ B+ ∗ 2 2 B(f ) =
1 2
1 2
(B-2) (B-3)
1
:
f ∈ B2
:
f ∈ B−
1
− 12 f∗2
(B-1)
B(f ) = B(f )
(B-4)
If f ≤ g, then B(f ) ≤ B(g)
(B-5)
Note 7.37. Let me briefly comment on the meaning of the individual conditions. (B-1) states that B preserves constants. In particular, all MB -QFMs such that B satisfies (B-1) coincide on three-valued argument sets. (B-2) expresses that B is compatible with the standard negation 1 − x. (B-3) ensures that all conforming MB -QFMs coincide on three-valued quantifiers. (B-4) expresses some kind of insensitivity property of B, which turns out to be crucial with respect to MB satisfying functional application (Z-6) Finally, (B-5) expresses that B is monotonic, i.e. application of B preserves inequalities. Taken as a whole, the proposed system of conditions on B precisely captures the requirements on reasonable choices of B. To see this, we first notice that every B : B −→ I which satisfies (B-1) to (B-5) makes MB a standard DFS: Theorem 7.38. The conditions (B-1)–(B-5) on B : B −→ I are sufficient for MB to be a standard DFS. Next I consider the converse issue that every choice of B : B −→ I which makes MB a DFS, in fact satisfies the proposed set of conditions. Again, I have a positive result here. Theorem 7.39. The conditions (B-1)–(B-5) on B : B −→ I are necessary for MB to be a DFS. Combining this with Th. 7.38, it is easily seen that Theorem 7.40. Every MB -DFS is a standard DFS.
7.9 A Simplified Construction Based on Mappings B : H −→ I
Let us further notice that Theorem 7.41. The conditions (B-1)–(B-5) are independent. Hence none of these conditions can be expressed in terms of the remaining conditions. In other words, I have succeeded in formalizing a minimal set of conditions, and the conditions indeed separate the distinct factors that guide reasonable choices of B. The mutual independence of the conditions also avoids redundant effort in proofs.
7.9 A Simplified Construction Based on Mappings B : H −→ I In the following, a simplified representation of MB -DFSes will be developed, which facilitates the presentation and investigation of specific models. To this end, let us first observe that B is completely determined by its behaviour on 1
B+ ∪ B 2 , provided that it satisfies (B-2). This is apparent because for every f ∈ B− , we obtain that 1 − f ∈ B+ and hence B(f ) = 1 − B(1 − f ). Noticing that c 1 = 1 − c 1 , we can further conclude that in every B which satisfies 2
2
(B-2), it holds that B(c 1 ) = 1 − B(1 − c 1 ) = 1 − B(c 1 ) , 2
2
2
i.e. B(c 1 ) = 12 . We can hence strengthen the above remark, and state that B is 2
even fully determined by its behaviour on B+ only. If we further assume that B satisfies (B-5), then we also know that B(f ) ≥ B(c 1 ) = 12 for all f ∈ B+ . 2
This means that we can restrict the range of possible B(f ) to the upper half of the unit interval, [ 12 , 1]. In order to develop the simplified constructions, let us now recall from Th. 7.39 that the considered conditions (B-2) and (B-5) are necessary for MB to be a DFS. Hence no models of interest are lost if we focus on those B : B −→ I which satisfy (B-2) and result in B(f ) ≥ 12 for all f ∈ B+ . In this case, I can give a more concise description of the models: Definition 7.42. By H ⊆ II we denote the set of nonincreasing mappings f : I −→ I, f = c0 , i.e. H = {f ∈ II : f nonincreasing and f (0) > 0 } . We can then associate with each B : H −→ I a B : B −→ I as follows:
B(f ) =
1 1 2 + 2 B (2f − 1) 1
2 1
−
2
1 2 B (1
− 2f )
: f ∈ B+ 1
: f ∈ B2 : f ∈ B−
(7.10)
It is apparent that each B constructed in this way again satisfies (B-2) and results in B(f ) ≥ 12 for all f ∈ B+ . Conversely, all suitable choices of B can be represented in terms of this new construction, as I will now state. Theorem 7.43. Consider B : B −→ I and suppose that B satisfies (B-2) and results in B(f ) ≥ 1 + 2 for all f ∈ B . Then B can be defined in terms of a mapping B : H −→ I according to equation (7.10). B is defined by B (f ) = 2B( 12 + 12 f ) − 1 .
(7.11)
By the above reasoning, these conditions are satisfied by every B : B −→ I which fulfills (B-2) and (B-5); in particular, every choice of B which makes MB a DFS can be represented in this simplified format. We can therefore restrict attention on properties of mappings B : H −→ I, and still cover all of the desired models.
7.10 Characterisation of the Models in Terms of Conditions on B Let us now characterise the MB -models in terms of conditions on the core aggregation mapping B , which are fitted to the simplified construction. Definition 7.44. Suppose B : H −→ I is given. For all f, g ∈ H, we define the following conditions on B : B (f ) = f (0)
if f is constant, i.e. f (x) = f (0) for all x ∈ I
If f (I) ⊆ {0, 1}, then B (f ) = B (f ) = B (f ) if f((0, 1]) = {0}
f∗0 ,
If f ≤ g, then B (f ) ≤ B (g)
(C-1) (C-2) (C-3) (C-4)
These conditions on B are straightforward from the known conditions (B-1)– (B-5) that must be imposed on the corresponding B. The following results on the (C-1)–(C-4) are therefore apparent from the previous results on the ‘B-conditions’: Theorem 7.45. The conditions (C-1)–(C-4) on B : H −→ I are sufficient for MB to be a standard DFS.
Theorem 7.46. The conditions (C-1)–(C-4) on B : H −→ I are necessary for MB to be a DFS. Theorem 7.47. The conditions (C-1)–(C-4) are independent. Note 7.48. In the above theorems on the ‘C-conditions’, it is understood that MB be constructed from B according to (7.10) and Def. 7.31. To sum up, the simplified construction of the relevant MB -QFMs in terms of a mapping B : H −→ I results in a succinct characterisation of the target models. Based on this characterisation, it has now become easy to verify that a ‘DFS candidate’ MB indeed qualifies as one of the intended models, by performing a few simple checks on the aggregation operator B : B −→ I or B : H −→ I. In fact, knowing the precise contraints that govern admissible choices of B , also guided my search into examples of models.
7.11 Examples of MB-Models In the following, I will present some examples of MB -QFMs. For convenience, the new construction based on B will be utilized, in order to simplify the presentation of these models and achieve succinct definitions. Let us first reconsider the model M, which motivated the generalization to the class of MB -QFMs, and fit its definition into the new format. In terms of the improved notation which is now available, the original definition of M can be rephrased as follows. Theorem 7.49. M is the MB -QFM defined by B (f ) =
1
f (x) dx , 0
for all f ∈ H. Now turning to new examples, I first introduce the following model MU . Definition 7.50. By MU we denote the MB -QFM defined by B U (f ) = max(f∗1↑ , f1∗ ) for all f ∈ H, where the coefficients f1∗ and f∗1↑ are defined by equalities (7.8) and (7.9), resp.
200
7 The Class of MB -Models
As I now state, MU is indeed a plausible model. In fact, this can be asserted for a broader type of MB -QFMs, of which MU is only a special example: Theorem 7.51. Suppose ⊕ : I2 −→ I is an s-norm and B : H −→ I is defined by B (f ) = f∗1↑ ⊕ f1∗ ,
(Th. 7.51.a)
for all f ∈ H, where the coefficients f∗1↑ and f1∗ are defined by (7.9) and (7.8), respectively. Further suppose that B : B −→ I is defined in terms of B according to equation (7.10), and that MB is the QFM defined in terms of B according to Def. 7.31. The QFM MB is a standard DFS. In particular, MU is a standard DFS. We will see later on that MU plays a special role among the models of the MB -type, by establishing a lower bound on these models with respect to the specificity order. In [53, Def. 44, p. 63], another example M∗ was presented, which is defined as follows. Definition 7.52. By M∗ we denote the MB -QFM defined by B ∗ (f ) = f∗0 · f0∗ , for all f ∈ H, where the coefficients f∗0 and f0∗ are defined by (7.6) and (7.5), resp. The following model, based on a similar construction, will be of some interest to the later analysis of the class of MB -DFSes. Definition 7.53. By MS we denote the MB -QFM defined by B S (f ) = min(f∗0 , f0∗ ) for all f ∈ H, using the same abbreviations as for M∗ . Abstracting from the underlying construction, we may assert the following. Theorem 7.54. Suppose B : H −→ I is defined by B (f ) = f∗0 f0∗ for all f ∈ H, where : I2 −→ I is a t-norm. Further suppose that the QFM MB is defined in terms of B according to (7.10) and Def. 7.31. Then MB is a standard DFS. In particular, M∗ and MS are standard DFSes. Like MU , the model MS is also of special relevance to the MB -type. We shall see later that it plays the opposite role, and constitutes an upper bound on the models under the specificity order. In [53, Def. 45, p. 64], a further type of model M∗ was presented.
7.12 Properties of MB -Models
201
Definition 7.55. By M∗ we denote the MB -QFM defined by B∗ (f ) = sup{x · f (x) : x ∈ I} , for all f ∈ H. By slightly altering this construction, I discovered a model which is now known to be of key relevance to DFS theory. Definition 7.56. By MCX we denote the MB -QFM defined by B CX (f ) = sup{min(x, f (x)) : x ∈ I} for all f ∈ H. By abstracting from the constructive principle underlying these examples, we obtain the following more general result: Theorem 7.57. Suppose : I2 −→ I is a continuous t-norm and B : H −→ I is defined by B (f ) = sup{γ f (γ) : γ ∈ I}
(Th. 7.57.a)
for all f ∈ H. Further suppose that B : B −→ I is defined in terms of B according to equation (7.10). Then the QFM MB , defined by Def. 7.31, is a standard DFS. Note 7.58. In particular, MCX and M∗ are standard models. As remarked above, MCX is a DFS with unique properties, and a separate section has therefore been devoted to its discussion. Before focusing on this particular model in Sect. 7.13, however, it is first necessary to develop the criteria on B or B which permit us to check whether a given DFS MB satisfies an adequacy property of interest. For example, it would be helpful if we could perform specificity comparisons solely based on the known aggregation mappings B ; if we could test MB for Q-continuity and arg-continuity by looking at B only, etc. In turn, these conditions will then permit a deeper investigation of MCX and other prominent models of the theory.
7.12 Properties of MB-Models Let us now take a closer look at MB -DFSes. The present goal is to expose the internal structure of these models and relate it to the properties of interest, like the continuity requirements or propagation of fuzziness. I am also interested in identifying properties shared by all MB -DFSes which unveil some
202
7 The Class of MB -Models
characteristic aspects of this class of models. In addition, it is useful to locate models with special properties, and in particular the boundary cases with respect to the specificity order. Knowing these extreme cases delimits the space of possible models, which is opened by the proposed base construction. These special examples hence provide some insight into the behaviour that must be expected by other models of interest, which are known to be in-between the extreme poles. To begin with, I state a theorem which simplifies the comparison of MB DFSes with respect to specificity. Theorem 7.59. Suppose B 1 , B 2 : H −→ I are given. Further suppose that B1 , B2 ∈ H are the mappings associated with B 1 and B 2 , respectively, according to equation (7.10), and MB1 , MB2 are the corresponding QFMs defined by Def. 7.31. Then MB1 c MB2 if and only if B 1 ≤ B 2 . Based on the above theorem and the examples of models introduced so far, it is now easily shown that c is a genuine partial order on MB -DFSes. Theorem 7.60. c is not a total order on MB -DFSes. Note 7.61. In particular, the standard DFSes are only partially ordered by c . In practice, this means that we cannot simply compile a list of the example models (or other models) ordered by specificity, because some of these might fail to be comparable under c . However, it is still possible to investigate extreme cases of MB -DFSes with respect to the specificity order. For example, it is not hard to prove that the DFS MU defined by Def. 7.50 represents a boundary case of MB -DFSes in terms of specificity: Theorem 7.62. MU is the least specific MB -DFS. Let us now consider the question of the existence of most specific MB -DFSes. We first observe that Theorem 7.63. All MB -DFSes are specificity consistent. It is then immediate from Th. 5.13 that there exists a least upper specificity bound Flub on every nonempty collection of MB -models. As I will now show, Flub is in fact an MB -DFS. Theorem 7.64. Let F be a given nonempty collection of MB -DFSes. The least upper specificity bound Flub of F is an MB -DFS.
7.12 Properties of MB -Models
203
Hence arbitrary collections of MB -QFMs have a least upper specificity bound, which is again an MB -DFS. If we start from the collection of all MB -DFSes, then we obtain the model MS defined in Def. 7.53, which represents the other extreme case of MB -DFS in terms of specificity: Theorem 7.65. MS is the most specific MB -DFS. Next I will discuss the two continuity requirements that were identified in Sect. 6.2, and develop the required criteria for deciding upon the Q-continuity and arg-continuity of a given MB -DFS. Let us first make some general observations how the continuity conditions are related to (•)γ . Theorem 7.66. n Let Q, Q : P(E) −→ I be given. Then d(Qγ , Qγ ) ≤ d(Q, Q ) for all γ ∈ I. We can use this inequality to formulate a condition on B : H −→ I which is necessary and sufficient for MB to be Q-continuous. To this end, we first define a metric d : H × H −→ I by d(f, g) = sup{|f (γ) − g(γ)| : γ ∈ I}
(7.12)
for all f, g ∈ H. Theorem 7.67. Suppose MB is an MB -DFS and B is the corresponding mapping B : H −→ I. MB is Q-continuous if and only if for all ε > 0 there exists δ > 0 such that |B (f ) − B (g)| < ε whenever f, g ∈ H satisfy d(f, g) < δ. In the case of continuity in arguments, we need a different distance measure on H. Hence let us define d : H × H −→ I by d (f, g) = sup{inf{γ : γ ∈ I, max(f (γ ), g(γ )) ≤ min(f (γ), g(γ))} − γ : γ ∈ I} , (7.13) for all f, g ∈ H. Let us notice that d is only a ‘pseudo-metric’. It is symmetric and satisfies the triangle inequality, but fails to be a metric in the strict sense because d (f, g) = 0 does not entail that f = g. However, those f, g with d (f, g) = 0 are treated alike by reasonable choices of B . It is this fact which justifies the use of d in lieu of a metric, which permits a successful reduction of arg-continuity to a criterion decidable from the aggregation mapping. The next theorem established this desired criterion, and hence shows how the arg-continuous MB models can be characterised in terms of the underlying mapping B : H −→ I. Theorem 7.68. Suppose B : H −→ I satisfies (C-2), (C-3) and (C-4). Further suppose that MB is defined in terms of B according to (7.10) and Def. 7.31. Then the following conditions are equivalent:
204
7 The Class of MB -Models
a. MB is arg-continuous. b. for all f ∈ H and all ε > 0, there exists δ > 0 such that |B (f ) − B (g)| < ε whenever d (f, g) < δ. Sometimes the following sufficient condition is simpler to check. Theorem 7.69. Let B : H −→ I be a given mapping which satisfies (C-2), (C-3) and (C-4). If for all ε > 0 there exists δ > 0 such that B (g) − B (f ) < ε whenever f ≤ g and d (f, g) < δ, then MB is arg-continuous. Both theorems have proven useful for deciding the continuity issue in the case of the example models. In the following, I present the results of this investigation, which establish or reject the properties of Q-continuity and arg-continuity for the examples of MB -models. Theorem 7.70. Suppose ⊕ : I2 −→ I is an s-norm and B : H −→ I is defined by (Th. 7.51.a). Further suppose that B : B −→ I is defined in terms of B according to equation (7.10), and that MB is the QFM defined in terms of B according to Def. 7.31. The QFM MB is neither Q-continuous nor arg-continuous. In particular, MU violates both continuity conditions. Theorem 7.71. Suppose B : H −→ I is defined by B (f ) = f∗0 f0∗ for all f ∈ H, where : I2 −→ I is a t-norm. Further suppose that the QFM MB is defined in terms of B according to (7.10) and Def. 7.31. Then MB is neither Qcontinuous nor arg-continuous. In particular, MS and M∗ fail at both continuity conditions. These results illustrate that MU and MS are only of theoretical interest, because they represent extreme cases in terms of specificity. Due to their discontinuity, these models are not suited for applications. The DFS M∗ is also impractical. Having considered these boundary cases, I will now discuss practical models. In order to establish that M is arg-continuous, it is useful to know how the distance measures d and d are related. To this end, I introduce the mapping ♦ (•) : H −→ H defined by f ♦ (v) = inf{γ ∈ I : f (γ) < v}
(7.14)
for all f ∈ H and v ∈ I. It is easily checked that indeed f ♦ ∈ H whenever f ∈ H. Let us now utilize this concept to unveil the relationship between d and d . Theorem 7.72. For all f, g ∈ H, d (f, g) = d(f ♦ , g ♦ ).
7.12 Properties of MB -Models
205
Building on the above theorem, it is then easy to show that the model M satisfies both continuity conditions: Theorem 7.73. M is both Q-continuous and arg-continuous. This proves that M is indeed practical, and thus a model worth considering for use in applications that need fuzzy quantifiers. As concerns the MCX -type, I have the following positive result: Theorem 7.74. Let : I2 −→ I be a uniformly continuous t-norm, i.e. for all ε > 0, there exists δ > 0 such that |x1 y1 − x2 y2 | < ε whenever x1 , x2 , y1 , y2 ∈ I satisfy (x1 , y1 ) − (x2 , y2 ) < δ. Further suppose that B : H −→ I is defined by equation (Th. 7.57.a), and define the QFM MB in terms of B according to (7.10) and Def. 7.31 as usual. Then MB is both Q-continuous and argcontinuous. In particular, the model MCX which exhibits the best theoretical properties (see Sect. 7.13 below), is indeed a good choice for applications, because it satisfies both continuity conditions. The theorem also shows that M∗ , presented on p. 201, is a practical DFS. This completes the discussion of the continuity conditions. Next we consider the requirement of propagating fuzziness in quantifiers and arguments, see Def. 6.3. It can be shown that all MB -models conform to both facets of propagating fuzziness. Theorem 7.75. Every MB -DFS propagates fuzziness in quantifiers. Theorem 7.76. Every MB -DFS propagates fuzziness in arguments. The above theorems indicate that the MB -DFSes constitute a subclass of the standard models which is especially well-behaved. We shall see in later chapters that other types of models can violate the requirement of propagating fuzziness. The two aspects of propagating fuzziness then become independent attributes that may or may not apply to a model of interest. Finally let us turn attention to the behaviour of MB -DFSes for threevalued quantifiers and for three-valued arguments. As it turns out, there is no room for variation in these cases, and the results of all MB -models are completely determined by the underlying construction based on three-valued cuts and the fuzzy median. Theorem 7.77. All MB -DFSes coincide on three-valued arguments, i.e. in the case that all involved in the quantification have membership arguments X1 , . . . , Xn ∈ P(E) grades µXi (e) ∈ {0, 12 , 1} for all e ∈ E.
206
7 The Class of MB -Models
Note 7.78. This is different from general standard DFSes which by Th. 4.1 are guaranteed to coincide for two-valued arguments only. A similar result can be established for three-valued quantifiers. Theorem 7.79. All MB -DFSes coincide on three-valued semi-fuzzy quantifiers, i.e. in the case that Q assumes results in the restricted range {0, 12 , 1} only, and can thus be n expressed as Q : P(E) −→ {0, 12 , 1}. Again, this is different from general standard DFSes, which are guaranteed to coincide only for two-valued quantifiers, see Th. 5.22.
7.13 A Standard Model with Unique Properties: MC X We already know from the previous section that the model MCX is both Q-continuous and arg-continuous. It is thus a practical model and suited for applications. In addition, MCX propagates fuzziness in quantifiers and in arguments, just like every MB -DFS. However, what exactly is it that makes MCX so special? The present section is devoted exclusively to this issue and will substantiate that MCX is indeed unique among MB -DFSes. In fact, the particular qualities held by this model are distinguished even among the full class of standard models of fuzzy quantification. In Sect. 4.10, I have introduced the construction of fuzzy argument insertion, and I explained that only those models which conform to this construction can adequately represent adjectival restriction by a fuzzy adjective, as in “Most X’s are lucky Y ’s”. Compatibility with fuzzy argument insertion is therefore highly desirable from a linguistic standpoint, because it ensures that this general type of adjectival restriction can be interpreted in a compositional way. As concerns fuzzy argument insertion, the following positive result has been proven for MCX : Theorem 7.80. The DFS MCX is compatible with fuzzy argument insertion. It is this property held by the model which explains the distinguished role of MCX . In fact, it can be shown that MCX is the unique standard DFS which satisfies this condition. Theorem 7.81. MCX is the only standard DFS which is compatible with fuzzy argument insertion. Hence MCX is foremost among the standard models. Due to the special importance of MCX , it is probably worthwhile to attempt an axiomatization of the model in terms of a system of conditions which uniquely identify MCX .
7.13 A Standard Model with Unique Properties: MCX
207
In fact, the desired axiomatization is rather obvious, because we only need to augment the previous axiomatization of the standard models, which has been presented in Def. 5.23, by the requirement of being compatible with fuzzy argument insertion: Definition 7.82. n Consider a QFM F. For all semi-fuzzy quantifiers Q : P(E) −→ I, I introduce the following conditions. Correct generalisation Projection quantifiers
U(F(Q)) = Q if n ≤ 1 (M-1) F(Q) = π e if there exists e ∈ E s.th. Q = πe (M-2)
Dualisation Internal joins
F(Q) = F(Q) n > 0 F(Q∪) = F(Q)∪ n > 0
Preservation of monotonicity
(M-3) (M-4)
If Q is nonincreasing in n-th arg, then (M-5)
Functional application
F(Q) is nonincreasing in n-th arg, n > 0 n n ˆ F(Q ◦ × fi ) = F(Q) ◦ × fˆi (M-6) i=1
Fuzzy argument insertion
i=1
where f1 , . . . , fn : E −→ E, E = ∅. F(Q A) = F(Q)A for all A ∈ P(E), n>0 (M-7)
We can then assert that the above conditions (M-1)–(M-7) indeed achieve an axiomatisation of the model MCX . Theorem 7.83. MCX is the unique QFM which satisfies (M-1)–(M-7). Note 7.84. By the above reasoning, this theorem is really straightforward and an immediate consequence of Th. 5.25 and Th. 7.81. As witnessed by the theorem, we have succeeded to uniquely identify MCX in the full space of possible QFMs, solely based on its observable behaviour. This was possible because the structure of MCX shows up in the distinguishing property (M-7), which is sufficient to characterise MCX within the class of standard models. Because the system (M-1)–(M-7) also comprises a minimal set of conditions, this makes a rather satisfying result from a theoretical point of view. In addition to being compatible with fuzzy argument insertion, MCX also exhibits a number of other remarkable characteristics. At present, no other models are known which possess any of these distinguished properties. I hence suspect that the other properties, too, are unique to MCX , but there is no proof of this yet, and more research should be directed at these issues. Let us first consider the preservation of convexity properties.
7 The Class of MB -Models
208
Theorem 7.85. The DFS MCX weakly preserves convexity. Theorem 7.86. Suppose MB is an MB -DFS. If MB weakly preserves convexity, then MCX c MB . The above theorems substantiate that MCX is the least specific MB -DFS which weakly preserves convexity. As remarked above, there is no evidence yet that other models exist which fulfill this criterion, and it might hence express another distinctive property of MCX . Let us further observe that MCX is a concrete implementation of a socalled ‘substitution approach’ to fuzzy quantification [173], i.e. the fuzzy quantifier is modelled by constructing an equivalent logical formula. This is apparent if we rewrite MCX as follows. Theorem 7.87. n For every Q : P(E) −→ I and X1 , . . . , Xn ∈ P(E), MCX (Q)(X1 , . . . , Xn ) n L = sup{Q V, W (X1 , . . . , Xn ) : V, W ∈ P(E) , V1 ⊆ W1 , . . . , Vn ⊆ Wn } U (X1 , . . . , Xn ) : V, W ∈ P(E)n , V1 ⊆ W1 , . . . , Vn ⊆ Wn } = inf{Q V, W where L (X1 , . . . , Xn ) Q V, W
(7.15)
V,W (X1 , . . . , Xn ), L(Q, V, W )) = min(Ξ U (X1 , . . . , Xn ) Q
(7.16)
V, W
V,W (X1 , . . . , Xn ), U (Q, V, W )) = max(1 − Ξ V,W (X1 , . . . , Xn ) Ξ
(7.17) (7.18) (7.19)
n
/ Wi }) = min min(inf{µXi (e) : e ∈ Vi }, inf{1 − µXi (e) : e ∈
(7.20)
L(Q, V, W ) = inf{Q(Y1 , . . . , Yn ) : Vi ⊆ Yi ⊆ Wi , all i}
(7.21) (7.22)
U (Q, V, W ) = sup{Q(Y1 , . . . , Yn ) : Vi ⊆ Yi ⊆ Wi , all i} .
(7.23) (7.24)
i=1
Notes 7.88. •
To see how this is related to the substitution approach, let us notice that in the common case of a finite base set, inf and sup reduce to the logical connectives ∧ = max and ∨ = min as usual. The infimum and supremum are only needed to express the infinitary conjunctions and disjunctions that are required for quantification on infinite domains. In addition, it was necessary to allow occurrences of continuous-valued constants
7.13 A Standard Model with Unique Properties: MCX
209
Table 7.1. Computation of L(Q, V, W ) and U (Q, V, W ) (example) V ∅ ∅ ∅ ∅ {a} {a} {b}
W ∅ {a} {b} {a, b} {a} {a, b} {b}
V,W (X) Ξ ¬p ∧ ¬q ¬q ¬p 1 p ∧ ¬q p ¬p ∧ q
{b} {a, b}
{a, b} {a, b}
q p∧q
L(Q, V, W ) 0 0 0 0 1 1 2 3 2 3
1
U (Q, V, W ) 0 1 2 3
1 1 1
2 3
1 1
L(Q, V, W ), U (Q, V, W ) ∈ I in the resulting formulas because the fuzzification mechanism is applied to general semi-fuzzy quantifiers, not only to two-valued quantifiers. • From a different point of view, the above representation of MCX demonstrates that the model can be defined independently of the cut ranges and the median-based aggregation mechanism. By contrast, it can be rephrased into a compact form, which reduces fuzzy quantification in MCX to the evaluation of a Boolean formula involving fuzzy connectives and continuous-valued propositional variables. Before discussing any implications of the above reformulation of MCX in terms of the substitution approach, I will first give an example of the proposed construction, in order to facilitate understanding of the above reformulation of MCX , and to elucidate the structure of the constructed formula and the function of the involved coefficients. Hence let us consider a simple two-element base set E = {a, b} and a semi-fuzzy quantifier defined on E, say 0 : Y = ∅ Q(Y ) = 23 : Y = {b} 1 : Y = {a} ∨ Y = {a, b} for all Y ∈ P({a, b}). Let us now construct the quantification formula step by step, in order to gain some understanding how it is built from the base quantifier. Hence let a fuzzy argument set X ∈ P({a, b}) be given. It is convenient to abbreviate p = µX (a), q = µX (b). Table 7.1 lists all possible choices of V, W ∈ P({a, b}) with V ⊆ W , along with the corresponding quantities V,W (X) which express the compatibility of X with the closed range of subΞ sets {Y ∈ P(E) : V ⊆ Y ⊆ W }, as well as the lower and upper bounds L(Q, V, W ) and U (Q, V, W ) on the quantification results achieved by Q in this given range of subsets. Based on this table, we can then fill in the formulas L (X), U (X) and Q which define the upper and lower bound quantifiers Q V, W V, W as shown in Table 7.2. Recalling the first construction of the quantification formula described in Th. 7.87, which resorts to the lower bound quantifiers
210
7 The Class of MB -Models L U Table 7.2. Computation of Q V, W (X) and QV, W (X) (example) V ∅ ∅ ∅ ∅ {a} {a} {b}
W ∅ {a} {b} {a, b} {a} {a, b} {b}
{b} {a, b}
{a, b} {a, b}
L Q V, W (X) ¬p ∧ ¬q ∧ 0 ¬q ∧ 0 ¬p ∧ 0 1∧0 p ∧ ¬q ∧ 1 p∧1 ¬p ∧ q ∧ 23 q ∧ 23 p∧q∧1
U Q V, W (X) p∨q∨0 q∨1 p ∨ 23 0∨1 ¬p ∨ q ∨ 1 ¬p ∨ 1 p ∨ ¬q ∨ 23 ¬q ∨ 1 ¬p ∨ ¬q ∨ 1
L , we can now build the desired formula which reduces MCX (Q)(X) to Q V, W a propositional expression involving continuous-valued variables: MCX (Q)(X) = max{¬p ∧ ¬q ∧ 0, ¬q ∧ 0, ¬p ∧ 0, 1 ∧ 0, p ∧ ¬q ∧ 1, p ∧ 1, ¬p ∧ q ∧ 23 , q ∧ 23 , p ∧ q ∧ 1} . After some element-wise simplification, this becomes MCX (Q)(X) = max{0, 0, 0, 0, p ∧ ¬q, p, ¬p ∧ q ∧ 23 , q ∧ 23 , p ∧ q} . By eliminating all occurrences of the identity 0 of max, and by utilizing the law of absorption, the above expression can then be reduced to MCX (Q)(X) = p ∨ (q ∧ 23 ) . Of course, we could also have applied the second construction of the quantification formula proposed in Th. 7.87, which is based on the upper bound U . In this case, too, we start by building the ‘raw’ formula quantifiers Q V, W which expresses MCX (Q)(X), now based on the second construction. We then obtain the conjunction, MCX (Q)(X) = min{p ∨ q ∨ 0, q ∨ 1, p ∨ 23 , 0 ∨ 1, ¬p ∨ q ∨ 1, ¬p ∨ 1, p ∨ ¬q ∨ 23 , ¬q ∨ 1, ¬p ∨ ¬q ∨ 1} . Now effecting some element-wise simplification, the raw formula becomes MCX (Q)(X) = min{p ∨ q, 1, p ∨ 23 , 1, 1, 1, p ∨ ¬q ∨ 23 , 1, 1} . Again by removing occurrences of the identity 1 of min, and by taking benefit of the law of absorption in order to eliminate some of the subexpressions, we can further simplify the latter result into MCX (Q)(X) = (p ∨ q) ∧ (p ∨ 23 ) . Now recalling the law of distributivity, we can extract the variable p from the two disjunctions, thus rewriting the above equation as
7.13 A Standard Model with Unique Properties: MCX
211
MCX (Q)(X) = p ∨ (q ∧ 23 ) . Hence the result of the second construction indeed coincides with that obtained from the first construction, as claimed by the theorem. This completes the example, which demonstrated the utility of the substitution formulas to the modelling of fuzzy quantification, and I will now discuss some further observations which are apparent from the reformulation of MCX . The above representation of MCX , which is made explicit by Th. 7.87, also reveals the robustness of this model against variation in the quantifier and its arguments. This is important because in practice, there often remains some uncertainty concerning the precise choice of numeric membership grades which best model the target NL quantifier and the arguments (i.e. the target NL concepts which are to be modelled by suitable fuzzy sets). •
Let us first observe from the above representation that MCX (Q) (X1 , . . . , Xn ) is not only arg-continuous (i.e. a smooth function of the membership grades µXi (e) attained by the arguments), but indeed extremely stable against slight changes of the membership grades. This is because for all i ∈ {1, . . . , n} and e ∈ E, ∂MCX (Q)(X1 , . . . , Xn ) ≤1 ∂µX (e) i
whenever the partial derivative is defined, which is obvious from the above representation. A change in the arguments which does not exceed a given ∆ thus cannot change the quantification results by more than ∆. This indicates that with MCX , the precise choice of membership grades for the arguments (which might be to some degree arbitrary), is rather uncritical. • A similar remark can also be made concerning the robustness of the quantification results MCX (Q)(X1 , . . . , Xn ) against variation in the chosen n quantifier . In this case, we observe that for all (Y1 , . . . , Yn ) ∈ P(E) , ∂MCX (Q)(X1 , . . . , Xn ) ≤1 ∂Q(Y1 , . . . , Yn ) whenever the partial derivative is defined, which is again obvious from the above representation. In fact, a change of Q(Y1 , . . . , Yn ) by some ∆ > 0 can maximally change the quantification results by ∆. Hence the precise choice of the semi-fuzzy quantifier is uncritical with MCX as well. This means that the imprecision in the choice of numeric membership grades for quantifier and arguments is not amplified in any way when applying MCX , but either kept at its original level, or even suppressed. Summarizing, I have managed to relate the model MCX to Yager’s suggestion of modelling fuzzy quantification by constructing a suitable logical formula [173], known as the ‘substitution approach’. In addition, I have discussed some important implications that are apparent from the representation
212
7 The Class of MB -Models
of MCX in terms of a fuzzy propositional formula. In the following, I will continue along these lines, thus connecting the model to existing work on fuzzy quantification. To this end, I first recall the definition of the Sugeno integral, which is very closely tied to the model. Definition 7.89. Suppose Q : P(E) −→ I is a nondecreasing semi-fuzzy quantifier and X ∈ P(E). The Sugeno integral (S) X dQ is defined by (S) X dQ = sup{min(α, Q(X≥α )) : α ∈ I} . In fact, MCX properly generalizes the Sugeno integral: Theorem 7.90. Suppose Q : P(E) −→ I is nondecreasing. Then for all X ∈ P(E), (S) X dQ = MCX (Q)(X) . Hence MCX (Q) coincides with the well-known Sugeno integral whenever the latter is defined. Unlike the integral, though, MCX is defined for arbitrary semi-fuzzy quantifiers, and hence extends the Sugeno integral from monotonic measures to the ‘hard’ cases of unrestricted multi-place and non-monotonic quantifiers. In order to relate this result with previous approaches to fuzzy quantification, let us now formally define the quantity µ[j] (X) already mentioned in the introduction. Definition 7.91. Let a finite base set E = ∅ of cardinality |E| = m be given. For a fuzzy subset X ∈ P(E), we denote by µ[j] (X) ∈ I, j = 1, . . . , m, the j-th largest membership value of X (including duplicates). More formally, consider an ordering of the elements of E such that E = {e1 , . . . , em } and µX (e1 ) ≥ · · · ≥ µX (em ). Then define µ[j] (X) = µX (ej ). It is apparent that the results do not depend on the chosen ordering if ambiguities exist. Let us further stipulate that µ[0] (X) = 1 and that µ[j] (X) = 0 whenever j > m. As a corollary to the above theorem, we then obtain (cf. [21]): Theorem 7.92. Suppose E = ∅ is a finite base set, q : {0, . . . , |E|} −→ I is a nondecreasing mapping and Q : P(E) −→ I is defined by Q(Y ) = q(|Y |) for all Y ∈ P(E). Then for all X ∈ P(E), MCX (Q)(X) = max{min(q(j), µ[j] (X)) : 0 ≤ j ≤ |E|} . Notes 7.93.
7.13 A Standard Model with Unique Properties: MCX
213
•
Hence MCX consistently generalises the basic FG-count approach of [174, 197], which is restricted to quantitative and nondecreasing one-place quantifiers. • From a computational point of view, the theorem shows that in the special case of nondecreasing quantitative unary quantifiers on finite domains, MCX (Q)(X) can be determined from a fuzzy cardinality measure on X, namely from the well-known FG-count(X). As will be shown later in Th. 11.18, it is possible to generalize this result to arbitrary quantitative unary quantifiers on finite domains. In this generic case, though, the FG-count must be replaced with a different measure of fuzzy cardinality, the fuzzy interval cardinality Xiv , which will be introduced in Th. 11.16 below. In closing this section on MCX , I finally discuss two additional properties of the model which make it particular suited for real-world applications. The first property is concerned with an aspect of parsimony. Typical applications of fuzzy logic do not fully exploit the available range of continuous truth values. In many situations, it is sufficient to restrict attention to a modest number of truth values/membership grades. For example, a repertoire of five possible grades Υ = {0, 14 , 12 , 34 , 1}
(7.25)
might be adequate, which correspond to the ordinal scale ‘false’, ‘quite false’, ‘undecided’, ‘quite true’, ‘true’ .
(7.26)
The model MCX combines well with such ordinal scales of truth values, and hence supports a parsimoneous use of membership grades. This is because MCX does not ‘invent’ any new truth values, provided that the restricted scale is closed under negation: Theorem 7.94. Let Υ ⊂ I be a given set with the following properties: • Υ is finite; • if υ ∈ Υ , then 1 − υ ∈ Υ ; • {0, 1} ⊆ Υ . n
Further suppose that Q : P(E) −→ Υ is a semi-fuzzy quantifier with quan are Υ -valued fuzzy subsets tification results in Υ and that X1 , . . . , Xn ∈ P(E) of E, i.e. based on membership functions µXi : E −→ Υ , i = 1, . . . , n. Then it also holds that MCX (Q)(X1 , . . . , Xn ) ∈ Υ . Hence in the above example of the five element set Υ defined by (7.25), fuzzy quantification based on MCX will always yield results in {0, 14 , 12 , 34 , 1}, provided that both the semi-fuzzy quantifier and its arguments assume values in this restricted range only.
214
7 The Class of MB -Models
So far, we have considered a restriction to finite scales of membership grades, which is possible in many applications and reduces the complexity of the model. The achieved parsimony limits the set of available truth values but does not alter the basic modelling style, which is essentially numerical and heavily relies on the use of numbers to express non-numerical, symbolic natural language concepts. In applications which emphasize the linguistic aspects of fuzzy quantification, it might be more natural to depart from numerical modelling altogether, and express the membership grades and gradual truth values in terms of ordinal scales, like the one presented in (7.26). Due to the fact that the computational procedures of fuzzy modelling rely on the use of numerical membership grades, the question then arises of how these ordinal scales should be embedded into the continuous range of truth values in [0, 1], and thus be interpreted in terms of numerical membership grades. In practice, there is seldom perfect knowledge concerning the precise choice of numeric membership grades which best represent the given ordinal scale. It is therefore essential to a good model of fuzzy information processing, and in particular of fuzzy quantification, that a certain variation in the choice of membership grades, which may result from this uncertainty concerning the precise embedding into numerical representations, be absorbed by the interpretation process. As stated by the next theorem, MCX is robust with respect to such variation in the numeric membership grades, as long as these changes are symmetrical with respect to negation. Recalling the model transformation scheme introduced in Def. 5.2, we can express this property as follows. Theorem 7.95. Suppose σ : I −→ I is a mapping which satisfies the following requirements: • σ is a bijection; • σ is nondecreasing; • σ is symmetrical with respect to negation, i.e. σ(1 − x) = 1 − σ(x) for all x ∈ I. Then MCX σ = MCX . The theorem states that if the scale of continuous truth values is deformed in a consistent way, then the results obtained from fuzzy quantification based on MCX will be subjected to exactly the same deformation, and automatically fit into the adapted scale. Let us return to the above example on finite truth values, which is also suited to demonstrate how the property expressed by the theorem is related to the original problem, that of absorbing the uncertainty concerning the interpretation of ordinal truth values in terms of numerical assignments. In the example, I have chosen to interpret the ordinal scale presented in (7.26) in terms of the numbers {0, 14 , 12 , 34 , 1}. However, it might be more appropriate to model ‘quite true’ by 45 instead of the earlier 34 . The requirements stated in the theorem then guide us to adapt ‘quite false’ accordingly, which receives
7.13 A Standard Model with Unique Properties: MCX
215
the new interpretation of 15 . In terms of the re-interpretation mapping σ, these decisions can be expressed as σ( 34 ) = 45 ,
σ( 41 ) =
1 5
.
(7.27)
Let us now consider an arbitrary extension of σ to a ‘full’ mapping σ : I −→ I, which conforms to the requirements of Th. 7.95 and fulfills the criteria stated in (7.27). We then also know that σ(0) = 0, σ( 21 ) = 12 and σ(1) = 1. Hence by applying σ, the original interpretation of the ordinal scale (7.26) in terms of {0, 14 , 12 , 34 , 1}, is consistently transformed into an alternative interpretation of the scale, which is now based on {0, 15 , 12 , 45 , 1}. Combined with the earlier result Th. 7.94, the new theorem Th. 7.95 ensures that the results obtained from MCX automatically adapt to the new interpretation of the ordinal scale, provided that both the semi-fuzzy quantifier and the fuzzy argument sets are fitted to the new interpretation by applying σ. Hence suppose that we had MCX (many)(rich, lucky) = 34 prior to the re-interpretation, which corresponds to the ordinal result ‘quite true’. Now we fit rich, lucky to the alternative interpretation by applying σ, hence obtaining the new arguments σrich, σlucky. The quantifier must also be adapted to the new interpretation, and thus translates into σmany. Let us now instantiate the theorem by the given data; we then obtain that MCX σ (many)(rich, lucky) = MCX (many)(rich, lucky) .
(7.28)
Now expanding the left hand member of the equation according to Def. 5.2, the above equation (7.28) becomes σ −1 MCX (σmany)(σrich, σlucky) = MCX (many)(rich, lucky) , which finally can be rewritten as MCX (σmany)(σrich, σlucky) = σ(MCX (many)(rich, lucky)) . Hence the quantification result obtained from the re-interpretation of the ordinal scale (left-hand member of the equation) can be computed from the quantification result based on the original interpretation of the ordinal scale, simply by applying the re-interpretation mapping σ which fits the result into the new system (right-hand member of equation). In the above example, then, the equation ensures that the previous result of MCX (many)(rich, lucky) =
3 4
,
which corresponds to ‘quite true’ under the original interpretation, now translates into MCX (σmany)(σrich, σlucky) =
4 5
,
which is the proper representation of ‘quite true’ in the adapted system.
216
7 The Class of MB -Models
To sum up, there are the following interactions between the two properties that were identified for MCX : The first condition (expressed in Th. 7.94) justifies a restriction to the finite set of numerical membership grades that interpret the ordinal values. In particular, this guarantees that numerical quantification results can be translated back into an ordinal result on the assumed base scale. Thus the first condition frees applications from having to support the full range of continuous membership grades, and permits them to decide on the desired granularity of discerned membership grades. The second condition expressed in Th. 7.95, asserts that the precise numerical interpretation of the ordinal base scale is inessential, because the re-translation of the result into the ordinal scale is invariant under the chosen assignment of numerical membership grades. Compared to the property of smoothness in quantifiers and arguments, which makes the model robust against small non-systematic variations, the second property suggested here is concerned with coherent variations, which are allowed to be arbitrarily large. The example illustrates that the properties are most useful when taken together, because the model is then suited to handle ordinal scales of truth values, regardless of the chosen numerical interpretation. The only assumption needed is that these scales be closed under negation. As witnessed by the above theorems Th. 7.94 and Th. 7.95, MCX conforms to both criteria, which makes an even bolder point for preferring this model in applications.
7.14 Chapter Summary This chapter was concerned with the development of a constructive principle for models of fuzzy quantification. To this end, we introduced a construction for QFMs which provides a rich source of plausible models. In order to identify such a constructive principle we first investigated the potential utility of proven mechanisms of fuzzy set theory. We hence reviewed α-cuts and the resolution principle, which reduce a target operation on fuzzy sets into layers of crisp computations. However, the analysis of these techniques revealed that they offer little help for defining models of fuzzy quantification because they are not symmetric with respect to complementation. This observation suggested a reformulation of the cutting mechanism, which should be compatible with complementation. In order to achieve symmetry, we introduced a three-valued cutting mechanism. A resolution property similar to that of α-cuts can be proven for this mechanism, which demonstrates that it faithfully represents a given fuzzy subset in terms of the information spread over its three-valued cuts. Thus the mechanism is comparable in expressive power to α-cuts, and indeed, every three-valued cut can be represented by a pair of symmetrical α-cuts. In order to achieve the desired reduction to layers of crisp computations, the threevalued set resulting from the cut will be further resolved into closed ranges of crisp sets on which the basic computations can then operate. The super-
7.14 Chapter Summary
217
valuation results collected for all choices of crisp sets in the considered ranges must then be aggregated into the final outcome of the interpretation process. Observing that the targeted class of standard models embeds Kleene’s threevalued logic, the aggregation procedure was defined in such a way that the resulting mechanism implements a constructive principle for Kleene’s logic. This basic method was then fitted to the case of three-valued quantifiers rather than propositional functions. Finally it was shown how the necessary generalization to semi-fuzzy quantifiers, and hence from three-valued to continuousvalued quantification results, can be accomplished by fuzzy median aggregation. This procedure resulted in the definition of QFMs (•)γ which determine the interpretation Qγ (X1 , . . . , Xn ) of the fuzzy quantification at the ‘level of cautiousness’ γ ∈ I. However, none of these QFMS (•)γ is a DFS. In order to obtain the desired models a second aggregation step is necessary which determines a combined interpretation from the total information spread over the Qγ (X1 , . . . , Xn )’s. This abstraction from the cutting parameter γ ∈ I is delegated to an aggregation operator B which determines the final quantification result MB (Q)(X1 , . . . , Xn ) = B((Qγ (X1 , . . . , Xn ))γ∈I ). In this way we obtain a generic base construction, parametrized by the chosen aggregation operator B, which spans the unrestricted class of MB -QFMs. Following that, an independent system of necessary and sufficient conditions was presented which captures the precise requirements on B that make MB a DFS. This analysis also demonstrates that all MB -DFSes are standard models. The analysis of the necessary conditions on B which make MB a DFS also revealed that every conforming choice of B is fully determined by its behaviour on the restricted domain B+ . The elimination of this redundance still contained in B then guided a reduction to simplified aggregation mappings B : H −→ I, which capture the core behaviour of the original mapping. The ‘full’ aggregation mapping B can be recovered from B by means of a simple transformation. The mappings B are useful because they permit a succinct definition of the MB models. Taking benefit of the simplified representation, several examples of MB -DFSes have been described. Some of these models which instantiate the proposed base construction also occupy a special position in the broader classes of models to be introduced in subsequent chapters, or even in the full class of standard models. Now that several concrete models are known, the question arises how these models compare to each other. In order to identify the model which best suits a given application, it is necessary to know the special semantic characteristics of the prototypical models. To permit a quick and simple assessment, the criteria of interest have generally been reduced to elementary tests on B . For example, a comparison of the models by specificity reduces to a simple inequality on the aggregation mappings. Although c is not a total order on the MB -models, this analysis made it possible to identify the extreme cases of MB -DFSes in terms of specificity, and the quantification results of an MB -
218
7 The Class of MB -Models
model are known to be bounded by the least specific model MU and the most specific model MS .3 Following this discussion of specificity issues, we turned to the two facets of continuity which govern the behaviour of practical MB -DFSes. Based on the proposed distance measures on B , we identified the precise conditions on B which ensure that MB be Q-continuous and arg-continuous, respectively, and thus show a certain stability against fluctuations in their inputs. In addition, we considered the issue of propagating fuzziness. This investigation once more illustrates the homogeneity of MB -DFSes, all of which propagate fuzziness in quantifiers and also in the arguments. Hence less specific input cannot result in more specific output in these models, which conforms to the intuitive expectations. The relative homogeneity of the MB models also expresses in their uniform behaviour when fed with three-valued inputs. Thus all MB DFSes coincide on three-valued quantifiers and on three-valued arguments. This is different from general standard models which coincide on two-valued quantifiers and two-valued arguments only. The above criteria for analysing the characteristics of MB -DFSes have then been applied to the example models. This investigation revealed that the models MU and MS , which represent the extreme poles in terms of specificity violate both continuity conditions, and hence lack the minumum robustness necessary for applications. Consequently, these models are of theoretical interest only. The models M and MCX , by contrast, were shown to conform to both continuity requirements, which makes them a suitable choice for applications. In fact, the model M was the first DFS to be implemented and used in a prototypical application [61]. A separate section has been devoted to the model MCX in order to detail the special properties which distinguish it from all other models of fuzzy quantification. The central characteristic of MCX is that it comply with fuzzy argument insertion. In fact, MCX is the only standard DFS with this property, which means that only MCX permits a compositional interpretation of fuzzy adjectival restrictions like “Many young rich are lucky”. By augmenting the core axiom system for standard models with the requirement of complying with fuzzy argument insertion, I obtained an axiomatization of MCX in terms of a minimal set of conditions which uniquely identify the model based on its observable behaviour. Apart from its prime characteristic, the compatibility with fuzzy argument insertion, the model has further virtues which are also potentially distinctive because MCX is the only model known to possess these properties. To begin with, MCX preserves convexity properties of quantifiers to the extent possible for a DFS, and it represents the lower specificity bound to all other models with this property (if further examples exist at all). The model MCX is also related to existing research on fuzzy quantification. Specifically, MCX 3
In particular, all MB -DFSes are specificity consistent, i.e. rather homogeneous. The generalizations considered in the later chapters will show more diversity.
7.14 Chapter Summary
219
can be shown to implement the so-called ‘substitution approach’ [173], i.e. the model expresses fuzzy quantification in terms of a propositional formula involving the standard fuzzy connectives and continuous-valued propositional variables. A detailed example was given which explains how the substitution formula is constructed from the coefficients L(Q, V, W ) and U (Q, V, W ) V,W (X) which sampled from the quantifier and the compatibility measure Ξ judges the degree to which the fuzzy set X belongs to a range of crisp sets {Y ∈ P(E) : V ⊆ Y ⊆ W }. In Chap. 11 we will see that the formulas can be simplified further in the case of quantitative unary quantifiers. Based on a suitable measure of fuzzy cardinality, it will then be possible to compute the quantification results of MCX directly from cardinality information. This reduction guided the development of evaluation formulas for various two-valued quantifiers presented in Sect. 11.3 below. Some other observations are also apparent from the alternative representation of the model. First of all, MCX is not only continuous in quantifiers and their arguments, but indeed achieves extreme robustness against fluctuations in the input, because all involved gradients are tied to the range [−1, 1]. This means that the imprecision in the choice of numeric membership grades for quantifier and arguments is not amplified in any way when applying MCX , but either kept at its original level, or even suppressed. Hence the precise choice of numerical membership grades is rather uncritical in the model. The representation of MCX in terms of the substitution formulas also facilitated the proof that MCX consistently extends the well-known Sugeno integral to arbitrary n-place quantifiers including non-monotonic cases. Recalling the known relationship between the Sugeno integral and the FG-count approach, this means that the model also generalizes the FG-count approach to the ‘hard’ cases of multi-place and non-monotonic quantification. The model further accounts for several practical requirements. For example, it is often preferable to restrict the set of allowable membership grades, in order to reduce the complexity of the system. The resulting coarser granularity of truth values will still be useful for many applications. The model of fuzzy quantification must then be compatible with this restriction to a fixed set of admissible truth values, i.e. when supplied with conforming input, it should always produce results from the allowable choices. As witnessed by Th. 7.94, MCX fulfills this requirement. In addition, Th. 7.95 shows that the model will absorb arbitrary coherent changes in the inputs.4 This is particularly valuable because it lets the model suppress a certain amount of variability in the numerical assignment of membership grades. I have presented a detailed example which explains how the conjunction of these properties makes the model immune against a the numerical re-interpretation of the ordinal base scale. To sum up, the model combines well with ordinal scales of truth values and thus supports a parsimoneous modelling style. 4
Due to its continuity, it will also be robust against small unsystematic changes, of course.
220
7 The Class of MB -Models
Due to its special conformance with linguistic expectations, MCX is the preferred choice for all applications that need to implement natural language semantics. However, the model also has practical virtues, especially its robustness against random or systematic variation in the model’s inputs.
8 Models Defined in Terms of Upper and Lower Bounds on Three-Valued Cuts
8.1 Motivation and Chapter Overview The previous chapter introduced a rich class of standard models for fuzzy quantification. In addition, we identified a distinguished model which is the preferred choice for all applications which emphasize the natural language aspect of fuzzy quantifiers and need to capture their intuitive semantics. This positive result on the model MCX might suggest that we can stop the exploration of further models, now that the universal, perfect case of a standard DFS has been discovered. Things are not that easy, though, because there are situations in which different requirements on the models prevail over the desideratum of linguistic adequacy, and these requirements are not necessarily satisfied by the model MCX , nor by any other choice of an MB -DFS. Such demands may arise, for example, in applications of fuzzy quantifiers like information aggregation, data fusion and multi-criteria decisionmaking. These applications usually involve the computation of a result ranking, or a weighting of alternatives, which in turn permits the selection of the best choice. In this kind of situation, it can be the prime concern to ensure a fine-grained differentiation of the available options, and the model of fuzzy quantification must thus be as discriminating as possible. As will be shown below, the MB -models are disadvantageous in this respect. Surprisingly, the reason for their suboptimal discriminating power is buried in the fact that every MB -DFS propagates fuzziness. There is an apparent trade-off here because propagation of fuzziness captures very basic expectations on plausible models – it is simply hard to understand that the results should become more specific when the inputs (quantifier or argument) get fuzzier. However, there is a price to be paid for the propagation of fuzziness: As the input becomes less specific, the result of an MB -DFS is likely to attain the least specific value of 12 , see Th. 8.26 and Th. 8.33 below. In certain applications, it might therefore be preferable to sacrifice the propagation of fuzziness, in favour of a model with enhanced discriminatory force – a model which remains useful when the input is overly fuzzy. Secondly, the investigaI. Gl¨ ockner: Fuzzy Quantifiers, StudFuzz 193, 221–235 (2006) c Springer-Verlag Berlin Heidelberg 2006 www.springerlink.com
222
8 The Class of Fξ -Models
tion of models beyond the MB type will also be useful to relate existing work on fuzzy quantification to the axiomatic solution proposed in this work. The previous chapter has shown that the Sugeno integral and hence the ‘basic’ FG-count approach can be fitted into the proposed framework. By investigating a richer class of models, it might be possible to prove a similar result for the Choquet integral and hence the ‘basic’ OWA approach. Thirdly, the study of a richer class of models is of great theoretical interest, because it will improve our understanding of the full class of standard models. Specifically, the study of prototypical standard models which do not propagate fuzziness in quantifiers and/or arguments will reveal the structure and properties of models which are rather distinct from MB -DFSes. The bottom line is that broadening the considered class of models will also broaden the knowledge about fuzzy quantification in general. In order to achieve the desired generalization of MB -type models, the use of the fuzzy median in the definition of Qγ (X1 , . . . , Xn ) will be replaced with a more general construction. We get an idea of how to proceed if we simply expand the definition of the generalized fuzzy median and rewrite Qγ (X1 , . . . , Xn ) as Qγ (X1 , . . . , Xn ) = med 1 (sup{Q(Y1 , . . . , Yn ) : Yi ∈ Tγ (Xi )}, 2
inf{Q(Y1 , . . . , Yn ) : Yi ∈ Tγ (Xi )}) .
(8.1)
(This reformulation is justified by Def. 5.9 and Def. 7.18). The fuzzy median can then be replaced with other connectives, e.g. the arithmetic mean (x + y)/2. If we view sup{Q(Y1 , . . . , Yn ) : Yi ∈ Tγ (Xi )} and inf{Q(Y1 , . . . , Yn ) : Yi ∈ Tγ (Xi )} as mappings that depend on γ, then we can even eliminate the pointwise application of the connective and define more ‘holistic’ mechanisms. In the subsequent chapter, we will discuss the corresponding extension of MB -DFSes. Following the usual scheme of developing classes of models, we shall first motivate the new base construction which spans the unrestricted class of Fξ -QFMs. Again, the full class of raw mechanisms will then be pruned to the reasonable cases, by characterizing the Fξ -DFSes in terms of conditions imposed on the aggregation mapping ξ. After presenting examples of models, we will develop the necessary formal machinery to assess the relevant properties of Fξ -DFSes. Finally, the resulting techniques will be tested on the example models. As will emerge from this analysis, the novel class of models is rich enough to accomplish the three objectives which necessitated the quest for additional models. First of all, the two criteria of propagating fuzziness will no longer be tied to all models, and now become independent attributes which may or may not be possessed by the considered examples. The second goal will also be achieved, by identifying a prototypical model in the new class, which generalizes the Choquet integral. Finally, the introduction of the new class will also have a number of theoretical ramifications, and it sheds some light into the unknown of fuzzy quantification.
8.2 The Unrestricted Class of Fξ -QFMs
223
8.2 The Unrestricted Class of Fξ -QFMs As mentioned above, it was decided to build the new class from a generalisation of the construction for MB -type models. Hence let us reconsider the definition of Qγ (X1 , . . . , Xn ) which underlies the construction of the known models. As demonstrated by equation (8.1) above, the application of the fuzzy median connective med 1 can be separated from a prior compu2
tation of the supremum and infimum sup{Q(Y1 , . . . , Yn ) : Yi ∈ Tγ (Xi )} and inf{Q(Y1 , . . . , Yn ) : Yi ∈ Tγ (Xi )}, respectively. By abstracting from the cutting parameter γ in these expressions, we arrive at the following definition of upper and lower bound mappings, which delimit the possible quantification results in the cut ranges: Definition 8.1. n Let a semi-fuzzy quantifier Q : P(E) −→ I and fuzzy arguments X1 , . . . , Xn be given. The upper bound mapping Q,X1 ,...,Xn : I −→ I and the lower bound mapping ⊥Q,X1 ,...,Xn : I −→ I are defined by Q,X1 ,...,Xn (γ) = sup{Q(Y1 , . . . , Yn ) : Y1 ∈ Tγ (X1 ), . . . , Yn ∈ Tγ (Xn )} ⊥Q,X1 ,...,Xn (γ) = inf{Q(Y1 , . . . , Yn ) : Y1 ∈ Tγ (X1 ), . . . , Yn ∈ Tγ (Xn )} . The following properties of Q,X1 ,...,Xn and ⊥Q,X1 ,...,Xn are apparent: Theorem 8.2. n a Suppose Q : P(E) −→ I is a semi-fuzzy quantifier and X1 , . . . , Xn ∈ P(E) choice of fuzzy arguments. Then 1. Q,X1 ,...,Xn is monotonically nondecreasing; 2. ⊥Q,X1 ,...,Xn is monotonically nonincreasing; 3. ⊥Q,X1 ,...,Xn ≤ Q,X1 ,...,Xn . We can therefore define the domain T of aggregation operators ξ : T −→ I which combine the results of Q,X1 ,...,Xn and ⊥Q,X1 ,...,Xn as follows. Definition 8.3. T ⊆ II × II is defined by T = {(, ⊥) : : I −→ I nondecreasing, ⊥ : I −→ I nonincreasing, ⊥ ≤ } . It is apparent from Th. 8.2 that (Q,X1 ,...,Xn , ⊥Q,X1 ,...,Xn ) ∈ T, regardless of n the semi-fuzzy quantifier Q : P(E) −→ I and the chosen fuzzy arguments In addition, it can be shown that T is the minimal set X1 , . . . , Xn ∈ P(E). which embeds all such pairs of mappings. Theorem 8.4. Let (, ⊥) ∈ T be given. We define semi-fuzzy quantifiers Q , Q , Q : P(2 × I) −→ I by
224
8 The Class of Fξ -Models
Q (Y ) = (sup Y )
(8.2)
Q (Y ) = ⊥(inf Y ) Q (Y ) : Q(Y ) = Q (Y ) :
(8.3)
Y =∅ else
(8.4)
where Y = {z ∈ I : (0, z) ∈ Y } Y = {z ∈ I : (1, z) ∈ Y }
(8.5) (8.6)
for all Y ∈ P(2 × I). × I) is defined by Further suppose that the fuzzy subset X ∈ P(2 1 − 1z : c = 0 µX (c, z) = 21 21 : c=1 2 + 2z
(8.7)
for all (c, z) ∈ 2 × I. Then = Q,X and ⊥ = ⊥Q,X . Based on the aggregation operator ξ : T −→ I, we define a corresponding QFM Fξ in the obvious way. Definition 8.5. For every mapping ξ : T −→ I, the QFM Fξ is defined by Fξ (Q)(X1 , . . . , Xn ) = ξ(Q,X1 ,...,Xn , ⊥Q,X1 ,...,Xn ) , n
for all semi-fuzzy quantifiers Q : P(E) X1 , . . . , Xn ∈ P(E).
(8.8)
−→ I and all fuzzy subsets
The class of QFMs so defined will be called the class of Fξ -QFMs. Apparently, it contains a number of models that do not fulfill the DFS axioms. We therefore impose five elementary conditions on the aggregation mapping ξ which provide a characterisation of the well-behaved models, i.e. of the class of Fξ -DFSes.
8.3 Characterisation of Fξ -Models Definition 8.6. For all (, ⊥) ∈ T, we impose the following conditions on aggregation mappings ξ : T −→ I. If = ⊥, then ξ(, ⊥) = (0)
(X-1)
ξ(1 − ⊥, 1 − ) = 1 − ξ(, ⊥)
(X-2)
If = c1 and ⊥(I) ⊆ {0, 1}, then ξ(, ⊥) =
1 2
+ 12 ⊥0∗
(X-3)
ξ( , ⊥) = ξ( , ⊥) (X-4) If ( , ⊥ ) ∈ T such that ≤ and ⊥ ≤ ⊥ , then ξ(, ⊥) ≤ ξ( , ⊥ ) (X-5)
8.3 Characterisation of Fξ -Models
225
As stated in the following theorems, the conditions imposed on ξ capture exactly the requirements that make Fξ a DFS. Let us show first that (X-1) to (X-5) are sufficient for Fξ to be a standard model. Theorem 8.7. If ξ : T −→ I satisfies (X-1) to (X-5), then Fξ is a standard DFS. Theorem 8.8. The conditions (X-1) to (X-5) on ξ : T −→ I are necessary for Fξ to be a DFS. Hence the ‘X-conditions’ are necessary and sufficient for Fξ to be a DFS, and all Fξ -DFSes are indeed standard DFSes. The criteria can also be shown to be independent. To carry out the independence proof, it was useful to relate MB -QFMs to the broader class of Fξ -QFMs: Theorem 8.9. Suppose B : B −→ I is a given aggregation mapping. Then MB = Fξ , where ξ : T −→ I is defined by ξ(, ⊥) = B(med 1 (, ⊥))
(8.9)
2
for all (, ⊥) ∈ T, and med 1 (, ⊥) abbreviates 2
med 1 (, ⊥)(γ) = med 1 ((γ), ⊥(γ)) , 2
2
for all γ ∈ I. Hence all MB -QFMs are Fξ -QFMs, and all MB -DFSes are Fξ -DFSes. Sometimes we should be aware of the relationship between the ‘B-conditions’ and the ‘X-conditions’ in the case of MB -QFMs. The next theorem helps us to show that the ‘X-conditions’ are independent, because the ‘B-conditions’ have already been shown to be independent in [55]: Theorem 8.10. Suppose B : B −→ I is given and ξ : T −→ I is defined by equation (8.9). Then 1. (B-1) is equivalent to (X-1); 2. (B-2) is equivalent to (X-2); 3. a) (B-3) entails (X-3); b) the conjunction of (X-2) and (X-3) entails (B-3); 4. a) (B-4) entails (X-4); b) the conjunction of (X-2) and (X-4) entails (B-4); 5. (B-5) is equivalent to (X-5). Note 8.11. The theorem also proved useful in other contexts, e.g. to show that the MB -DFSes are exactly those Fξ -DFSes that propagate fuzziness in both quantifiers and arguments. Theorem 8.12. The conditions (X-1) to (X-5) are independent.
226
8 The Class of Fξ -Models
8.4 Examples of Fξ -Models Let us now give examples of ‘genuine’ Fξ -DFSes (i.e. models that go beyond the special case of MB -DFSes). Definition 8.13. The QFM Fowa = Fξowa is defined in terms of ξowa : T −→ I by ξowa (, ⊥) =
1 2
1
(γ) dγ + 0
1 2
1
⊥(γ) dγ , 0
for all (, ⊥) ∈ T. Note 8.14. Both integrals are guaranteed to exist because and ⊥ are monotonic and bounded. Theorem 8.15. Fowa is a standard DFS. The DFS Fowa is of special interest because of its close relationship to the well-known Choquet integral. Definition 8.16. Suppose Q : P(E) −→ I is a nondecreasing semi-fuzzy quantifier and X ∈ P(E). The Choquet integral (Ch) X dQ is defined by (Ch)
X dQ =
1
Q(X≥α ) dα . 0
Theorem 8.17. Suppose Q : P(E) −→ I is nondecreasing. Then for all X ∈ P(E), (Ch) X dQ = Fowa (Q)(X) . Hence Fowa coincides with the Choquet integral on fuzzy quantifiers whenever the latter is defined. Recalling the notation µ[j] (X) introduced in Def. 7.91, we obtain the following corollary to the above theorem (cf. [21]): Theorem 8.18. Suppose E = ∅ is a finite base set, q : {0, . . . , |E|} −→ I is a nondecreasing mapping such that q(0) = 0, q(|E|) = 1, and Q : P(E) −→ I is defined by Q(Y ) = q(|Y |) for all Y ∈ P(E). Then for all X ∈ P(E), Fowa (Q)(X) =
|E|
(q(j) − q(j − 1)) · µ[j] (X) ,
j=1
i.e. Fowa consistently generalises Yager’s OWA approach [179].
8.5 Properties of the Fξ -Models
Definition 8.19. The QFM FS is defined in terms of ξS : T −→ I by 1 min(∗ , 1 + 1 ⊥∗≤ 2 ) : ⊥(0) > 1 2 2 1 ≥ ξS (, ⊥) = max(⊥∗1 , 12 − 12 ∗ 2 ) : (0) < 1 : else 2 1 ≤2
for all (, ⊥) ∈ T, where the coefficients f∗ 1 ≤2
1 ≥2
, f∗
227
1 2 1 2
∈ I are defined by
f∗
= inf{γ ∈ I : f (γ) ≤ 12 }
(8.10)
1 ≥ f∗ 2
= inf{γ ∈ I : f (γ) ≥ 12 } ,
(8.11)
for all f : I −→ I. Theorem 8.20. FS is a standard DFS. A third model of interest is the following QFM FA : Definition 8.21. The QFM FA is defined in terms of ξA : T −→ I by 1 0 ∗ 1 : min(⊥0 , 2 + 2 ⊥∗ ) 1↓ 1 1 ∗ ξA (, ⊥) = max(0 , 2 − 2 ∗ ) : 1 : 2
⊥∗0 > ∗0 < else
1 2 1 2
for all (, ⊥) ∈ T. Theorem 8.22. FA is a standard DFS.
8.5 Properties of the Fξ -Models Turning to properties of Fξ -DFSes, let us first investigate the precise conditions under which an Fξ -DFS propagates fuzziness in quantifiers and/or in arguments. Definition 8.23. We say that ξ : T −→ I propagates fuzziness if and only if ξ(, ⊥) c ξ( , ⊥ ) whenever (, ⊥), ( , ⊥ ) ∈ T with c and ⊥ c ⊥ .
228
8 The Class of Fξ -Models
Theorem 8.24. An Fξ -QFM propagates fuzziness in quantifiers if and only if ξ propagates fuzziness. If Fξ is a DFS, then ξ’s propagating fuzziness is equivalent to the following condition, which is much easier to check: Theorem 8.25. Suppose ξ : T −→ I satisfies (X-1) to (X-5). Then ξ propagates fuzziness if and only if ξ(, ⊥) = ξ(, max(⊥, 12 )) for all (, ⊥) ∈ T with ⊥(0) > 12 . For proofs that a given Fξ -DFS does not propagate fuzziness in quantifiers, the following necessary condition can be of interest. Theorem 8.26. Let ξ : T −→ I be a mapping which satisfies (X-1) to (X-5). If ξ propagates fuzziness, then ξ(, ⊥) = whenever (, ⊥) ∈ T such that (0) ≥
1 2
1 2
≥ ⊥(0).
It is this condition which explains why the results of MB -DFSes tend to attain 1 2 when the input is overly fuzzy. If one really needs different quantification results for (, ⊥), ( , ⊥ ) with ⊥(0) ≤ 12 ≤ (0) and ⊥ (0) ≤ 12 ≤ (0), one obviously must resort to Fξ -DFSes that do not propagate fuzziness in quantifiers. As concerns the examples of Fξ -models, one can attest the following. Theorem 8.27. Fowa does not propagate fuzziness in quantifiers. Hence Fowa is a ‘genuine’ Fξ -DFS (i.e. not an MB -DFS) by Th. 7.75. In particular, this proves that the Fξ -DFSes indeed form a more general class of models than MB -DFSes. For the DFS FS , we obtain a positive result. Theorem 8.28. FS propagates fuzziness in quantifiers. Turning to FA , we have Theorem 8.29. FA does not propagate fuzziness in quantifiers. We can also state the necessary and sufficient conditions on ξ for Fξ to propagate fuzziness in arguments. To this end, we first introduce the following property of ξ.
8.5 Properties of the Fξ -Models
229
Definition 8.30. We say that ξ : T −→ I propagates unspecificity if and only if ξ(, ⊥) c ξ( , ⊥ ) whenever (, ⊥), ( , ⊥ ) ∈ T satisfy ≥ and ⊥ ≤ ⊥ . Theorem 8.31. An Fξ -QFM propagates fuzziness in arguments if and only if ξ propagates unspecificity. If Fξ is sufficiently well-behaved (in particular, if Fξ is a DFS), it is possible to state the following equivalent condition: Theorem 8.32. Suppose ξ : T −→ I satisfies (X-2), (X-4) and (X-5). Then the following conditions are equivalent: a. ξ propagates unspecificity; b. for all (, ⊥) ∈ T with ⊥(0) ≥ 12 , ξ(, ⊥) = ξ(c1 , ⊥), where c1 (x) = 1 for all x ∈ I. Let us also consider a necessary condition which facilitates the proof that a given Fξ -DFSes does not propagate fuzziness in arguments: Theorem 8.33. If an Fξ -DFS propagates fuzziness in arguments, then ξ(, ⊥) = whenever (, ⊥) ∈ T such that (0) ≥
1 2
1 2
≥ ⊥(0).
For example, we can use this condition to prove that Theorem 8.34. Fowa does not propagate fuzziness in arguments. As concerns FS , we have the following result. Theorem 8.35. FS does not propagate fuzziness in arguments. Note 8.36. Hence FS is a ‘genuine’ Fξ -DFS as well, which is apparent from Th. 7.76. Turning to FA , which failed to propagate fuzziness in quantifiers, it is easily observed that FA still propagates fuzziness in its arguments. Theorem 8.37. FA propagates fuzziness in arguments.
230
8 The Class of Fξ -Models
In particular, the two conditions of propagating fuzziness are independent in the case of Fξ -DFSes, as stated in the following corollary. Theorem 8.38. The conditions of propagating fuzziness in quantifiers and in arguments are independent for Fξ -DFSes. Finally, we can characterize the subclass of MB -DFSes which are exactly those Fξ -DFSes that propagate fuzziness in both quantifiers and arguments. Theorem 8.39. Suppose an Fξ -DFS propagates fuzziness in both quantifiers and arguments. Then Fξ is an MB -DFS. Note 8.40. The converse implication is already known from Th. 7.75 and Th. 7.76. Next we shall investigate the exact conditions under which an Fξ -QFM is Qcontinuous or arg-continuous. To be able to discuss Q-continuous Fξ -QFMs, we introduce a metric d : T × T −→ I. For all nondecreasing mappings , : I −→ I, we define d(, ) = sup{|(γ) − (γ)| : γ ∈ I} .
(8.12)
We proceed similarly for nondecreasing mappings ⊥, ⊥ : I −→ I. In this case, d(⊥, ⊥ ) = sup{|⊥(γ) − ⊥ (γ)| : γ ∈ I} .
(8.13)
Finally, we define d : T × T −→ I by d((, ⊥), ( , ⊥ )) = max(d(, ), d(⊥, ⊥ )) ,
(8.14)
for all (, ⊥), ( , ⊥ ) ∈ T. It is apparent that d is indeed a metric. We will utilize d to express a condition on ξ which characterises the Q-continuous Fξ -QFMs. Theorem 8.41. Let ξ : T −→ I be a given mapping which satisfies (X-5). Then the following conditions are equivalent: a. Fξ is Q-continuous; b. for all ε > 0, there exists δ > 0 such that |ξ(, ⊥) − ξ( , ⊥ )| < ε whenever (, ⊥), ( , ⊥ ) ∈ T satisfy d((, ⊥), ( , ⊥ )) < δ. If ξ is sufficiently well-behaved, then the above condition can be simplified to the following criterion, which is easier to check. Theorem 8.42. Suppose ξ : T −→ I satisfies (X-2) and (X-5). Then the following conditions are equivalent:
8.5 Properties of the Fξ -Models
231
a. Fξ is Q-continuous; b. for all ε > 0, there exists δ > 0 such that ξ( , ⊥) − ξ(, ⊥) < ε whenever (, ⊥), ( , ⊥) ∈ T satisfy d(, ) < δ and ≤ . I have the following results for the examples of Fξ -DFSes. Theorem 8.43. Fowa is Q-continuous. Theorem 8.44. FS is not Q-continuous. Theorem 8.45. FA is not Q-continuous. As concerns continuity in arguments, we first need to introduce another distance measure d : T × T −→ I, which can be used to characterise the argcontinuous Fξ -QFMs in terms of conditions on ξ. For all nondecreasing mappings , : I −→ I, we define d (, ) = sup{inf{γ min((γ ), (γ )) ≥ max((γ), (γ))} − γ : γ ∈ I} . (8.15) Similarly for nonincreasing mappings ⊥, ⊥ : I −→ I, d (⊥, ⊥ ) = sup{inf{γ : max(⊥(γ ), ⊥ (γ )) ≤ min(⊥(γ), ⊥ (γ))} − γ : γ ∈ I} . (8.16) Finally, we define d : T × T −→ I by d ((, ⊥), ( , ⊥ )) = max(d (, ), d (⊥, ⊥ )) ,
(8.17)
for all (, ⊥), ( , ⊥ ) ∈ T. It is easily checked that d is a ‘pseudo-metric’, i.e. it is symmetric and satisfies the triangle inequality, but d ((, ⊥), ( , ⊥ )) = 0 does not imply that (, ⊥) = ( , ⊥ ). However, d is a metric modulo , i.e.
on the equivalence classes of (, ⊥) ∼ ( , ⊥ ) ⇔ ( , ⊥ ) = ( , ⊥ ). Hence d ((, ⊥), ( , ⊥ )) = 0 entails that (, ⊥) ∼ ( , ⊥ ), i.e. ξ(, ⊥) = ξ( , ⊥ ) whenever ξ satisfies (X-2), (X-4) and (X-5). Based on d , we can now assert the following. Theorem 8.46. Suppose ξ : T −→ I satisfies (X-2), (X-4) and (X-5). Then the following conditions are equivalent: a. Fξ is arg-continuous; b. for all (, ⊥) ∈ T and all ε > 0, there exists δ > 0 such that |ξ(, ⊥) − ξ( , ⊥ )| < ε whenever ( , ⊥ ) ∈ T satisfies d ((, ⊥), ( , ⊥ )) < δ.
232
8 The Class of Fξ -Models
In some cases, the following sufficient condition can shorten the proof that a given Fξ is arg-continuous. Theorem 8.47. Suppose ξ : T −→ I satisfies (X-2) and (X-5) Then Fξ is arg-continuous if the following condition holds: For all ε > 0 there exists δ > 0 such that ξ( , ⊥) − ξ(, ⊥) < ε whenever (, ⊥), ( , ⊥) ∈ T satisfy d (, ) < δ and ≤ . Based on these theorems, it is easy to prove the following. Theorem 8.48. Fowa is arg-continuous. Theorem 8.49. FS is not arg-continuous. Theorem 8.50. FA is not arg-continuous. Hence Fowa is continuous both in quantifiers and arguments; which is important for applications. The second example, FS , fails at both continuity conditions and is thus not practical. (We will see below that FS is of theoretical interest because it represents a boundary case of Fξ -DFS). Let us also investigate the specificity of Fξ -DFSes. The following theorem facilitates the proof that a given Fξ -QFM is less specific than another Fξ -QFM by relating the specificity order on Fξ to the specificity order on ξ: Theorem 8.51. Let ξ, ξ : T −→ I be given mappings. Then the following conditions are equivalent: a. Fξ c Fξ ; b. ξ c ξ . In the case of Fξ -models that propagate fuzziness in quantifiers, it is sufficient to check a simpler condition. Theorem 8.52. Let ξ, ξ : T −→ I be given mappings which satisfy (X-1) to (X-5) and suppose that ξ, ξ have the additional property that ξ(, ⊥) = ξ (, ⊥) = 12 whenever (, ⊥) ∈ T with (0) ≥ 12 ≥ ⊥(0). Then the following conditions are equivalent: a. Fξ c Fξ ; b. for all (, ⊥) ∈ T with ⊥(0) > 12 , ξ(, ⊥) ≤ ξ (, ⊥). As regards least specific Fξ -DFSes, we can prove the following: Theorem 8.53. MU is the least specific Fξ -DFS.
8.5 Properties of the Fξ -Models
233
Turning to the issue of most specific models, let us first state a theorem for establishing or rejecting specificity consistency. This is useful because specificity consistency is tightly coupled to the existence of least upper specificity bounds, see Th. 5.13. Theorem 8.54. Consider a pair of mappings ξ, ξ : T −→ I. The QFMs Fξ and Fξ are specificity consistent if and only if ξ, ξ are specificity consistent, i.e. for all (, ⊥) ∈ T, either {ξ(, ⊥), ξ (, ⊥)} ⊆ [0, 12 ] or {ξ(, ⊥), ξ (, ⊥)} ⊆ [ 12 , 1]. An investigation of a possible most specific Fξ -DFS reveals the following. Theorem 8.55. The class of Fξ -DFSes is not specificity consistent. Hence by Th. 5.13, a ‘most specific Fξ -DFS’ does not exist. However, we obtain a positive result if we restrict attention to the class of Fξ -DFSes which propagate fuzziness in quantifiers or arguments. This is apparent from the following observation. Theorem 8.56. Suppose F is a collection of Fξ -DFSes Fξ ∈ F with the property that ξ(, ⊥) = 1 1 2 whenever (, ⊥) ∈ T is such that (0) ≥ 2 ≥ ⊥(0). Then F is specificity consistent. We then have the following corollaries. Theorem 8.57. The class of Fξ -DFSes that propagate fuzziness in quantifiers is specificity consistent. Theorem 8.58. The class of Fξ -DFSes that propagate fuzziness in arguments is specificity consistent. By Th. 5.13, the Fξ -models that propagate fuzziness in quantifiers have a least upper specificity bound which, as it turns out, also propagates fuzziness in quantifiers. Theorem 8.59. FS is the most specific Fξ -DFS that propagates fuzziness in quantifiers. Similarly, we can conclude from Th. 8.58 that there is a most specific Fξ -DFS that propagates fuzziness in arguments. Theorem 8.60. FA is the most specific Fξ -DFS that propagates fuzziness in arguments.
234
8 The Class of Fξ -Models
8.6 Chapter Summary In this chapter, we have generalized the construction of MB -DFSes in order to get a grip on a richer class of models. The construction of these models in terms of three-valued cuts provided a suitable starting point for the generalisation to a broader class of models. Specifically Qγ (X1 , . . . , Xn ) and B : B −→ I were replaced with a pair of upper and lower bound mappings (Q,X1 ,...,Xn , ⊥Q,X1 ,...,Xn ) ∈ T, along with an aggregation operator ξ : T −→ I which maps such pairs into quantification results ξ(Q,X1 ,...,Xn , ⊥Q,X1 ,...,Xn ). In order to identify the intended models within the unrestricted class of Fξ QFMs, the precise requirements on ξ that make Fξ a DFS have been formalized in terms of a system of necessary and sufficient conditions. The full set of criteria required to check the interesting properties of Fξ based on properties of the aggregation mapping ξ have also been developed. These criteria make it easy to decide whether a given Fξ -type model propagates fuzziness in quantifiers and/or arguments and thus complies with the intuitive expectation that less detailed input should not result in more specific output; whether it satisfies the continuity requirements and hence shows a certain robustness against noise in the arguments or alternative interpretations of a fuzzy quantifier; and how it compares to other models in terms of specificity. This analysis also revealed that some of the Fξ -models are rather different from MB -DFSes. Among the Fξ -DFSes, some models neither propagate fuzziness in arguments nor in quantifiers; some models propagate fuzziness in quantifiers, but not in arguments, while others propagate fuzziness in arguments, but not in quantifiers. Of course, there are also models that satisfy both requirements – and the MB -DFSes were characterized as precisely those Fξ -DFSes which propagate fuzziness both in quantifiers and arguments. The analysis of the new Fξ -models also explains why some models which do not propagate fuzziness will perform better than those that propagate fuzziness in situations where the inputs are overly fuzzy. This can be attributed to the property described in Th. 8.26 and Th. 8.33: If an Fξ -DFS propagates fuzziness in quantifiers or in arguments, then Fξ (Q)(X1 , . . . , Xn ) = 12 whenever Q,(X1 ,...,Xn ) ≥ 12 and ⊥Q,(X1 ,...,Xn ) ≤ 12 . In other words, there is a certain range in which the results of the given Fξ -DFS are constantly 12 , which can be undesirable if one needs a fine-grained result ranking. Because both types of propagating fuzziness cause this kind of behaviour, one must resort to models that violate both conditions if one needs specific results even when there is a lot of fuzziness in the inputs. The DFS Fowa is a promising choice in such situations because it also fulfills the continuity requirements. It is therefore anticipated that Fowa will find a number of uses in applications of fuzzy quantifiers. However, Fowa is also an interesting model from a theoretical point of view because the model can be shown to embed the Choquet integral. Thus the chapter relates the new theory to existing work on fuzzy quantification because the Choquet integral is known to embed the core OWA approach.
8.6 Chapter Summary
235
Concerning theoretical insights into fuzzy quantification, the chapter has shown that there are standard models beyond the MB -type – models which do not propagate fuzziness in arguments and/or quantifiers. It also turned out that the conditions of propagating fuzziness in quantifiers and arguments are independent for Fξ -type models. As opposed to the original MB -DFSes, the Fξ models were further shown to be not specificity consistent. This means that a ‘most specific Fξ -DFS’ does not exist. However, there is a most specific Fξ -DFS which propagates fuzziness in quantifiers, FS , and there is also a most specific Fξ -DFS which propagates fuzziness in arguments, FA . FS and FA are not practical models because they violate both continuity conditions, but this seems to be typical for boundary cases with respect to specificity. It is not clear at this stage whether the new models form a ‘natural’ class with certain distinguished properties. However, the Fξ -DFSes comprise relevant examples like Fowa , and all ‘practical’ models described in this monograph will belong to the Fξ class. We shall see in Chap. 11 that the models are also computational, and the upper and lower bound mappings Q,X1 ,...,Xn and ⊥Q,X1 ,...,Xn can be implemented efficiently for common quantifiers. For these reasons, the Fξ models will be of special interest to future applications of fuzzy quantifiers.
9 The Full Class of Models Defined in Terms of Three-Valued Cuts
9.1 Motivation and Chapter Overview In this chapter, a further step will be taken to extend the class of known models. While the construction of Fξ -QFM is based on upper and lower bounds on the results obtained for the three-valued cuts and a subsequent aggregation step, we will now define fuzzification mechanisms directly in terms of these supervaluation sets, to which an aggregation mapping Ω is then applied. In order to identify the useful models in this class, we then make explicit the necessary and sufficient conditions on the aggregation mapping Ω that make FΩ a DFS. In addition, the required theory will be developed which permits us to check interesting properties of FΩ -models. It is shown that the new class of models is genuinely broader than Fξ -DFSes. However, the new class does not introduce any new ‘practical’ models because those FΩ -DFSes which are Q-continuous (and thus of potential interest to applications) are in fact Fξ -DFSes. These findings justify the introduction of Fξ -QFMs. It will also be shown that the full class of standard models which propagate fuzziness both in quantifiers and arguments is genuinely broader than the class of MB -DFSes. But again, all models outside the known range of models are not Q-continuous. We shall further investigate a subclass of FΩ -QFMs, the class of Fω -QFMs. These QFMs can be expressed in terms of a simpler construction which excludes some of the original FΩ -QFMs. However, the considered subclass still contains all well-behaved models, i.e. the FΩ -DFSes and Fω -DFSes coincide. The Fω type will be needed later on to establish the relationship between the models defined in terms of three-valued cuts and those defined in terms of the extension principle.
9.2 The Unrestricted Class of FΩ -QFMs Let us now extend the class of Fξ -QFMs to the full class of QFMs definable in terms of three-valued cuts of the argument sets. Hence consider a semi-fuzzy I. Gl¨ ockner: Fuzzy Quantifiers, StudFuzz 193, 237–258 (2006) c Springer-Verlag Berlin Heidelberg 2006 www.springerlink.com
238
9 The Full Class of Models Defined in Terms of Three-Valued Cuts n
quantifier Q : P(E) −→ I and a choice of fuzzy arguments X1 , . . . , Xn ∈ P(E). In order to spot a starting point for the desired generalization, we recall the definition of Q,X1 ,...,Xn and ⊥Q,X1 ,...,Xn . Apparently, the upper and lower bound mappings can be decomposed into (a) the three-valued cut mechanism, and (b) a subsequent inf/sup-based aggregation: Q,X1 ,...,Xn (γ) = sup{Q(Y1 , . . . , Yn ) : (Y1 , . . . , Yn ) ∈ Tγ (X1 , . . . , Xn )} = sup SQ,X1 ,...,Xn (γ)
(9.1)
⊥Q,X1 ,...,Xn (γ) = inf{Q(Y1 , . . . , Yn ) : (Y1 , . . . , Yn ) ∈ Tγ (X1 , . . . , Xn )} = inf SQ,X1 ,...,Xn (γ)
(9.2)
and
for all γ ∈ I, provided we define the supervaluation set SQ,X1 ,...,Xn (γ) as follows. Definition 9.1. n For every semi-fuzzy quantifier Q : P(E) −→ I and all X1 , . . . , Xn ∈ P(E), the mapping SQ,X1 ,...,Xn : I −→ P(I) is defined by SQ,X1 ,...,Xn (γ) = {Q(Y1 , . . . , Yn ) : (Y1 , . . . , Yn ) ∈ Tγ (X1 , . . . , Xn )} , for all γ ∈ I. Some basic properties of the set of all supervaluation results SQ,X1 ,...,Xn are stated in this theorem. Theorem 9.2. n Consider a semi-fuzzy quantifier Q : P(E) −→ I and choice of fuzzy subsets X1 , . . . , Xn ∈ P(E). Then a. SQ,X1 ,...,Xn (0) = ∅; b. SQ,X1 ,...,Xn (γ) ⊆ SQ,X1 ,...,Xn (γ ) whenever γ, γ ∈ I with γ ≤ γ . It is thus apparent that all possible choices of SQ,X1 ,...,Xn are contained in the following set K. Definition 9.3. I K ⊆ P(I) is defined by I
K = {S ∈ P(I) : S(0) = ∅ and S(γ) ⊆ S(γ ) whenever γ ≤ γ } . As we shall now state, K is the minimal set which contains all possible choices for SQ,X1 ,...,Xn . To this end, we first have to introduce coefficients s(z) ∈ I associated with S ∈ K, which will play an essential role throughout this chapter.
9.2 The Unrestricted Class of FΩ -QFMs
239
Definition 9.4. Consider S ∈ K. We associate with S a mapping s : I −→ I defined by s(z) = inf{γ ∈ I : z ∈ S(γ)} , for all z ∈ I. It is convenient to define a notation for the s(z)’s obtained from a given quantifier and arguments. Definition 9.5. n For every semi-fuzzy quantifier Q : P(E) −→ I and all X1 , . . . , Xn ∈ P(E), we denote the mapping s obtained from SQ,X1 ,...,Xn by applying Def. 9.4 by sQ,X1 ,...,Xn : I −→ I. The resulting mapping is thus defined by sQ,X1 ,...,Xn (z) = inf{γ ∈ I : z ∈ SQ,X1 ,...,Xn (γ)} , for all z ∈ I. As we shall see later, all FΩ -DFSes can be defined in terms of sQ,X1 ,...,Xn . Theorem 9.6. Let S ∈ K be given and define Q : P(2 × I) −→ I by Q(Y ) = Qinf Y (Y )
(9.3)
Y = {y ∈ I : (0, y) ∈ Y } Y = {y ∈ I : (1, y) ∈ Y }
(9.4) (9.5)
for all Y ∈ P(2 × I), where
and the Qz : P(I) −→ I, z ∈ I are defined by z : sup Y > s(z) Qz (Y ) = z0 : else / S(s(z)), and for all Y ∈ P(I) if z ∈ z : Qz (Y ) = z0 :
sup Y ≥ s(z) else
(9.6)
(9.7)
in the case that z ∈ S(s(z)). z0 is an arbitrary element z0 ∈ S(0) ,
(9.8)
× I) is defined by which exists by Th. 9.2. Further suppose that X ∈ P(2 1 : a=0 (9.9) µX (a, y) = 21 1 : a=1 2 − 2y for all a ∈ 2, y ∈ I. Then SQ,X = S.
240
9 The Full Class of Models Defined in Terms of Three-Valued Cuts
Hence K is exactly the set of all S = SQ,X1 ,...,Xn obtained for arbitrary choices of quantifiers and arguments. In order to obtain a quantification result from SQ,X1 ,...,Xn , we will apply an aggregation operator Ω : K −→ I in the obvious way. Definition 9.7. Consider an aggregation operator Ω : K −→ I. The corresponding QFM FΩ is defined by FΩ (Q)(X1 , . . . , Xn ) = Ω(SQ,X1 ,...,Xn ) , n
for all semi-fuzzy quantifiers Q : P(E) X1 , . . . , Xn ∈ P(E).
−→ I and fuzzy arguments
By the class of FΩ -QFMs, we denote the collection of all QFMs defined in this way. As usual, we must impose conditions to shrink the full class of FΩ to its subclass of FΩ -DFSes.
9.3 Characterisation of the FΩ -Models Definition 9.8. For all S ∈ K, we define S , S ∈ K as follows. ∩{S(γ ) : γ > γ} : γ < 1 S(0) S = S = ∪{S(γ ) : γ < γ} I : γ=1
: :
γ=0 γ>0
for all γ ∈ I. Note 9.9. The definition is slightly asymmetric; we have departed from the usual scheme of defining S (1) = S(1) in this case because the present definition of S (1) = I allows more compact conditions on Ω, and eventually for shorter proofs. Let us further stipulate a definition of S S which will serve to express a monotonicity condition on Ω. Definition 9.10. For all S, S ∈ K, let us say that S S if and only if the following two conditions are valid for all γ ∈ I: 1. for all z ∈ S(γ), there exists z ∈ S (γ) with z ≥ z; 2. for all z ∈ S (γ), there exists z ∈ S(γ) with z ≤ z . It is apparent from this definition that is reflexive and transitive, but not necessarily antisymmetric (i.e. S S and S S does not imply that S = S ). Hence is a preorder. We are now ready to state the conditions on reasonable choices of Ω : K −→ I, in analogy to the conditions (B-1)–(B-5) for MB -models and to the conditions (X-1)–(X-5) for the Fξ -type.
9.3 Characterisation of the FΩ -Models
241
Definition 9.11. Consider Ω : K −→ I. We impose the following conditions on Ω. For all S ∈ K, If there exists a ∈ I with S(γ) = {a} for all γ ∈ I, then Ω(S) = a. (Ω-1) If S (γ) = {1 − z : z ∈ S(γ)} for all γ ∈ I, then Ω(S ) = 1 − Ω(S). (Ω-2) If 1 ∈ S(0) and S(γ) ⊆ {0, 1} for all γ ∈ I, then Ω(S) =
1 2
+ 12 s(0). (Ω-3)
Ω(S) = Ω(S )
(Ω-4)
If S ∈ K satisfies S S , then Ω(S) ≤ Ω(S ).
(Ω-5)
Note 9.12. The only condition which is slighly different from the usual scheme is (Ω-4). The departure from requiring Ω(S ) = Ω(S ) turned out to shorten the proofs. The latter equation is entailed by the above conditions, however. Theorem 9.13. The conditions (Ω-1)–(Ω-5) on Ω : K −→ I are sufficient for FΩ to be a standard DFS. In the following, we consider another construction which elucidates the exact properties of S ∈ K that a conforming choice of Ω can rely on. Definition 9.14. For all S ∈ K, we define S ‡ ∈ K by S ‡ (γ) = {z ∈ I : there exist z , z ∈ S(γ) with z ≤ z ≤ z } for all γ ∈ I. Note 9.15. It is apparent that indeed S ‡ ∈ K. The effect of applying ‡ to S is that of ‘filling the gaps’ in the interior of S. The resulting S ‡ will be a closed, half-open, or open interval. The importance of this construction with respect to FΩ -QFMs stems from the invariance of well-behaved FΩ -QFMs with respect to the gap-filling operation: Theorem 9.16. Suppose Ω : K −→ I is a given mapping such that FΩ satisfies (Z-5). Then Ω(S) = Ω(S ‡ ) , for all S ∈ K. This means that a well-behaved choice of Ω may only depend on sup S(γ), inf S(γ), and the knowledge whether sup S(γ) ∈ S(γ) and inf S(γ) ∈ S(γ). Apart from this, the ‘interior structure’ of S(γ) is irrelevant to the determination of Ω(S). The above gap-filling operation is useful for proving that (Ω-5) be necessary for FΩ to satisfy (Z-5). The other ‘Ω-conditions’ are easily shown to be necessary for FΩ to be a DFS, and require only minor adjustments of the corresponding proofs for Fξ -QFMs that were presented in [57].
242
9 The Full Class of Models Defined in Terms of Three-Valued Cuts
Theorem 9.17. The conditions (Ω-1)–(Ω-5) on Ω : K −→ I are necessary for FΩ to be a DFS. Hence the ‘Ω-conditions’ are necessary and sufficient for FΩ to be a DFS, and all FΩ -DFSes are indeed standard models. In order to prove that the criteria are independent, let us relate Fξ -QFMs to their apparent superclass of FΩ -QFMs. Theorem 9.18. Consider an aggregation mapping ξ : T −→ I. Then Fξ = FΩ , where Ω : K −→ I is defined by Ω(S) = ξ(S , ⊥S ) ,
(9.10)
for all S ∈ K, and (S , ⊥S ) ∈ T is defined by S (γ) = sup S(γ)
(9.11)
⊥S (γ) = inf S(γ)
(9.12)
for all γ ∈ I. This is apparent. Hence all Fξ -QFMs are FΩ -QFMs and all Fξ -DFSes are FΩ -DFSes. The next theorem permits to reduce the independence proof of the conditions on Ω to the independence proof of the conditions imposed on ξ. Theorem 9.19. Suppose ξ : T −→ I is given and Ω : K −→ I is defined by (9.10). Then a. (X-1) is equivalent to (Ω-1); b. (Z-2) is equivalent to (Ω-2); c. (Z-3) is equivalen to (Ω-3); d. 1. the conjunction of (Z-2), (X-4) and (X-5) implies (Ω-4); 2. (Ω-4) implies (X-4); e. (X-5) is equivalent to (Ω-5). Based on this theorem and the known independence of the conditions (X1)–(X-5), it is now easy to prove the desired result concerning independence. Theorem 9.20. The conditions (Ω-1)–(Ω-5) imposed on Ω : K −→ I are independent.
9.5 The Classes of Fω -Models and FΩ -Models Coincide
243
9.4 The Unrestricted Class of Fω -QFMs As has been remarked above, every FΩ -DFS can be defined in terms of the mapping sQ,(X1 ,...,Xn ) : I −→ I and this usually makes a simpler representation. It therefore makes sense to introduce the class of QFMs definable in terms of sQ,(X1 ,...,Xn ) : I −→ I. Theorem 9.21. n Suppose Q : P(E) −→ I is a semi-fuzzy quantifier and X1 , . . . , Xn ∈ P(E) a choice of fuzzy arguments. Then sQ,X1 ,...,Xn −1 (0) = ∅, i.e. there exists z0 ∈ I with sQ,X1 ,...,Xn (z0 ) = 0. Hence all possible choices of sQ,X1 ,...,Xn are contained in the following set L. Definition 9.22. L ⊆ II is defined by L = {s ∈ II : s−1 (0) = ∅} . The following theorem states that L is the minimum subset of II which contains all possible mappings sQ,X1 ,...,Xn : Theorem 9.23. For all s ∈ L, let us define S : I −→ P(I) by S(γ) = {z ∈ I : γ ≥ s(z)}
(9.13)
for all γ ∈ I. It is apparent that S ∈ K. Let us further suppose that Q : × I) is the fuzzy subset P(2 × I) −→ I is defined by (9.3) and that X ∈ P(2 defined by (9.9). Then sQ,X = s. In order to define quantification results based on sQ,X1 ,...,Xn , we need an aggregation mapping ω : L −→ I. The corresponding QFM Fω is defined in the usual way. Definition 9.24. Let a mapping ω : L −→ I be given. By Fω we denote the QFM defined by Fω (Q)(X1 , . . . , Xn ) = ω(sQ,X1 ,...,Xn ) , n for all semi-fuzzy quantifiers Q : P(E) −→ I and all X1 , . . . , Xn ∈ P(E).
9.5 The Classes of Fω -Models and FΩ -Models Coincide It is obvious from the definition of sQ,X1 ,...,Xn in terms of SQ,X1 ,...,Xn that all Fω -QFMs are FΩ -QFMs, using the apparent choice of Ω : K −→ I,
244
9 The Full Class of Models Defined in Terms of Three-Valued Cuts
Ω(S) = ω(s)
(9.14)
where s(z) = inf{γ ∈ I : z ∈ S(γ)}, see Def. 9.5. It is then clear from Def. 9.7 and Def. 9.24 that FΩ = Fω .
(9.15)
The converse is not true, i.e. it is not the case that all FΩ -QFMs are Fω QFMs. However, if an FΩ -QFMs is sufficiently ‘well-behaved’, then it is also an Fω -QFM. In particular, this is the case for an FΩ -DFS. Theorem 9.25. a. If Ω : K −→ I satisfies (Ω-4), then FΩ = Fω , provided we define ω : L −→ I by ω(s) = Ω(S)
(9.16)
S(γ) = {z ∈ I : γ ≥ s(z)}
(9.17)
for all s ∈ L, where
for all γ ∈ I. b. If Ω : K −→ I does not satisfy (Ω-4), then FΩ is not an Fω -QFM. Therefore an FΩ -QFM is an Fω -QFM if and only if it satisfies (Ω-4). Let us recall that by Th. 9.17, (Ω-4) is necessary for FΩ to be a DFS. This means that we do not lose any models of interest if we restrict attention to the class of those FΩ -QFMs which satisfy (Ω-4), and can therefore be expressed as Fω -QFMs.
9.6 Characterisation of the Fω -Models It is then convenient to switch from (Ω-1)–(Ω-5) to corresponding conditions on ω : L −→ I. To accomplish this, we will first define a preorder ⊆ L × L, which is needed to express a monotonicity condition. Definition 9.26. For all s, s ∈ L, s s if and only if the following two conditions hold: a. for all z ∈ I, inf{s (z ) : z ≥ z} ≤ s(z); b. for all z ∈ I, inf{s(z) : z ≤ z } ≤ s (z ). In the case of Fω -QFMs, we can express the conditions on ω : L −→ I even more succinctly.
9.6 Characterisation of the Fω -Models
245
Definition 9.27. We impose the following conditions on ω : L −→ I. For all s ∈ L, If s−1 ([0, 1)) = {a}, then ω(s) = a. If s (z) = s(1 − z) for all z ∈ I, then ω(s ) = 1 − ω(s). If s(1) = 0 and s−1 ([0, 1)) ⊆ {0, 1}, then ω(s) =
1 2
+ 12 s(0).
If s ∈ L with s s , then ω(s) ≤ ω(s ).
(ω-1) (ω-2) (ω-3) (ω-4)
Theorem 9.28. Let ω : L −→ I be given and suppose that Ω : K −→ I is defined in terms of ω according to (9.14). Then the following relationships hold. a. Ω b. Ω c. Ω d. Ω e. Ω
satisfies satisfies satisfies satisfies satisfies
(Ω-1) if (Ω-2) if (Ω-3) if (Ω-4); (Ω-5) if
and only if ω satisfies (ω-1); and only if ω satisfies (ω-2); and only if ω satisfies (ω-3); and only if ω satisfies (ω-4).
The following theorems are now obvious from the results for Ω. Theorem 9.29. The conditions (ω-1)–(ω-4) are sufficient for Fω to be a standard DFS. Theorem 9.30. The conditions (ω-1)–(ω-4) are necessary for Fω to be a DFS. Theorem 9.31. The conditions (ω-1)–(ω-4) are independent. To sum up, Fω -DFSes comprise all FΩ -DFSes, they are usually easier to define, and simpler conditions (ω-1)–(ω-4) have to be checked. However, the monotonicity condition (ω-4) on ω is somewhat more complicated compared to the monotonicity condition (Ω-5) on Ω. In the following, let us therefore introduce a simpler preorder for expressing monotonicity, which when combined with an additional condition can replace and the corresponding monotonicity condition (ω-4). is defined as follows. Definition 9.32. For all s, s ∈ L, s s if and only if the following two conditions hold: a. for all z ∈ I, there exists z ≥ z with s (z ) ≤ s(z); b. for all z ∈ I, there exists z ≤ z with s(z) ≤ s (z ). In order to state the additional condition, it is necessary to introduce a construction on s ∈ L which corresponds to the gap-filling operation S ‡ defined on S ∈ K.
246
9 The Full Class of Models Defined in Terms of Three-Valued Cuts
Definition 9.33. For all s ∈ L, s‡ : I −→ I is defined by s‡ (z) = max(inf{s(z ) : z ≤ z}, inf{s(z ) : z ≥ z}) , for all z ∈ I. Some basic properties of
‡
are the following.
Theorem 9.34. Let s ∈ L be given. Then a. s‡ ≤ s; b. s‡ ∈ L; c. s‡ is concave, i.e. s‡ (z2 ) ≤ max(s‡ (z1 ), s‡ (z3 )) whenever z1 ≤ z2 ≤ z3 . This ‘concavification’ construction was needed for the proof that ω’s satisfying (ω-4) entails that Ω defined by (9.14) satisfies (Ω-5). However, it will also play its role in defining examples of Fω models, see Def. 9.36 and Def. 9.43. The connection between ‡ and monotonic behaviour of ω becomes visible in the next theorem, which facilitates the proof that a given ω satisfies (ω-4), by reducing it to the ‡ -invariance of ω, and its monotonicity with respect to the simplified preorder . Theorem 9.35. For all ω : L −→ I, the monotonicity condition (ω-4) is equivalent to the conjunction of the following two conditions: a. for all s, s ∈ L with s s , it holds that ω(s) ≤ ω(s ); b. for all s ∈ L, ω(s‡ ) = ω(s).
9.7 Examples of FΩ -Models We will now present four examples of ‘genuine’ Fω -models, i.e. of Fω -DFSes which do not belong to the class of Fξ -DFSes. To this end, it is necessary to introduce some coefficients defined in terms of a given s ∈ L. Definition 9.36. ≤
1
1 ≥2
For all s ∈ L, the coefficients s∗,0 , s∗⊥,0 , s1,∗ , s1⊥,∗ , s∗ 2 , s∗ by
∈ I are defined
9.7 Examples of FΩ -Models
s∗,0 = sup s‡ s∗⊥,0 s1,∗ s1⊥,∗ 1 ≤ s∗ 2 1 ≥ s∗ 2
−1
247
(0)
(9.18)
(0)
(9.19)
([0, 1))
(9.20)
= inf s−1 ([0, 1))
(9.21)
= inf{s(z) : z ≤ 12 }
(9.22)
= inf{s(z) : z ≥ 12 } .
(9.23)
‡ −1
= inf s
−1
= sup s
Based on these coefficients, we can now define the examples of Fω -models. Definition 9.37. By ωM : L −→ I we denote the following mapping, 1 ⊥,0 1 1 ≤2 min(s∗ , 2 + 2 s∗ ) : 1 ωM (s) = ,0 1 1 ≥2 max(s , − s ) : ∗ ∗ 2 2 1 : 2
s∗⊥,0 >
1 2
s∗,0 < else
1 2
for all s ∈ L. The QFM FM is defined in terms of ωM according to Def. 9.24, i.e. FM = FωM . Let us first notice that the QFM FM so defined is indeed a DFS. Theorem 9.38. FM is a standard DFS. Let us remark that FM is indeed a ‘genuine’ Fω -DFS: Theorem 9.39. FM is not an Fξ -DFS, i.e. there exists no ξ : T −→ I with FM = Fξ . In particular, this proves that the Fω -DFSes are really more general than Fξ -DFSes, i.e. the Fξ -DFSes form a proper subclass of the Fω -DFSes. Definition 9.40. By ωP : L −→ I we denote the mapping defined by 1 2 min(s,∗ , 1 + 1 s≤ : s∗⊥,0 > 1 2 2 ∗ ) 1 ≥ ωP (s) = max(s1⊥,∗ , 12 − 12 s∗ 2 ) : s∗,0 < 1 : else 2
1 2 1 2
for all s ∈ L. We define the QFM FP in terms of ωP according to Def. 9.24, i.e. FP = FωP . Theorem 9.41. FP is a standard DFS.
248
9 The Full Class of Models Defined in Terms of Three-Valued Cuts
Let us also observe that FP is a genuine Fω -DFS. Theorem 9.42. FP is not an Fξ -DFS, i.e. there exists no ξ : T −→ I such that FP = Fξ . It is possible to obtain an even more specific DFS by slightly changing the definition of FP . Definition 9.43. By ωZ : L −→ I we denote the mapping defined 1 ≤ ,∗ min(s1 , 12 + 12 s∗ 2 ) : 1 ωZ (s) = ⊥,∗ 1 1 ≥2 max(s , − s ) : ∗ 1 2 2 1 : 2
by s‡
−1
(0) ⊆ [ 12 , 1]
−1
s‡ (0) ⊆ [0, 12 ] else
for all s ∈ L. We define the QFM FZ in terms of ωZ according to Def. 9.24, i.e. FZ = FωZ . Note 9.44. It has been shown in [58, p. 43+] that ωZ is well-defined. Theorem 9.45. FZ is a standard DFS. Again, it is easily shown that FZ is a genuine Fω -DFS. Theorem 9.46. FZ is not an Fξ -DFS, i.e. there exists no ξ : T −→ I such that FZ = Fξ . Definition 9.47. By ωK : L −→ I we denote the mapping defined by ⊥,0 1 ⊥,0 1 min(s∗ , 2 + 2 s(0)) : s∗ > ωK (s) = max(s∗,0 , 12 − 12 s(1)) : s∗,0 < 1 : else 2
1 2 1 2
for all s ∈ L. We define the QFM FK in terms of ωK according to Def. 9.24, i.e. FK = FωK . Theorem 9.48. FK is a standard DFS. Again, it can be asserted that FK is a genuine Fω -DFS. Theorem 9.49. FK is not an Fξ -DFS, i.e. there exists no ξ : T −→ I with FK = Fξ .
9.8 Properties of FΩ -Models
249
9.8 Properties of FΩ -Models Now that the defining conditions of FΩ -DFSes and Fω -DFS have been established and examples of the new classes of models have been given, we will turn to additional properties like propagation of fuzziness. Usually the corresponding conditions will be stated both for the representation in terms of FΩ and in terms of Fω . This provides maximum flexibility in later proofs whether a model at hand does or does not possess these properties. Definition 9.50. For all S, S ∈ K, we say that S is fuzzier (less crisp) than S , in symbols: S c S , if and only if the following conditions are satisfied for all γ ∈ I. for all z ∈ S (γ), there exists z ∈ S(γ) such that z c z ; for all z ∈ S(γ), there exists z ∈ S (γ) such that z c z .
(9.24) (9.25)
Definition 9.51. Let Ω : K −→ I be given. We say that Ω propagates fuzziness if and only if Ω(S) c Ω(S ) whenever S, S ∈ K satisfy S c S . Theorem 9.52. For all Ω : K −→ I, FΩ propagates fuzziness in quantifiers if and only if Ω propagates fuzziness. The following condition permits a simplified check if a given Ω propagates fuzziness. Theorem 9.53. Suppose Ω : K −→ I satisfies (Ω-1)–(Ω-5). Then Ω propagates fuzziness if and only if Ω(S) = Ω(S ‡ ∩ [ 12 , 1]) , for all S ∈ K with S(0) ⊆ [ 12 , 1]. Definition 9.54. Let S, S ∈ K be given. We say that S is less specific than S , in symbols S S , if and only if S(γ) ⊇ S (γ) for all γ ∈ I.
250
9 The Full Class of Models Defined in Terms of Three-Valued Cuts
Definition 9.55. Let Ω : K −→ I be given. We say that Ω propagates unspecificity if and only if Ω(S) c Ω(S ) for every choice of S, S ∈ K with S S . Theorem 9.56. For all Ω : K −→ I, FΩ propagates fuzziness in arguments if and only if Ω propagates unspecificity. The above criterion for Ω propagating unspecificity can be simplified as follows. Theorem 9.57. Suppose Ω : K −→ I satisfies (Ω-1), (Ω-2), (Ω-4) and (Ω-5). Then the following conditions are equivalent: a. Ω propagates unspecificity; b. for all s ∈ K with S(0) ⊆ [ 12 , 1], it holds that Ω(S) = Ω(S ), where S ∈ K is defined by [z∗ , 1] : z∗ ∈ S(γ) S (γ) = (9.26) / S(γ) (z∗ , 1] : z∗ ∈ for all γ ∈ I, and where z∗ = z∗ (γ) abbreviates z∗ = inf S(γ) .
(9.27)
Definition 9.58. For all s, s ∈ L, we say that s is fuzzier (less crisp) than s , in symbols sc s , if and only if for all z ∈ I, there exists z ∈ I with z c z and s (z ) ≤ s(z); for all z ∈ I, there exists z ∈ I with z c z and s(z) ≤ s (z ).
(9.28) (9.29)
Definition 9.59. A mapping ω : L −→ I is said to propagate fuzziness if and only if ω(s) c ω(s ) for all choices of s, s ∈ L with s c s . Theorem 9.60. Suppose ω : L −→ I is ‡ -invariant, i.e. ω(s‡ ) = ω(s) for all s ∈ L. Then Fω propagates fuzziness in quantifiers if and only if ω propagates fuzziness. If ω is well-behaved, then we can further simplify the condition that must be tested for establishing or rejecting that ω propagate fuzziness.
9.8 Properties of FΩ -Models
251
Theorem 9.61. Suppose that ω : L −→ I satisfies (ω-1)–(ω-4). ω propagates fuzziness if and only if for all s ∈ L with s−1 (0) ∩ [ 12 , 1] = ∅, it holds that ω(s) = ω(s ), where s‡ (z) : z ≥ 12 s (z) = (9.30) 1 : z < 12 for all z ∈ I. Definition 9.62. A mapping ω : L −→ I is said to propagate unspecificity if and only if ω(s) c ω(s ) whenever s, s ∈ L satisfy s ≤ s . Theorem 9.63. Let ω : L −→ I be a given mapping. Then Fω propagates fuzziness in arguments if and only if ω propagates unspecificity. Again, it is possible to simplify the condition imposed on ω. Theorem 9.64. Suppose ω : L −→ I satisfies (ω-1), (ω-2) and (ω-4). Then the following conditions are equivalent. a. ω propagates unspecificity; b. for all s ∈ L with s−1 ∩ [ 12 , 1] = ∅, it holds that ω(s) = ω(s ), where s ∈ L is defined by s (z) = inf{s(z ) : z ≤ z}
(9.31)
for all z ∈ I. Now let us apply these criteria to the examples of Fω -models. Theorem 9.65. FM propagates fuzziness in quantifiers. Theorem 9.66. FM propagates fuzziness in arguments. Let us recall from Th. 9.39 that FM is not an Fξ -DFS, in particular not an MB -DFS. Hence the class of MB -DFSes, which propagate fuzziness in both arguments and quantifiers, does not include all standard models with this property. FM is a counterexample which demonstrates that the class of standard DFSes which propagate fuzziness both in quantifiers and arguments is genuinely broader than the class of MB -models. Theorem 9.67. FP propagates fuzziness in quantifiers.
252
9 The Full Class of Models Defined in Terms of Three-Valued Cuts
Theorem 9.68. FP does not propagate fuzziness in arguments. Theorem 9.69. FZ propagates fuzziness in quantifiers. Theorem 9.70. FZ does not propagate fuzziness in arguments. As concerns FK , we obtain the following results. Theorem 9.71. FK does not propagate fuzziness in quantifiers. Theorem 9.72. FK propagates fuzziness in arguments. Hence there are Fω -DFSes beyond Fξ -DFSes that propagate fuzziness in quantifiers, but not in arguments. In particular, the class of standard models that propagate fuzziness in quantifiers but not in arguments is genuinely broader than the class of Fξ -DFSes with this property. It will be shown later that the class of Fω -DFSes with this property is still specificity consistent and investigate its least upper specificity bound. Definition 9.73. A collection Ω ∗ of mappings Ω ∈ Ω ∗ , Ω : K −→ I is called specificity consistent if and only if for all S ∈ K, either {Ω(S) : Ω ∈ K} ⊆ [ 12 , 1] or {Ω(S) : Ω ∈ K} ⊆ [0, 12 ]. Theorem 9.74. Suppose Ω ∗ is a collection of mappings Ω ∈ Ω ∗ , Ω : K −→ I and let F = {FΩ : Ω ∈ Ω ∗ } be the corresponding collection of QFMs. Then F is specificity consistent if and only if Ω ∗ is specificity consistent. Theorem 9.75. Suppose Ω ∗ is a collection of mappings Ω ∈ Ω ∗ , Ω : K −→ I which satisfy (Ω-5), and let F = {FΩ : Ω ∈ Ω ∗ } be the corresponding collection of DFSes. Further suppose that every Ω ∈ Ω ∗ has the additional property that Ω(S) = 12 for all S ∈ K with S(0)∩[ 12 , 1] = ∅ and S(0)∩ [0, 12 ] = ∅. Then F is specificity consistent. Definition 9.76. We say that Ω : K −→ I is fuzzier (less crisp) than Ω : K −→ I, in symbols: Ω c Ω , if and only if Ω(S) c Ω (S) for all S ∈ K. Theorem 9.77. Let Ω, Ω : K −→ I be given mappings and let FΩ , FΩ be the corresponding QFMs defined by Def. 9.7. Then FΩ c FΩ if and only if Ω c Ω .
9.8 Properties of FΩ -Models
253
This criterion for comparing specificity can be further simplified in the frequent case that some basic assumptions can be made on Ω, Ω . Theorem 9.78. Let Ω, Ω : K −→ I be given mappings which satisfy (Ω-2) and (Ω-5). Further suppose that Ω(S) = 12 = Ω (S) whenever S ∈ K has S(0) ∩ [ 12 , 1] = ∅ and S(0) ∩ [0, 12 ] = ∅. Then Ω c Ω if and only if Ω(S) ≤ Ω (S) for all S ∈ K with S(0) ⊆ [ 12 , 1]. Similar criteria can be established in the case of mappings ω : L −→ I. Definition 9.79. A collection ω ∗ of mappings ω ∈ ω ∗ , ω : L −→ I is called specificity consistent if and only if for all s ∈ L, either {ω(s) : ω ∈ L} ⊆ [ 12 , 1] or {ω(s) : ω ∈ L} ⊆ [0, 12 ]. Theorem 9.80. Suppose ω ∗ is a collection of mappings ω ∈ ω ∗ , ω : L −→ I, and let F = {Fω : ω ∈ ω ∗ } be the corresponding collection of QFMs. Then F is specificity consistent if and only if ω ∗ is specificity consistent. Theorem 9.81. Suppose ω ∗ is a collection of mappings ω ∈ ω ∗ , ω : L −→ I which satisfy (ω-1)–(ω-4), and let F = {Fω : ω ∈ ω ∗ } be the corresponding collection of DFSes. Further suppose that every ω ∈ ω ∗ has the additional property that ω(s) = 12 for all s ∈ L with s−1 (0) ∩ [ 12 , 1] = ∅ and s−1 (0) ∩ [0, 12 ] = ∅. Then F is specificity consistent. The following theorems show that the above property is possessed both by Fω -DFSes that propagate fuzziness in quantifiers and by those that propagate fuzziness in arguments: Theorem 9.82. Let ω : L −→ I be a given mapping which satisfies (ω-1)–(ω-4) and suppose that the corresponding DFS Fω propagates fuzziness in quantifiers. Then ω(s) = 12 for all s ∈ L with s−1 (0) ∩ [ 12 , 1] = ∅ and s−1 (0) ∩ [0, 12 ] = ∅. In particular, Theorem 9.83. The collection of Fω -DFSes that propagate fuzziness in quantifiers is specificity consistent. Theorem 9.84. Let ω : L −→ I be a given mapping which satisfies (ω-1)–(ω-4) and suppose that the corresponding DFS Fω propagates fuzziness in arguments. Then ω(s) = 12 for all s ∈ L with s−1 (0) ∩ [ 12 , 1] = ∅ and s−1 (0) ∩ [0, 12 ] = ∅.
254
9 The Full Class of Models Defined in Terms of Three-Valued Cuts
Therefore Theorem 9.85. The collection of Fω -DFSes that propagate fuzziness in arguments is specificity consistent. Definition 9.86. We say that ω : L −→ I is fuzzier (less crisp) than ω : L −→ I, in symbols: ω c ω , if and only if ω(s) c ω (s) for all s ∈ L. Theorem 9.87. Let ω, ω : L −→ I be given mappings and let Fω , Fω be the corresponding QFMs defined by Def. 9.24. Then Fω c Fω if and only if ω c ω . Again, it is possible to simplify the condition in typical situations. Theorem 9.88. Let ω, ω : L −→ I be given mappings which satisfy (ω-2) and (ω-4). Further suppose that ω(s) = 12 = ω (s) whenever s ∈ L satisfies s−1 (0) ∩ [ 12 , 1] = ∅ and s−1 (0) ∩ [0, 12 ] = ∅. Then ω c ω if and only if ω(s) ≤ ω (s) for all s ∈ L −1 with s‡ (0) ⊆ [ 12 , 1]. The precondition of the theorem is e.g. satisfied by the models that propagate fuzziness. Based on this simplified criterion, it is now easy to prove the following results concerning specificity bounds. Theorem 9.89. FZ is the most specific Fω -DFS that propagates fuzziness in quantifiers. Theorem 9.90. FK is the most specific Fω -DFS that propagates fuzziness in arguments. Theorem 9.91. FM is the most specific Fω -DFS that propagates fuzziness both in quantifiers and arguments. As concerns the issue of identifying the least specific model, we obtain the following result which confirms the special role of MU . Theorem 9.92. MU is the least specific Fω -DFS. Finally let us consider continuity properties of FΩ -models. This investigation will be useful to relate the new class of DFSes to its subclass of Fξ -models. To this end, we introduce the following operation . Definition 9.93. For all S ∈ K, S ∈ K is defined by S (γ) = [inf S(γ), sup S(γ)] for all γ ∈ I.
9.9 Chapter Summary
255
Note 9.94. It is apparent from Def. 9.3 that indeed S ∈ K. Theorem 9.95. For all Ω : K −→ I, FΩ is an Fξ -QFM if and only if Ω is Ω(S) = Ω(S ) for all S ∈ K.
-invariant, i.e.
Utilizing this relationship, the following theorem is straightforward. Theorem 9.96. Let Ω : K −→ I be an ‡ -invariant mapping. If FΩ is Q-continuous, then it is an Fξ -QFM, i.e. there exists ξ : T −→ I with FΩ = Fξ . In particular, the theorem is applicable to all FΩ -models. Hence all FΩ -models that are interesting from a practical perspective are already contained in the class of Fξ -QFMs.
9.9 Chapter Summary Summarizing, this chapter was concerned with the construction of a more general type of models. In order to ensure that the new models subsume the known Fξ -DFSes, which form the broadest class of standard models developed in the previous chapters, it was considered best to start from the underlying mechanism that was used to define ξ and to pursue another generalization. We hence observed that the mappings Q,X1 ,...,Xn and ⊥Q,X1 ,...,Xn used to define Fξ can be further decomposed into a subsequent application of the threevalued cut mechanism (which generates a supervaluation set of alternative interpretations for each cut level) followed by an aggregation step based on the infimum or supremum. In order to capture the full class of standard models that depend on three-valued cuts, it was straightforward to abstract from the sup/inf-based aggregation step. We therefore started considered those models that can be defined in terms of the ‘raw’ information obtained at the cut levels, i.e. in terms of the result sets SQ,X1 ,...,Xn (γ) which represent the ambiguity range of all possible interpretations of Q given the three-valued cuts of X1 , . . . , Xn at the cut levels γ. This generalization yields a new class of models genuinely broader than Fξ -DFSes, the full class of models definable in terms of three-valued cuts. In developing the theory of these models, we first identified the precise range of possible mappings S = SQ,X1 ,...,Xn that can result from a choice of quantifier Q and fuzzy arguments X1 , . . . , Xn . The resulting set K provides the proper domain to define aggregation operators Ω : K −→ I from which QFMs can then be constructed in the apparent way, i.e. FΩ (Q)(X1 , . . . , Xn ) = Ω(SQ,X1 ,...,Xn ). After introducing FΩ -QFMs, we made explicit the precise conditions on Ω that make FΩ a DFS. In particular, the class of FΩ -DFSes was characterized in terms of a set of necessary and sufficient conditions on Ω, and it was shown
256
9 The Full Class of Models Defined in Terms of Three-Valued Cuts
that these conditions are independent. This analysis also reveals that all FΩ models are in fact standard DFSes. We then focused on an apparent subclass of FΩ -QFMs, the class of Fω QFMs. These are obtained by defining coefficients sQ,X1 ,...,Xn (z) = inf{γ : z ∈ SQ,X1 ,...,Xn (γ)} which extract an important characteristic of the result sets SQ,X1 ,...,Xn (γ). This construction offers the advantage that we no longer need to work with sets of results, as it was the case with the SQ,X1 ,...,Xn (γ)’s, which are subsets of the unit interval. By contrast, we can now focus on scalars sQ,X1 ,...,Xn in the unit range, and a subsequent aggregation by applying the chosen ω : L −→ I. Among other things, this greatly simplifies the definition of models, and therefore all examples of FΩ -DFSes were presented in this succinct format. Noticing that the new coefficients sQ,X1 ,...,Xn are functions of SQ,X1 ,...,Xn which suppress some of the original information, the question then arises if some of the FΩ -models are lost under the new construction. To resolve this issue and gain some insight into the structure of Fω -DFSes, a set of independent conditions that precisely describe the Fω -DFSes in terms of necessary and sufficient criteria on ω was developed. In addition, the Fω -QFMs were related to their superclass of FΩ -QFMs. This analysis revealed that the move from FΩ -QFMs to Fω -QFMs does not result in any loss of intended models, i.e. the classes of FΩ -DFSes and Fω -DFSes coincide. Turning to examples of FΩ -DFSes (or synonymously, Fω -DFSes), the simplified format was utilized to define the four Fω -models FM , FP , FZ and FK , all of which were shown to be ‘genuine’ members which go beyond the class of Fξ -QFMs. We then formulated conditions on Ω and ω which permit an investigation of all characteristic properties of the corresponding models. To this end, we first extended the specificity order to the case of Sc S and s c s . This allowed us to reduce FΩ ’s propagating fuzziness in quantifiers to the requirement that Ω propagate fuzziness, i.e. S c S entails Ω(S)c Ω(S ). Based on a different relation S S defined on K, it was further possible to define a condition of propagating unspecificity on Ω, and to prove that FΩ propagates fuzziness in arguments if and only if Ω propagates unspecificity. In addition, it was shown that both the condition of propagating fuzziness and the condition of propagating unspecificity can be simplified if the considered Ω is well-behaved (in particular if FΩ is a DFS). In this common case, a very elementary test on Ω is sufficient for detecting or rejecting these properties. All of these results have also been transferred to Fω -QFMs, and thus turned into corresponding conditions on ω. After developing the formal apparatus required to investigate propagation of fuzziness in FΩ - and Fω -QFMs, the issue of most specific and least specific models was discussed to some depth. Acknowledging its relevance to the existence of most specific models, we first extended the notion of specificity consistency to collections Ω ∗ of aggregation mappings Ω and proved that the resulting criterion on Ω ∗ precisely captures specificity consistency of the class of QFMs F = {FΩ : Ω ∈ Ω ∗ }. Hence the question whether F has a least upper specificity bound can be decided
9.9 Chapter Summary
257
by looking at the aggregation mappings in Ω ∗ . It was also shown how the criterion can be simplified in common situations. Following this, the question was addressed how a specificity comparison FΩ c FΩ can be reformulated as a condition Ω c Ω imposed on the aggregation mappings. Again, the condition for Ω c Ω can be reduced to a very simple check in many typical situations. All of the above concepts and theorems were then adapted to Fω QFMs, in order to provide similar support for specificity comparison in those cases where the models of interest are defined in terms of an aggregation mapping ω. Based on these preparations, it was easy to prove some results concerning propagation of fuzziness that elucidate the structure of the class of Fω -DFSes, and that relate the examples of Fω -DFSes to the class as a whole. First of all, the full class of Fω -DFSes is not specificity consistent (because its subclass of Fξ -DFSes is known to violate specificity consistency), which means that a ‘most specific Fω -DFS’ does not exist. However, the class of models that propagate fuzziness in quantifiers was shown to be specificity consistent, and the most specific Fω -DFS with this property was also identified, which turned out to be FZ . Recalling that FZ is not an Fξ -QFM, this demonstrates that the class of Fω -DFSes which propagate fuzziness in quantifiers is an extension proper of the class of Fξ -DFSes with the same behaviour. Turning to propagation of fuzziness in quantifiers, it was possible to prove a similar result. The corresponding class of Fω -models was shown to be specificity consistent, and FK was established to be the most specific Fω -DFS with this property. Again, we can conclude from the fact that FK is a ‘genuine’ Fω -DFS that the Fω DFSes contain models which propagate fuzziness in arguments beyond those already known from the study of Fξ -DFSes. We then investigated those standard models that propagate fuzziness both in quantifiers and arguments. The model FM was shown to be the most specific Fω -DFS with these properties. The class of Fξ -DFSes that fulfill both conditions is known to coincide with the class of MB -DFSes. Because FM is not an Fξ -DFS, this proves that there are standard models beyond the MB -type which propagate fuzziness both in quantifiers and arguments. The problem of identifying the greatest lower specificity bound has also been addressed. In fact, the least specific Fω -DFS was proven to coincide with one of the MB -models, namely MU , which was already known to be the least specific MB - and Fξ -DFS. Finally, we also addressed the continuity issue. It is indispensible for applications that the chosen QFM be robust against slight variations in the chosen quantifier and in its arguments, which might e.g. result from noise. In addition, both continuity conditions are desirable in order to account for imperfect knowledge of the precise interpretation of the involved NL quantifier and NL concepts in terms of numeric membership grades. Based on an auxiliary filling construction S , it was shown that every FΩ -QFM which is continuous in quantifiers is in fact an Fξ -QFM. The class of Fω -DFSes which are Q-continuous therefore collapses into the class of Q-continuous Fξ -DFSes,
and those Q-continuous Fω -DFSes which propagate fuzziness in quantifiers and arguments collapse into the class of MB -DFSes. This proves that all practical models are already contained in the class of Fξ -DFSes, and those practical models which propagate fuzziness both in quantifiers and arguments are contained in the class of MB -DFSes. This justifies the development and thorough analysis of these simpler classes in the previous chapters, because every model of practical interest will belong to one of them. It can thus be expressed through constructions simpler than those used to define FΩ - and Fω -QFMs, which in turn permit a simpler check of the relevant formal properties, like being a DFS, propagation of fuzziness, specificity comparisons, and continuity, and which suggest simple algorithms for implementing quantifiers in the model.
10 The Class of Models Based on the Extension Principle
10.1 Motivation and Chapter Overview
In this chapter, an attempt is made to establish a new class of fuzzification mechanisms not constructed from three-valued cuts. Starting from a straightforward definition of argument similarity, we will first introduce the class of QFMs defined in terms of the similarity measure, the class of Fψ-QFMs. It contains the interesting subclass of Fϕ-QFMs, i.e. the class of models defined in terms of the standard extension principle (which serves to aggregate similarity grades). We will then formalize the necessary and sufficient conditions on the aggregation mappings which make the fuzzification mechanism a DFS. This analysis will demonstrate that the classes of Fψ- and Fϕ-DFSes are identical. Moreover, the classes of FΩ-DFSes and Fψ-DFSes/Fϕ-DFSes coincide as well, which is the main result of this chapter. Because the same class of models is obtained from independent considerations, this provides evidence that it indeed represents a natural class of standard models of fuzzy quantification.
10.2 A Similarity Measure on Fuzzy Arguments
To begin with, the similarity grade ΞY1,...,Yn(X1, . . . , Xn) of the fuzzy arguments (X1, . . . , Xn) to a choice of crisp arguments (Y1, . . . , Yn) can be defined as follows.
Definition 10.1. Let E ≠ ∅ be some base set and Y ∈ P(E). The mapping ΞY : P̃(E) −→ I is defined by
ΞY(X) = min(inf{µX(e) : e ∈ Y}, inf{1 − µX(e) : e ∉ Y})
for all X ∈ P̃(E). For n-tuples of arguments Y1, . . . , Yn ∈ P(E), we define Ξ(n)Y1,...,Yn : P̃(E)ⁿ −→ I by
Ξ(n)Y1,...,Yn(X1, . . . , Xn) = ΞY1(X1) ∧ · · · ∧ ΞYn(Xn)
for all X1, . . . , Xn ∈ P̃(E). Whenever n is clear from the context, we shall omit the superscript and write ΞY1,...,Yn(X1, . . . , Xn) rather than Ξ(n)Y1,...,Yn(X1, . . . , Xn). At times, it will be convenient to use the following abbreviation. Let us recall the fuzzy equivalence connective ↔ : I × I −→ I, defined by x ↔ y = (x ∧ y) ∨ (¬x ∧ ¬y) for all x, y ∈ I. In the case that y ∈ {0, 1}, this apparently becomes
x ↔ y = { x : y = 1 ;  ¬x : y = 0 } .
Now consider a base set E ≠ ∅ and let X ∈ P̃(E), Y ∈ P(E). Making use of the ↔-connective we can define δX,Y : E −→ I by
δX,Y(e) = (µX(e) ↔ χY(e)) = { µX(e) : e ∈ Y ;  1 − µX(e) : e ∉ Y }   (10.1)
for all e ∈ E. In terms of δX,Y, we can now conveniently reformulate ΞY(X). In particular, we can express ΞY1,...,Yn(X1, . . . , Xn), where X1, . . . , Xn ∈ P̃(E) and Y1, . . . , Yn ∈ P(E), by
ΞY1,...,Yn(X1, . . . , Xn) = inf{δXi,Yi(e) : e ∈ E, i = 1, . . . , n} .   (10.2)
In order to illustrate the proposed definition of argument similarity, let us consider an example, which also profits from the more succinct alternative notation. Hence let a two-element base set E = {a, b} be given, and suppose that X ∈ P̃(E) is defined by µX(a) = 1/3, µX(b) = 3/4. Then
Ξ∅(X) = min{δX,∅(a), δX,∅(b)} = min{1 − µX(a), 1 − µX(b)} = min{2/3, 1/4} = 1/4
Ξ{a}(X) = min{δX,{a}(a), δX,{a}(b)} = min{µX(a), 1 − µX(b)} = min{1/3, 1/4} = 1/4
Ξ{b}(X) = min{δX,{b}(a), δX,{b}(b)} = min{1 − µX(a), µX(b)} = min{2/3, 3/4} = 2/3
Ξ{a,b}(X) = min{δX,{a,b}(a), δX,{a,b}(b)} = min{µX(a), µX(b)} = min{1/3, 3/4} = 1/3 .
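These similarity grades are easy to compute mechanically. The following Python sketch (illustrative code with assumed names, not part of the original text) implements ΞY(X) for finite base sets and reproduces the four values of the example.

```python
def xi(Y, X):
    """Similarity grade Xi_Y(X) of a fuzzy set X (dict e -> mu_X(e))
    to a crisp set Y, per Definition 10.1:
    infimum of mu_X(e) over e in Y and of 1 - mu_X(e) over e outside Y."""
    return min(
        [X[e] for e in X if e in Y] +          # elements that should be in Y
        [1.0 - X[e] for e in X if e not in Y], # elements that should be outside Y
        default=1.0,                           # empty base set: infimum is 1
    )

X = {'a': 1/3, 'b': 3/4}
for Y in [set(), {'a'}, {'b'}, {'a', 'b'}]:
    print(sorted(Y), xi(Y, X))   # 1/4, 1/4, 2/3, 1/3 as in the text
```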
Next we define the set of all compatibility grades which corresponds to a given choice of fuzzy arguments X1, . . . , Xn.
Definition 10.2. Let E ≠ ∅ be a given base set and X1, . . . , Xn ∈ P̃(E), n ≥ 0. Then D(n)X1,...,Xn ∈ P(I) is defined by
D(n)X1,...,Xn = {ΞY1,...,Yn(X1, . . . , Xn) : Y1, . . . , Yn ∈ P(E)} .
Whenever this causes no ambiguity, the superscript (n) will be omitted, thus abbreviating DX1,...,Xn = D(n)X1,...,Xn.
Note 10.3. The superscript is only needed to discern D(0)∅ (which corresponds to the empty tuple and thus, no arguments) from D(1)∅ (which corresponds to the empty set supplied as a single argument).
Definition 10.4. By D ∈ P(P(I)) we denote the set of all D ∈ P(I) with the following properties:
1. D ∩ [1/2, 1] = {r+} for some r+ = r+(D) ∈ [1/2, 1];
2. for all D′ ⊆ D with D′ ≠ ∅, inf D′ ∈ D;
3. if r+ > 1/2, then sup D \ {r+} = 1 − r+.
Theorem 10.5. Suppose E ≠ ∅ is some base set and X1, . . . , Xn ∈ P̃(E) are fuzzy subsets of E. Then DX1,...,Xn ∈ D.
Hence D is large enough to contain all DX1,...,Xn. As we shall see later in Th. 10.11, D is indeed the smallest possible subset of P(I) which contains every DX1,...,Xn. (The theorem has been delayed because it then becomes a corollary).
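Continuing the example, the set DX of Definition 10.2 can be obtained for finite E by enumerating all crisp subsets. The sketch below (again illustrative) does this for the X used above; the resulting set {1/4, 1/3, 2/3} indeed satisfies the conditions of Definition 10.4.

```python
from itertools import combinations

def xi(Y, X):
    # Xi_Y(X) from Definition 10.1 (see the previous listing)
    return min([X[e] for e in X if e in Y] +
               [1.0 - X[e] for e in X if e not in Y], default=1.0)

def D(X):
    """D_X of Definition 10.2: all similarity grades Xi_Y(X), with Y ranging
    over the crisp subsets of the (finite) base set underlying X."""
    E = list(X)
    subsets = [set(c) for r in range(len(E) + 1) for c in combinations(E, r)]
    return {xi(Y, X) for Y in subsets}

print(D({'a': 1/3, 'b': 3/4}))   # {0.25, 1/3, 2/3}: an element of the class D
```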
10.3 The Unrestricted Class of Fψ -QFMs In order to define the new class of fuzzification mechanisms, we will now relate the similarity information expressed by ΞY1 ,...,Yn (X1 , . . . , Xn ) to the behaviour of a quantifier on its arguments.
Definition 10.6. Let Q : P(E)ⁿ −→ I be a given semi-fuzzy quantifier and X1, . . . , Xn ∈ P̃(E). Then A(n)Q,X1,...,Xn : I −→ P(I) is defined by
A(n)Q,X1,...,Xn(z) = {ΞY1,...,Yn(X1, . . . , Xn) : (Y1, . . . , Yn) ∈ Q−1(z)}
for all z ∈ I. When n is clear from the context, we will omit the superscript (n), thus abbreviating AQ,X1,...,Xn = A(n)Q,X1,...,Xn.
Note 10.7. Again, the superscript is only needed to eliminate the ambiguity between A(0)Q,∅, where Q is a nullary quantifier and ∅ the empty tuple, and A(1)Q,∅, where Q is a one-place quantifier and ∅ is the empty argument set.
Next let us describe the range of all possible AQ,X1,...,Xn.
Definition 10.8. By A we denote the set of all mappings A : I −→ P(I) with the following properties:
a. ∪{A(z) : z ∈ I} ∈ D;
b. for all z, z′ ∈ I, sup A(z) > 1/2 and sup A(z′) > 1/2 entails that z = z′.
In the following, D(A) denotes the set
D(A) = ∪{A(z) : z ∈ I} .   (10.3)
In addition, r+ abbreviates r+(A) = r+(D(A)). It is then apparent from Def. 10.8.a and Def. 10.4 that there exists z+ = z+(A) ∈ I with
r+ ∈ A(z+) .   (10.4)
In the following, z+ is assumed to be an arbitrary but fixed choice of z+ ∈ I which satisfies (10.4) for a considered A ∈ A.
Theorem 10.9. Suppose Q : P(E)ⁿ −→ I is a semi-fuzzy quantifier and X1, . . . , Xn ∈ P̃(E). Then AQ,X1,...,Xn ∈ A.
Hence A contains every AQ,X1,...,Xn.
Theorem 10.10. Let A ∈ A be given and D(A) = ∪{A(z) : z ∈ I}.
a. If D(A) = {1}, then A = A(0)Q,∅, where Q : P({∗})⁰ −→ I is the constant quantifier Q(∅) = z+.
b. If D(A) ≠ {1}, then we can choose some mapping ζ : D(A) −→ I with
r ∈ A(ζ(r))   (10.5)
for all r ∈ D(A). If r+ = r+(A) equals 1/2, then r+ ∈ D(A) ∩ [0, 1 − r+]. If r+ > 1/2, then we recall from Def. 10.8 that sup D(A) \ {r+} = 1 − r+. Because D(A) ≠ {1} by assumption, we know that there exists
r− ∈ D(A) ∩ [0, 1 − r+]   (10.6)
and we shall assume an arbitrary choice of r− with this property. Based on r−, we define X ∈ P̃(I × I) by
µX(z, r) = { r : r ∈ A(z) \ {r+} ;  r− : r ∉ A(z) ∨ r = r+ > 1/2 ;  1/2 : r = r+ = 1/2 }   (10.7)
for all z, r ∈ I. For all Y ∈ P(I × I), we abbreviate
r′ = r′(Y) = ΞY(X)   (10.8)
z′ = z′(Y) = inf{z ∈ I : (z, r′) ∈ Y and r′ = r′(Y) ∈ A(z)} .   (10.9)
Based on ζ, we define Q : P(I × I) −→ I by
Q(Y) = { z′ : r′ ∈ A(z′) ;  ζ(r′) : r′ ∉ A(z′) }   (10.10)
for all Y ∈ P(I × I). Then A = AQ,X.
We also obtain the following corollary concerning D:
Theorem 10.11. For all D ∈ D,
a. If D = {1}, then D = D(0)∅, where ∅ is the empty tuple ∅ ∈ P({∗})⁰.
b. If D ≠ {1}, then there exists X ∈ P̃(I × I) such that D = DX.
Hence D is indeed the smallest subset of P(I) which contains every DX1,...,Xn.
In order to carry out the desired aggregation, which will turn the compatibility grades into a fuzzification mechanism, we will use aggregation mappings ψ : A −→ I. These can be used to define a QFM in the apparent way, by composing with the AQ,X1,...,Xn's:
Definition 10.12. Let ψ : A −→ I be given. The QFM Fψ is defined by
Fψ(Q)(X1, . . . , Xn) = ψ(AQ,X1,...,Xn)
for all semi-fuzzy quantifiers Q : P(E)ⁿ −→ I and all X1, . . . , Xn ∈ P̃(E).
This definition spans the full class of QFMs definable in terms of argument similarity, and we will now investigate its well-behaved models, i.e. the class of Fψ -DFSes.
10.4 Characterisation of the Fψ-Models
Let us now tackle the goal of characterising the class of Fψ-DFSes, by making explicit the structure of all plausible models. To this end, we need some more notation, for expressing the properties expected from legal choices of ψ. As usual, the goal is that of characterising the new class of models in terms of the necessary and sufficient conditions on the aggregation mapping. In order to describe the desired monotonicity properties, we first define a suitable preorder on A.
Definition 10.13. For all A, A′ ∈ A, we say that A ⪯ A′ if and only if the following conditions are satisfied by A, A′.
a. for all z ∈ I and all r ∈ A(z), there exists z′ ≥ z with r ∈ A′(z′);
b. for all z′ ∈ I and all r ∈ A′(z′), there exists z ≤ z′ with r ∈ A(z).
Next we introduce a ‘cut/fill operator’ on A. The invariance of ψ with respect to this operator turns out to be essential for ψ to satisfy (Z-4).
Definition 10.14. For all A ∈ A, Ǎ ∈ A is defined by
Ǎ(z) = [0, Ā(z)] ,   (10.11)
where
Ā(z) = min(sup A(z), 1/2)   (10.12)
for all z ∈ I.
Notes 10.15.
• It is immediate from the definition of Ǎ that Ǎ ∈ A, see Def. 10.8.
• For every semi-fuzzy quantifier Q : P(E)ⁿ −→ I and all X1, . . . , Xn ∈ P̃(E), we abbreviate
ĀQ,X1,...,Xn = Ā for A = AQ,X1,...,Xn   (10.13)
ǍQ,X1,...,Xn = Ǎ for A = AQ,X1,...,Xn .   (10.14)
• In the above definition, Ā has been used to express Ǎ. In fact, both are definable in terms of each other, because conversely
Ā(z) = sup Ǎ(z) ,   (10.15)
for all A ∈ A and z ∈ I.
The cut/fill operator is of special relevance to the characterisation of Fψ-DFSes because invariance under it ensures that (Z-6) be valid. In order to define the conditions on ψ succinctly, it is useful to stipulate the following abbreviation. For all A ∈ A,
NV(A) = {z ∈ I : A(z) ≠ ∅}   (10.16)
We have now introduced all notation required to express the conditions on admissible choices of ψ.
Definition 10.16. Let ψ : A −→ I be given. The conditions (ψ-1)–(ψ-5) are defined as follows. For all A, A′ ∈ A,
If D(A) = {1}, then ψ(A) = z+.   (ψ-1)
If A(z) = A′(1 − z) for all z ∈ I, then ψ(A) = 1 − ψ(A′).   (ψ-2)
If NV(A) ⊆ {0, 1} and r+ ∈ A(1), then ψ(A) = 1 − sup A(0).   (ψ-3)
If A ⪯ A′, then ψ(A) ≤ ψ(A′).   (ψ-4)
ψ(A) = ψ(Ǎ).   (ψ-5)
Let us first notice that the above set of conditions indeed captures the essential requirements on ψ which ensure that Fψ be a DFS. This is straightforward from the following observation on the behaviour of ψ for two-valued quantifiers.
Theorem 10.17. If ψ : A −→ I satisfies (ψ-2) and (ψ-3), then Fψ coincides with all standard DFSes on two-valued quantifiers, i.e. for every standard DFS F and two-valued quantifier Q : P(E)ⁿ −→ {0, 1}, it holds that Fψ(Q) = F(Q).
Based on this theorem, it is then easy to prove the following result.
Theorem 10.18. The conditions (ψ-1)–(ψ-5) on ψ : A −→ I are sufficient for Fψ to be a standard DFS.
The converse claim is also true, i.e. it can be shown that the underlying mapping ψ : A −→ I satisfies (ψ-1)–(ψ-5) whenever Fψ is a DFS.
Theorem 10.19. The conditions (ψ-1)–(ψ-5) on ψ : A −→ I are necessary for Fψ to be a DFS.
In addition, the system of conditions (ψ-1)–(ψ-5) is known to be minimal in the sense that none of the conditions can be expressed in terms of the remaining ones: Theorem 10.20. The conditions (ψ-1)–(ψ-5) are independent. This condition warrants that there is no redundant effort in proofs that a considered ψ : A −→ I makes Fψ a standard DFS.
10.5 The Classes of Fψ-Models and Fω-Models Coincide
Let us now establish the central result of this chapter, that the new class of Fψ-DFSes coincides with the full class of models definable in terms of three-valued cuts, i.e. the class of FΩ- or synonymously, Fω-DFSes. It is here that we need the construction of Fω-QFMs, which provides the link between the models defined in terms of three-valued cuts and those defined in terms of argument similarity. To this end, let us now see how AQ,X1,...,Xn relates to the coefficient sQ,X1,...,Xn that was used to define Fω-QFMs.
Theorem 10.21. Let Q : P(E)ⁿ −→ I be a semi-fuzzy quantifier and X1, . . . , Xn ∈ P̃(E). Then for all z ∈ I,
sQ,X1,...,Xn(z) = s(AQ,X1,...,Xn)(z) ,
where s(A) ∈ L, A ∈ A, is defined by
s(A)(z) = max(0, 1 − 2 · sup A(z))   (10.17)
for all z ∈ I. As a by-product of this result, we now discover that all Fω -QFMs are in fact Fψ -QFMs. In particular, all FΩ - and Fω -DFSes constitute a subclass of the new class of Fψ -models. Theorem 10.22. Every Fω -QFM is an Fψ -QFM, i.e. for all ω : L −→ I, there exists ψ : A −→ I with Fω = Fψ . ψ is defined by ψ(A) = ω(s(A))
(10.18)
for all A ∈ A. As to the converse subsumption, let us recall from Th. 10.19 that every choice of ψ which makes Fψ a DFS satisfies (ψ-5). As we will now show, this entails that the unrestricted class of Fψ -QFMS, although considerably broader than the class of Fω -QFMs, does not introduce any new DFSes compared to those that already belong to the class of Fω -DFSes. To see this, let us notice the following relationship between s(A) and A.
Theorem 10.23. Let A ∈ A be given. Then
Ā(z) = 1/2 − 1/2 · s(A)(z)   (10.19)
and
s(A)(z) = 1 − 2Ā(z) ,   (10.20)
for all z ∈ I.
Based on this relationship, it is then apparent that all cut/fill-invariant Fψ-QFMs are in fact Fω-QFMs.
Theorem 10.24. Suppose that ψ : A −→ I satisfies (ψ-5). Then Fψ is an Fω-QFM, i.e. Fψ = Fω, provided we define ω : L −→ I by
ω(s) = ψ(As) ,   (10.21)
for all s ∈ L, where
As(z) = [0, 1/2 − 1/2 · s(z)]   (10.22)
for all z ∈ I. In particular, all Fψ-DFSes are Fω-DFSes.
The next theorem characterizes the precise subclass of Fψ-QFMs that can be represented as Fω-QFMs.
Theorem 10.25. The Fω-QFMs are exactly those Fψ-QFMs that depend on a mapping ψ : A −→ I which satisfies (ψ-5).
Note 10.26. Compared to the previous theorem, this demonstrates that only those Fψ-QFMs can be represented as Fω-QFMs that are defined from cut/fill-invariant choices of ψ.
In the following, let us hence assume that (ψ-5) be valid. It is then easily shown how the conditions (ω-1)–(ω-4) imposed on ω relate to the conditions (ψ-1)–(ψ-5) imposed on the corresponding ψ. These dependencies are made explicit in the next theorem.
Theorem 10.27. Let ω : L −→ I be given and suppose that ψ : A −→ I is defined by (10.18). Then the following relationships hold.
a. ω satisfies (ω-1) if and only if ψ satisfies (ψ-1);
b. ω satisfies (ω-2) if and only if ψ satisfies (ψ-2);
c. ω satisfies (ω-3) if and only if ψ satisfies (ψ-3);
d. ω satisfies (ω-4) if and only if ψ satisfies (ψ-4);
e. ψ satisfies (ψ-5).
Note 10.28. The theorem was of invaluable help for proving Th. 10.20 (which was anticipated here for the sake of fluent presentation). It lets us reduce the independence proof of the new set of ‘ψ-conditions’ to the known theorem on the independence of the ‘ω-conditions’.
10.6 The Unrestricted Class of Fϕ-QFMs
In the following, we will discuss a slight reformulation of the aggregation mechanism which shows that the Fψ-DFSes coincide with the models defined in terms of the standard extension principle. Thus the considered class of models is theoretically significant, because it evolves from the fundamental principle that underlies fuzzy set theory. In order to define the class of those QFMs that depend on the extension principle, let us consider the following basic construction.
Definition 10.29. For all A ∈ A, we denote by fA : I −→ I the mapping defined by fA(z) = sup A(z) for all z ∈ I.
It is apparent from (10.12) that
Ā(z) = min(fA(z), 1/2) ,   (10.23)
for all A ∈ A and z ∈ I. In addition, it is obvious from Def. 10.14 that Ǎ can be defined in terms of fA, i.e. there exists g such that Ǎ = g(fA) for all A ∈ A. In turn, we conclude that every ψ which makes Fψ a DFS can be defined in terms of fA, because every such ψ is cut/fill-invariant by Th. 10.19, and thus ψ(A) = ψ(Ǎ) = ψ(g(fA)). In other words, we do not lose any models of interest if we restrict attention to those QFMs that are a function of fA. (The precise relationship between the resulting classes of models will later be described in Th. 10.36 and Th. 10.37). Let us now introduce the constructions necessary to define the new class of QFMs.
Definition 10.30. Consider a semi-fuzzy quantifier Q : P(E)ⁿ −→ I and a choice of fuzzy argument sets X1, . . . , Xn ∈ P̃(E). By fQ,X1,...,Xn = f(n)Q,X1,...,Xn : I −→ I we denote the mapping defined by
fQ,X1,...,Xn = fAQ,X1,...,Xn ,
i.e. fQ,X1,...,Xn(z) = sup AQ,X1,...,Xn(z) for all z ∈ I.
Notes 10.31.
• Again, the superscript (n) in f(n)Q,X1,...,Xn is usually omitted when no ambiguity arises.
• fQ,X1,...,Xn(z) expresses a measure of the maximal similarity of (X1, . . . , Xn) to those (Y1, . . . , Yn) ∈ P(E)ⁿ which are mapped to Q(Y1, . . . , Yn) = z.
Next we describe the range of all possible fA.
Definition 10.32. By X ∈ P(I^I) we denote the set of all mappings f : I −→ I with the following properties:
a. Im f ∩ [1/2, 1] = {r+} for some r+ = r+(f) ≥ 1/2;
b. If z+ = z+(f) ∈ I is chosen such that f(z+) = r+, then f(z) ≤ 1 − r+ for all z ≠ z+.
Theorem 10.33. For all A ∈ A, fA ∈ X. In particular, if Q : P(E)ⁿ −→ I is a semi-fuzzy quantifier and X1, . . . , Xn ∈ P̃(E), then fQ,X1,...,Xn ∈ X.
Theorem 10.34. For all f ∈ X, there exists A ∈ A with f = fA. In particular, there exist Q : P(E)ⁿ −→ I and X1, . . . , Xn ∈ P̃(E) with f = fQ,X1,...,Xn.
Hence X is indeed the range of all possible fA and fQ,X1,...,Xn. We can therefore define the class of QFMs computable from fQ,X1,...,Xn, called Fϕ-QFMs, in the apparent way.
Definition 10.35. Let ϕ : X −→ I be given. The QFM Fϕ is defined by
Fϕ(Q)(X1, . . . , Xn) = ϕ(fQ,X1,...,Xn) ,
for all semi-fuzzy quantifiers Q : P(E)ⁿ −→ I and all fuzzy arguments X1, . . . , Xn ∈ P̃(E).
The Fϕ-QFMs comprise the class of those fuzzification mechanisms which can be defined from the argument similarity grades by applying the extension principle. This is because fQ,X1,...,Xn is obtained from the standard extension principle in the following way. We start from a semi-fuzzy quantifier Q : P(E)ⁿ −→ I. By applying the extension principle, we obtain Q̂̂ : P̃(P(E)ⁿ) −→ P̃(I). Hence for a given V ∈ P̃(P(E)ⁿ), Q̂̂(V) ∈ P̃(I) is the fuzzy subset defined by
µQ̂̂(V)(z) = sup{µV(Y1, . . . , Yn) : (Y1, . . . , Yn) ∈ Q−1(z)}
for all z ∈ I, see Def. 3.23. Given a choice of fuzzy arguments X1, . . . , Xn ∈ P̃(E), we now express V = VX1,...,Xn in terms of argument similarity, viz
µV(Y1, . . . , Yn) = ΞY1,...,Yn(X1, . . . , Xn) ,
for all Y1, . . . , Yn ∈ P(E). It is then apparent from Def. 10.30 that
fQ,X1,...,Xn(z) = µQ̂̂(V)(z)
for all z ∈ I. Because V = VX1,...,Xn represents argument similarity, Q̂̂ is obtained from Q by applying the standard extension principle, fQ,X1,...,Xn is defined by composing Q̂̂ and VX1,...,Xn, and Fϕ(Q)(X1, . . . , Xn) = ϕ(fQ,X1,...,Xn) is a function of fQ,X1,...,Xn, this proves the claim that every Fϕ is defined from the argument similarity grades by applying the extension principle. Noticing that no additional assumptions were made in defining fQ,X1,...,Xn, which merely composes similarity assessment and the extended Q̂̂, this demonstrates that the Fϕ-QFMs are precisely the QFMs definable in terms of argument similarity and the standard extension principle. The Fϕ-QFMs thus constitute an interesting class of fuzzification mechanisms.
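For small finite base sets, fQ,X1,...,Xn can be computed by brute force exactly along these lines: enumerate the crisp argument tuples, weight each by its similarity to the fuzzy arguments, and take the supremum per quantification result. A minimal one-argument sketch (illustrative code, not the book's implementation) is given below; the quantifier "at least one" serves as an example.

```python
from itertools import combinations

def xi(Y, X):
    # similarity Xi_Y(X) of Definition 10.1
    return min([X[e] for e in X if e in Y] +
               [1.0 - X[e] for e in X if e not in Y], default=1.0)

def f_Q_X(Q, X):
    """f_{Q,X}(z) = sup { Xi_Y(X) : Q(Y) = z } for a one-place semi-fuzzy
    quantifier Q on the finite base set underlying X (Definition 10.30)."""
    E = list(X)
    f = {}
    for r in range(len(E) + 1):
        for c in combinations(E, r):
            Y = set(c)
            z = Q(Y)
            f[z] = max(f.get(z, 0.0), xi(Y, X))
    return f   # maps each attained result z to its maximal similarity

at_least_one = lambda Y: 1 if len(Y) >= 1 else 0
print(f_Q_X(at_least_one, {'a': 1/3, 'b': 3/4}))   # {0: 0.25, 1: 0.666...}
```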
10.7 The Classes of Fϕ -Models and Fψ -Models Coincide Before going into the details of the new class of models, and disclosing the structure of its well-behaved models, let us first make two observations, which establish the precise relationship between Fϕ -QFMs and their apparent superclass of Fψ -QFMs. Theorem 10.36. All Fϕ -QFMs are Fψ -QFMs, i.e. Fϕ = Fψ , provided that ψ : A −→ I is defined in dependence on ϕ : X −→ I by ψ(A) = ϕ(fA ) ,
(10.24)
for all A ∈ A. Conversely, one can assert the following. Theorem 10.37. Suppose that ψ : A −→ I satisfies (ψ-5). Then Fψ = Fϕ , where ϕ : X −→ I is defined by
ϕ(f) = ψ(Af) ,   (10.25)
for all f ∈ X, and
Af(z) = { [0, f(z)] : f(z) ≤ 1/2 ;  [0, 1 − f(z)] ∪ {f(z)} : f(z) > 1/2 }   (10.26)
for all z ∈ I. These relationships are straightforward from the structure of the involved base constructions, and the earlier remark on equation (10.23). Recalling from Th. 10.19 that the condition (ψ-5) is necessary for Fψ to be a DFS, this substantiates that the two classes of Fϕ -DFSes and Fψ -DFSes indeed coincide, and merely provide different views on the models definable in terms of the extension principle.
10.8 Characterisation of the Fϕ-Models
Let us now impose a number of conditions on admissible choices of ϕ. Firstly we define a preorder on X, again needed to express a monotonicity condition.
Definition 10.38. For all f, f′ ∈ X, we write f ⪯ f′ if and only if the following conditions are satisfied for f, f′.
a. for all z ∈ I, sup{f′(z′) : z′ ≥ z} ≥ f(z);
b. for all z′ ∈ I, sup{f(z) : z ≤ z′} ≥ f′(z′).
We can now state the conditions that must be obeyed by ϕ in order to make Fϕ a DFS.
Definition 10.39. Let ϕ : X −→ I be given. The conditions (ϕ-1)–(ϕ-5) are defined as follows. For all f, f′ ∈ X,
If f−1((0, 1]) = {z+} and f(z+) = 1, then ϕ(f) = z+.   (ϕ-1)
If f(z) = f′(1 − z) for all z ∈ I, then ϕ(f) = 1 − ϕ(f′).   (ϕ-2)
If f−1((0, 1]) ⊆ {0, 1} and f(1) ≥ 1/2, then ϕ(f) = 1 − f(0).   (ϕ-3)
If f ⪯ f′, then ϕ(f) ≤ ϕ(f′).   (ϕ-4)
If f′(z) = min(f(z), 1/2) for all z ∈ I, then ϕ(f) = ϕ(f′).   (ϕ-5)
The proof that these conditions describe precisely the intended class of models becomes feasible once we notice the close relationship between the ‘ϕ-conditions’ and corresponding ‘ψ-conditions’.
10 The Class of Models Based on the Extension Principle
Theorem 10.40. Let ϕ : X −→ I be given and suppose that ψ : A −→ I is defined by (10.24). Then a. ϕ b. ϕ c. ϕ d. ϕ e. ϕ
satisfies satisfies satisfies satisfies satisfies
(ϕ-1) (ϕ-2) (ϕ-3) (ϕ-4) (ϕ-5)
if if if if if
and and and and and
only only only only only
if if if if if
ψ ψ ψ ψ ψ
satisfies satisfies satisfies satisfies satisfies
(ψ-1); (ψ-2); (ψ-3); (ψ-4); (ψ-5).
The following theorems are then straightforward from the previous results on Fψ -QFMs: Theorem 10.41. If ϕ : X −→ I satisfies (ϕ-1)–(ϕ-5), then Fϕ is a standard DFS. Theorem 10.42. Consider ϕ : X −→ I. If Fϕ is a DFS, then ϕ satisfies (ϕ-1)–(ϕ-5). Theorem 10.43. The conditions (ϕ-1)–(ϕ-5) are independent.
10.9 An Alternative Measure of Argument Similarity In [53, pp. 66-78], a first attempt was made to define DFSes in terms of the extension principle. The construction of these models was motivated by the fuzzification mechanism which Gaines [49] proposed as a ‘foundation of fuzzy reasoning’. This basic mechanism was then fitted to the purpose of defining DFSes. Because the resulting approach also relies on the extension principle, but utilizes a different notion of argument compatibility, the question arises how this ‘Gainesian approach’ relates to the Fϕ -QFMs defined in terms of the extension principle. In order to answer this question, let us recall some concepts needed to define the new models. First we define the compatibility θ(x, y) of a gradual truth value x ∈ I with a crisp truth value y ∈ 2 = {0, 1}. Definition 10.44. θ : I × 2 −→ I is defined by
θ(x, y) = { 2x : x ≤ 1/2, y = 1 ;  2 − 2x : x ≥ 1/2, y = 0 ;  1 : else }
for all x ∈ I, y ∈ {0, 1}.
Hence a gradual truth value x ≤ 1/2 is considered fully compatible with ‘false’ (y = 0), but only gradually compatible with ‘true’ (y = 1), and a gradual truth value x ≥ 1/2 is considered fully compatible with ‘true’ (y = 1), but only gradually compatible with ‘false’ (y = 0). θ can be applied to compare membership grades µX(e) (X ∈ P̃(E) a fuzzy subset of E) with ‘crisp’ membership values χY(e) (i.e. ‘Is e ∈ Y?’, Y ∈ P(E) crisp), where e ∈ E is some element of the universe. This suggests the following definition of the compatibility Θ(X, Y) of a fuzzy subset X ∈ P̃(E) with a crisp subset Y ∈ P(E).
Definition 10.45. Let E be a nonempty set. The mapping Θ = ΘE : P̃(E) × P(E) −→ I is defined by
Θ(X, Y) = inf{θ(µX(e), χY(e)) : e ∈ E} ,
for all X ∈ P̃(E), Y ∈ P(E).
Notes 10.46.
• The compatibility Θ(X, Y) of a fuzzy set X ∈ P̃(E) with a crisp set Y ∈ P(E) is therefore the minimal degree of element-wise compatibility of the membership function of X and the characteristic function of Y.
• To present an example, let us reconsider the two-element base set E = {a, b}, and again suppose that X ∈ P̃(E) is defined by µX(a) = 1/3, µX(b) = 3/4. In this case, the compatibility grades become
Θ(X, ∅) = min{θ(µX(a), χ∅(a)), θ(µX(b), χ∅(b))} = min{θ(1/3, 0), θ(3/4, 0)} = min{1, 1/2} = 1/2
Θ(X, {a}) = min{θ(1/3, 1), θ(3/4, 0)} = min{2/3, 1/2} = 1/2
Θ(X, {b}) = min{θ(1/3, 0), θ(3/4, 1)} = min{1, 1} = 1
Θ(X, {a, b}) = min{θ(1/3, 1), θ(3/4, 1)} = min{2/3, 1} = 2/3 .
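The Gainesian compatibility measure can be computed in the same style. The following sketch (illustrative code, not part of the original text) implements θ and Θ and reproduces the four compatibility grades above.

```python
def theta(x, y):
    """Compatibility theta(x, y) of a gradual truth value x with a crisp
    truth value y, per Definition 10.44."""
    if x <= 0.5 and y == 1:
        return 2 * x
    if x >= 0.5 and y == 0:
        return 2 - 2 * x
    return 1.0

def Theta(X, Y):
    """Compatibility Theta(X, Y) of a fuzzy set X (dict) with a crisp set Y,
    per Definition 10.45: element-wise theta, aggregated by the infimum."""
    return min((theta(X[e], 1 if e in Y else 0) for e in X), default=1.0)

X = {'a': 1/3, 'b': 3/4}
for Y in [set(), {'a'}, {'b'}, {'a', 'b'}]:
    print(sorted(Y), Theta(X, Y))   # 1/2, 1/2, 1, 2/3 as in the text
```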
Based on Θ(X, Y), we now define Q̃z(X1, . . . , Xn), the compatibility of Q : P(E)ⁿ −→ I to the gradual truth value z ∈ I, given a choice (X1, . . . , Xn) ∈ P̃(E)ⁿ of fuzzy argument sets.
Definition 10.47. Suppose Q : P(E)ⁿ −→ I is a semi-fuzzy quantifier and z ∈ I. The fuzzy quantifier Q̃z : P̃(E)ⁿ −→ I is defined by
Q̃z(X1, . . . , Xn) = sup{min{Θ(Xi, Yi) : i = 1, . . . , n} : Y = (Y1, . . . , Yn) ∈ Q−1(z)} ,
for all (X1, . . . , Xn) ∈ P̃(E)ⁿ.
In [53, p. 71], it was shown that the fuzzification mechanism proposed by Gaines can naturally be expressed in terms of Q̃z. In addition, three examples were developed which illustrate how DFSes can be defined from (Q̃z(X1, . . . , Xn))z∈I. However, these models have subsequently been shown to be MB-DFSes. The next theorem establishes that all such models, which are defined as a function of (Q̃z(X1, . . . , Xn))z∈I, are in fact Fϕ-DFSes.
Theorem 10.48. Consider a QFM F. Then the following statements are equivalent:
a. F is an Fϕ-QFM which satisfies (ϕ-5);
b. F is a function of the coefficients Q̃z(X1, . . . , Xn).
Hence the QFMs defined in terms of Q̃z are exactly the Fϕ-QFMs which satisfy (ϕ-5). We conclude from Th. 10.42 that the ‘Gainesian’ models defined in terms of Q̃z coincide with the models defined in terms of the extension principle, i.e. with the Fϕ-DFSes.
10.10 Chapter Summary
To sum up, this chapter has presented a different construction of QFMs and developed the corresponding formal apparatus. The departure from the three-valued cut scheme was necessary because this mechanism has now been fully exploited by the introduction of FΩ-QFMs. The models G, G ∗ and G∗ defined in a previous publication on DFS theory [53] resulted from an earlier effort to define models in terms of the fuzzification mechanism proposed by Gaines [49]. These models, though, turned out to be MB-DFSes, and no systematic attempt was made to develop a general class of models from the ‘Gainesian’ fuzzification mechanism. In principle, the Gainesian mechanism is a good starting point due to its foundation in the extension principle of fuzzy set theory. However, the assumed compatibility measure (of a gradual to a crisp truth value; of a fuzzy subset to a crisp set) was considered somewhat awkward, and raised some concerns that the required definitions and theorems would become more complicated than necessary to capture the target class of standard models. In this chapter we therefore started from a simpler measure which quantifies the similarity of fuzzy subsets to given crisp sets
ΞY (X) and corresponding tuples of arguments ΞY1 ,...,Yn (X1 , . . . , Xn ). It was then necessary to introduce the set of similarity grades DX1 ,...,Xn generated from a choice of fuzzy subsets X1 , . . . , Xn under the similarity measure and to characterize its range of possible values, D. Then the key construction was introduced which assigns to each potential quantification result z the set of similarity grades AQ,X1 ,...,Xn (z) generated by those choices of crisp Y1 , . . . , Yn with Q(Y1 , . . . , Yn ) = z. After characterising the range A of possible AQ,X1 ,...,Xn ’s, the class of QFMs definable in terms of argument similarity was introduced in the apparent way, based on aggregation mappings ψ : A −→ I. Next the precise conditions on ψ were analysed under which the resulting QFM Fψ becomes a DFS. The proposed system of conditions (ψ-1)–(ψ-5) was shown to be necessary and sufficient for Fψ to be a DFS, and all Fψ -DFSes were proven to be standard models. In addition the independence of the criteria was established. Next we turned to the issue of relating the new class of Fψ -DFSes to the known class of FΩ /Fω -DFSes. It came as a surprise that every Fψ -DFS is in fact an Fω -DFS (and vice versa), i.e. the class of Fψ -DFSes coincides with the class of Fω -DFSes. Noticing that the two classes of models arose from constructions which are conceptually very different and motivated independently, this finding confirms that the Fω -DFSes (or synonymously, Fψ -DFSes) form a natural class of standard models of fuzzy quantification that perhaps even comprise the full class of standard DFSes. The latter hypothesis calls for the future development of analytic tools for a deeper investigation of these models in order to locate their precise place within the standard models. The remainder of the chapter was concerned with the class of models defined in terms of the extension principle. To this end, a mapping fA was derived from each A ∈ A. By composing these mappings with AQ,X1 ,...,Xn we defined the new base construction, that of fQ,X1 ,...,Xn . For each potential quantification result z, fQ,X1 ,...,Xn (z) expresses the maximum similarity of the fuzzy arguments X1 , . . . , Xn to a choice of crisp arguments Y1 , . . . , Yn ∈ P(E) subject to the condition that Q(Y1 , . . . , Yn ) = z. Next the set X was introduced and shown to describe precisely the set of those mappings f that occur as f = fQ,X1 ,...,Xn for a choice of Q and X1 , . . . , Xn . Hence X is the proper domain of aggregation operators ϕ : X −→ I which span the new class of Fϕ -QFMs in the usual way, i.e. Fϕ (Q)(X1 , . . . , Xn ) = ϕ(fQ,X1 ,...,Xn ). It has been shown that the resulting fuzzification mechanisms are exactly the QFMs definable in terms of the standard extension principle, which is applied to the similarity grades obtained for the arguments of the quantifier. The move from the base construction AQ,X1 ,...,Xn to the new construction fQ,X1 ,...,Xn means a great simplification because we now deal with a single scalar fQ,X1 ,...,Xn (z) in the unit range, rather than sets of such scalars AQ,X1 ,...,Xn (z). It is hence worthwhile studying this construction of models although no new DFSes are introduced compared to the full class of Fψ -DFSes. Interestingly, the converse is also true, and in fact no models are lost when restricting attention to the subclass of Fϕ -DFSes. This is because every Fψ -DFS is known to satisfy (ψ-5), which entails that ψ(A) can be computed from fA , which underlies
the definition of Fϕ -QFMs. Due to the close relationship between AQ,X1 ,...,Xn and the derived fQ,X1 ,...,Xn , the precise conditions on ϕ which make Fϕ a DFS are apparent from the corresponding conditions (ψ-1)–(ψ-5) imposed on ψ. By adapting these conditions, it was easy to obtain a set of necessary and sufficient conditions (ϕ-1)–(ϕ-5) imposed on ϕ, and to prove that these conditions are independent. Finally we reviewed the fuzzification mechanism proposed by Gaines [49] and its reformulation as a base construction for QFMs proposed in [53]. In the course of this investigation, it was shown that all of the resulting QFMs are Fϕ -QFMs and thus definable in terms of argument similarity and the extension principle. Conversely, all ‘reasonable’ choices of Fϕ where ϕ satisfies at least (ϕ-5) can be expressed as ‘Gainesian’ QFMs, and thus be reduced to a mechanism claimed to provide a ‘foundation of fuzzy reasoning’ [49].
11 Implementation of Quantifiers in the Models
11.1 Motivation and Chapter Overview
By abstracting from computational considerations in the first part of this book, it became possible to develop a theory of fuzzy quantification which rests on an axiomatic foundation. These theoretical advances will only be significant in practice, however, if the theory is also computational and thus permits an efficient implementation of quantifiers in the new models. A model of potential relevance to applications must show sufficient robustness against small changes in the data. We will therefore focus on practical models which are both Q-continuous and arg-continuous (see Def. 6.1 and Def. 6.2). Recalling from Th. 9.9 that the class of Fξ-DFSes contains all known models with these properties, we will thus be concerned with the implementation of quantifiers in models of the Fξ-type only. Due to the diversity of these models, one cannot expect a ‘generic’ implementation which fits all examples. However, we can try to develop techniques which will prove useful as building-blocks for implementing arbitrary Fξ-models. These techniques will be concerned with common patterns found in all Fξ-DFSes. For example, consider the joint constructive principle underlying these models, i.e. the quantification result is always computed by applying an aggregation mapping ξ to the upper and lower bounds ⊤Q,X1,...,Xn and ⊥Q,X1,...,Xn. The development of algorithms for computing these upper and lower bounds, then, is of obvious utility to the implementation of quantifiers in arbitrary Fξ-models. The chapter on implementation is organized as follows. We will start with a discussion of simple two-valued quantifiers which share the same interpretation across all standard models. These quantifiers can be expressed in terms of the membership grades of the arguments and the connectives min, max and ¬. We can use such quantifiers to compare two fuzzy sets by cardinality, to assess their degree of being equal, etc. Next we turn to the implementation of unary quantifiers on finite base sets which are no longer required to be two-valued. This investigation will reveal an important relationship between quantitativity and cardinality assessments. Based on this analysis, we
will then show that for the considered quantifiers, the model MCX can be expressed in terms of a measure of fuzzy cardinality. This reformulation will be useful, among other things, to establish the interpretation of quantifiers like “between ten and twenty” or “exactly five” in the standard models. Having considered these basic examples, we will focus on the development of algorithms for implementing more complex quantifiers. To this end, we first explain how the mappings Q,X1 ,...,Xn and ⊥Q,X1 ,...,Xn on which every Fξ -DFS is based, can be given a finite representation. The problem is that in practice, one can consider only a limited number of cutting levels, rather than the continuous range of all γ ∈ I. It will be shown that the reduction to a finite sample is always possible if the base set is finite. This means that every Fξ -DFS can be expressed in terms of a finite sample of the cut levels. The prototypical models M, MCX and Fowa will serve to demonstrate this reformulation. In this way, we obtain the basic computational procedures for implementing quantifiers in these models. Some efficiency improvements are necessary to make these methods useful in practice. Specifically, we will show that Q,X1 ,...,Xn and ⊥Q,X1 ,...,Xn can be computed from cardinality coefficients sampled from X1 , . . . , Xn and their Boolean combinations. We will also explain how these coefficients can be efficiently computed from histogram information. By combining these techniques, we obtain practical algorithms for implementing quantifiers in the chosen models. These algorithms are surprisingly simple, but sufficiently powerful to cover absolute quantifiers like “at least ten”, quantifiers of exception like “all except ten”, proportional quantifiers like “most” and cardinal comparatives like “more than”. Some practical examples will also be presented along with measured processing times, which demonstrate that these algorithms are pretty fast, and ready for use in practical applications.
11.2 ‘Simple’ Quantifiers
Before turning to more complex cases, let us first consider the standard quantifiers. The following results are straightforward from the representation of the universal and existential quantifiers developed in Th. 4.55 and Th. 4.61. It is sufficient to notice that the considered quantifiers are constructed from the logical ones in terms of negation, antonym, or unions/intersections of arguments, to which every model is known to conform by Th. 4.20, Th. 4.19, (Z-4) and Th. 4.24, respectively.
Theorem 11.1. In every standard DFS F and for all E ≠ ∅,
F(∃)(X) = sup{µX(e) : e ∈ E}
F(∀)(X) = inf{µX(e) : e ∈ E}
F(all)(X1, X2) = inf{max(1 − µX1(e), µX2(e)) : e ∈ E}
F(some)(X1, X2) = sup{min(µX1(e), µX2(e)) : e ∈ E}
F(no)(X1, X2) = inf{max(1 − µX1(e), 1 − µX2(e)) : e ∈ E}
for all X, X1, X2 ∈ P̃(E).
These interpretations of the elementary quantifiers are quite satisfactory. Next we shall discuss the NL quantifier “at least k” and its variants. It is convenient to start with the unrestricted version of the quantifier, and to transfer these results in a subsequent step to the case of restricted quantification which involves two arguments. Hence let us focus on this type of quantifiers:
Definition 11.2. Suppose E ≠ ∅ is a base set and k ∈ N. The quantifier [≥ k] : P(E) −→ 2 is defined by
[≥ k](Y) = { 1 : |Y| ≥ k ;  0 : else }
for all Y ∈ P(E).
[≥ k] is a standard quantifier because in the crisp case, it can be expressed as a Boolean combination involving the existential and universal quantifiers, see introduction p. 6. As to the interpretation of [≥ k] in the models, let us first establish generic bounds on the possible quantification results, which are valid in arbitrary DFSes, i.e. not limited in scope to the standard models. Based on the coefficients µ[j](X) stipulated in Def. 7.91, which denote the j-th largest membership grade of the fuzzy subset X, these upper and lower bounds on the interpretation of [≥ k] can be expressed as follows.
Theorem 11.3. Let E ≠ ∅ be a given finite base set and k ∈ N. Then in every DFS F,
µ[1](X) ∧̃ · · · ∧̃ µ[k](X) ≤ F([≥k])(X) ≤ µ[k](X) ∨̃ · · · ∨̃ µ[m](X) ,
for all X ∈ P̃(E), where m = |E|.
In the case of a standard model, the upper and lower bounds coincide, because ∧̃ and ∨̃ then become min and max, respectively. Consequently, the above theorem uniquely determines the interpretation of [≥ k] in the common models (this is apparent from the fact that the µ[j](X) form a non-increasing sequence). In addition, it is a rather straightforward task to extend this analysis to base sets of infinite cardinality as well. Summing up, the following result can then be proven for the regular models.
Theorem 11.4. Suppose that F is a standard DFS, E ≠ ∅ is a nonempty base set and k ∈ N. Then
F([≥k])(X) = sup{α ∈ I : |X≥α| ≥ k} ,
for all X ∈ P̃(E). In particular, if E is finite, then
F([≥k])(X) = µ[k](X) .
Note 11.5. Recalling the notion of FG-count [197], the theorem thus asserts that F([≥k])(X) = µFG-count(X)(k).
The analysis of [≥k] also reveals how some derived NL quantifiers are interpreted in the models. By decomposing the two-place quantifier “at least k” into at least k = [≥k]∩, and by further utilizing that all models are compatible with intersections of argument sets, we can now deduce that
F(at least k)(X1, X2) = sup{α ∈ I : |(X1 ∩ X2)≥α| ≥ k} ,
for all X1, X2 ∈ P̃(E), which in the finite case again becomes
F(at least k)(X1, X2) = µ[k](X1 ∩ X2) .
Next we consider some apparent derivations of “at least k”: the quantifiers “more than k”, “less than k” and “at most k”. In terms of the known construction of external negation, all of these quantifiers can be reduced to “at least k”, i.e.
more than k = at least k+1
less than k = 1 − at least k
at most k = 1 − more than k
(this is immediate from the definition of these quantifiers, see Def. 2.3). Owing to Th. 4.20, these quantifiers can thus be computed from the above interpretation of “at least k” in the apparent ways. Finally let us consider some examples of ‘simple’ quantifiers which are not directly available in NL, but still useful to express relationships between fuzzy subsets and to compare these by cardinality. The following definition introduces these quantifiers and also stipulates the corresponding notation.
Definition 11.6. Let E ≠ ∅ be some base set. For finite E, we define quantifiers [card ≥], [card >], [card =] : P(E)² −→ 2 by
[card ≥](Y1, Y2) = { 1 : |Y1| ≥ |Y2| ;  0 : else }   (11.1)
[card >](Y1, Y2) = { 1 : |Y1| > |Y2| ;  0 : else }   (11.2)
[card =](Y1, Y2) = { 1 : |Y1| = |Y2| ;  0 : else }   (11.3)
for all Y1, Y2 ∈ P(E). In addition, we define the quantifier eq : P(E)² −→ I by
eq(Y1, Y2) = { 1 : Y1 = Y2 ;  0 : Y1 ≠ Y2 }   (11.4)
These quantifiers serve the following purposes: [card ≥](Y1 , Y2 ) checks if the cardinality of Y1 is at least as large as the cardinality of Y2 ; [card >](Y1 , Y2 ) checks if the cardinality of Y1 exceeds that of Y2 ; the quantifier [card =](Y1 , Y2 ) checks if Y1 and Y2 have the same cardinality; and eq(Y1 , Y2 ) checks if Y1 and Y2 are identical. • Some further quantifiers which capture set-theoretic notions have already been introduced without mentioning their set-theoretic counterparts. In particular, the quantifier all(Y1 , Y2 ) checks ‘subsethood’ (inclusion) of Y1 in Y2 ; no(Y1 , Y2 ) checks Y1 and Y2 for being disjoint; and πe (Y ) checks membership of the element e in a set Y . As pointed out by W.V. Quine, this close relationship between the universal quantifier and subsethood has long been known in logic, and “figured already in Peirce’s 1870 algebra of absolute and relative terms, thus even antedating any coherent logic of the variable itself” [71, p. 355]. It is interesting to consider the corresponding fuzzy quantifiers, which generalize these kinds of comparisons to fuzzy sets: Theorem 11.8. Let E = ∅ be some finite base set and X1 , X2 ∈ P(E). Then in every standard DFS F, a. F([card ≥])(X1 , X2 ) = max{min(µ[k] (X1 ), 1 − µ[k+1] (X2 )) : 0 ≤ k ≤ |E|}; b. F([card >])(X1 , X2 ) = max{min(µk (X1 ), 1 − µk (X2 )) : 1 ≤ k ≤ |E|}; c. F([card =])(X1 , X2 ) = max{min{µ[k] (X1 ), 1 − µ[k+1] (X1 ), µ[k] (X2 ), 1 − µ[k+1] (X2 )} : 0 ≤ k ≤ |E|}.
282
11 Implementation of Quantifiers in the Models
Among other things, the resulting fuzzy quantifiers are useful for evaluating statements like “The number of X1 ’s which are X2 ’s is larger than the number of X3 ’s which are X4 ’s”. The interpretation of this statement can now be calculated thus, F([card >])(X1 ∩ X2 , X3 ∩ X4 ) . This is straightforward from Th. 11.8, Th. 4.16 and Th. 4.24. Theorem 11.9. Let E = ∅ be some base set and X1 , X2 ∈ P(E). In every standard DFS F, F(eq)(X1 , X2 ) = min(inf{min(µX1 (e), µX2 (e)) : min(µX1 (e), µX2 (e)) ≥ 1 − max(µX1 (e), µX2 (e))}, inf{1 − max(µX1 (e), µX2 (e)) : 1 − max(µX1 (e), µX2 (e)) > min(µX1 (e), µX2 (e))}) . We shall see below how further examples of simple quantifiers, like “between r and s”, can be implemented in the standard models. This analysis rests on some general observations, which we will now make.
11.3 Direct Implementation of Special Quantifiers in MC X In this section we take a closer look at quantitative (automorphism-invariant) one-place quantifiers. We notice that the quantitative unary quantifiers on finite base sets are exactly those quantifiers that only depend on cardinality information: Theorem 11.10. A one-place semi-fuzzy quantifier Q : P(E) −→ I on a finite base set E = ∅ is quantitative if and only if there exists a mapping q : {0, . . . , |E|} −→ I such that Q(Y ) = q(|Y |), for all Y ∈ P(E). q is defined by q(j) = Q(Yj )
(11.5)
for j ∈ {0, . . . , |E|}, where Yj ∈ P(E) is an arbitrary subset of cardinality |Yj | = j. Notes 11.11. •
In particular, if the quantifier has extension, then there exists µQ : N −→ I such that for all finite base sets E = ∅, q(j) = µQ (j) where j ∈ {0, . . . , |E|}.
11.3 Direct Implementation of Special Quantifiers in MCX
•
283
The above result can also be likened to Mostowski’s analysis of two-valued automorphism-invariant quantifiers, which can be expressed as Q(Y ) = T (ξ0 , ξ1 ), where ξ0 = |E \ Y | and ξ1 = |Y |, see Mostowski [116, p. 13]. Assuming a fixed choice of base set (i.e. ‘quantifiers restricted to E’ in Mostowski’s terminology), ξ0 can obviously be eliminated noticing that ξ0 = |E| − ξ1 .
A similar analysis is also possible for fuzzy quantifiers. Theorem 11.12. : P(E) Let Q −→ I be a unary fuzzy quantifier on a base set of finite cardinality |E| = m. Then the following are equivalent. is quantitative; a. Q b. There exists a mapping g : Im −→ I such that Q(X) = g(µ[1] (X), µ[2] (X), . . . , µ[m] (X))
(11.6)
for all X ∈ P(E). In particular, every quantitative unary fuzzy quantifier can be expressed in terms of FG-count(X), see (1.4). Returning to semi-fuzzy quantifiers, let us now consider the properties of the mapping q : {0, . . . , |E|} −→ I for some special types of quantifiers. Theorem 11.13. A quantitative unary semi-fuzzy quantifier Q : P(E) −→ I on a finite base set is convex if and only if there exists jpk ∈ {0, . . . , m} such that q(") ≤ q(u) for all " ≤ u ≤ jpk , and q(") ≥ q(u) for all jpk ≤ " ≤ u, where q : {0, . . . , |E|} −→ I is the mapping defined by (11.5). Theorem 11.14. A quantitative one-place semi-fuzzy quantifier Q : P(E) −→ I on a finite base set is nondecreasing (nonincreasing) if and only if the mapping q defined by (11.5) is nondecreasing (nonincreasing). Let us now simplify the formulas for ⊥Q,X (γ) and Q,X (γ) in the case of quantitative Q, which is also useful for the median-based models because Qγ (X) = med 1 (Q,X (γ), ⊥Q,X (γ)) 2
by Th. 8.9. To this end, we define q min (", u) = min{q(k) : " ≤ k ≤ u} q
max
(", u) = max{q(k) : " ≤ k ≤ u}
for all 0 ≤ " ≤ u ≤ |E|. We can then assert the following.
(11.7) (11.8)
284
11 Implementation of Quantifiers in the Models
Theorem 11.15. For every quantitative one-place semi-fuzzy quantifier Q : P(E) −→ I on a finite base set, all X ∈ P(E) and γ ∈ I, ⊥Q,X (γ) = q min (", u) Q,X (γ) = q max (", u) Qγ (X) = med 1 (q min (", u), q max (", u)) , 2
abbreviating " = |Xγmax |.
min |X|γ
max
and u = |X|γ
min
, where |X|γ
max
= |Xγmin | and |X|γ
=
Recalling the above theorems Th. 11.13 and Th. 11.14, some simplifications can be made. In the case that Q is convex, the formulas for computing q min and q max reduce to : " > jpk q(") : u < jpk q min (", u) = min(q("), q(u)) , q max (", u) = q(u) (11.9) q(jpk ) : " ≤ jpk ≤ u In the frequent situation that Q is monotonic, these expressions can be further simplified to q min (", u) = q("), q
min
(", u) = q(u),
q max (", u) = q(u) q
max
(", u) = q(")
if Q nondecreasing
(11.10)
if Q nonincreasing.
(11.11)
Up to this point, we have investigated some properties of semi-fuzzy quantifiers on finite base sets, assuming quantitativity and only one argument for simplicity. Next we consider how fuzzy quantification involving these quantifiers can be effected in practice. It must thus be shown how expressions of the form F(Q)(X), where Q is a quantitative one-place quantifier on some finite base set E = ∅ and X ∈ P(E) is a fuzzy argument, can be implemented in a given model F. In the case of MCX , it is possible to state an explicit formula with fixed structure which directly computes quantification results for such expressions. Specifically, the interpretation of the considered quantifiers is reduced to a calculation involving only the usual fuzzy propositional connectives min, max, 1 − x, which are applied to cardinality coefficients determined by the following new notion of fuzzy interval cardinality: Definition 11.16. For every fuzzy subset X ∈ P(E), the fuzzy interval cardinality Xiv ∈ P(N × N) is defined by min(µ[] (X), 1 − µ[u+1] (X)) : " ≤ u µXiv (", u) = for all ", u ∈ N. 0 : else (11.12)
11.3 Direct Implementation of Special Quantifiers in MCX
285
Notes 11.17. •
Intuitively, µXiv (", u) expresses the degree to which X has between " and u elements. Consequently, it is dubbed a ‘fuzzy interval cardinality’ because it assigns a membership grade to each interval of integers, i.e. to every closed range " ≤ k ≤ u of numbers k ∈ N, where " ≤ u. Existing fuzzy cardinality measures, by contrast, assign a membership grade to each individual integer, but not to ranges of integers. • It is apparent from (11.12) that Xiv can be expressed in terms of the FG-count, noticing that µ[j] (X) = µFG-count(X) (j). Obviously, this does not mean that a proposal for fuzzy quantification based on Xiv is just a variant of the FG-count approach. The relevance of the proposed fuzzy interval cardinality to MCX is revealed by the following theorem. Theorem 11.18. For every quantitative one-place quantifier Q : P(E) −→ I on a finite base set and all X ∈ P(E), MCX (Q)(X) = max{min(µXiv (", u), q min (", u)) : 0 ≤ " ≤ u ≤ |E|} = min{max(1 − µXiv (", u), q max (", u)) : 0 ≤ " ≤ u ≤ |E|} . Notes 11.19. • Among other things, the theorem shows that the cardinality-based approach to fuzzy quantification can be recovered in the case of quantitative unary quantifiers on finite domains if we resort to MCX for modelling fuzzy quantification (which has excellent properties anyway). It should be emphasized that absolutely no assumptions regarding monotonicity or other properties of Q are necessary; the obtained results are guaranteed to be plausible (in the sense formalized by the theory) for arbitrary and totally unrestricted choices of quantifiers as long as Q is quantitative. The fuzzy interval cardinality stipulated above therefore achieves the first formalization of fuzzy cardinality for fuzzy sets, which gives a provably satisfying account of fuzzy quantification. In particular, the approach fully covers Zadeh’s quantifiers of the first kind, including all non-monotonic examples. • As noted above, Xiv can be expressed in terms of FG-count(X). However, the converse claim is equally true. Specifically, it is instructive to notice that µFG-count(X) (j) = µXiv (j, |E|) ,
µFE-count(X) (j) = µXiv (j, j) .
This explains why with general quantifiers µQ , the FG-count approach and the FE-count approach yield reasonable results in certain cases (i.e. those in which they coincide with MCX ) but fail in others.
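Theorem 11.18 suggests a direct implementation of MCX for quantitative one-place quantifiers. The following sketch is illustrative only (the quantifier is assumed to be given as a list q of the values q(0), . . . , q(|E|)); it computes the fuzzy interval cardinality of (11.12) and the max-min form of Theorem 11.18.

```python
def mu(X, j):
    # j-th largest membership grade mu_[j](X); 1 for j = 0, 0 beyond |E|
    grades = sorted(X.values(), reverse=True)
    if j <= 0:
        return 1.0
    return grades[j - 1] if j <= len(grades) else 0.0

def interval_card(X, l, u):
    # mu_Xiv(l, u): degree to which X has between l and u elements, see (11.12)
    return min(mu(X, l), 1 - mu(X, u + 1)) if l <= u else 0.0

def mcx_quantitative(q, X):
    """M_CX(Q)(X) for a quantitative one-place quantifier Q with Q(Y) = q[|Y|],
    via the max-min representation of Theorem 11.18."""
    m = len(X)
    return max(
        min(interval_card(X, l, u), min(q[l:u + 1]))   # second term is q_min(l, u)
        for l in range(m + 1) for u in range(l, m + 1)
    )

X = {'a': 0.9, 'b': 0.7, 'c': 0.2}
about_two = [0.0, 0.5, 1.0, 0.5]        # q(j) for j = 0..3: peaked at |Y| = 2
print(interval_card(X, 1, 2), mcx_quantitative(about_two, X))   # 0.8 0.7
```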
286
11 Implementation of Quantifiers in the Models
Further results on the interpretation of two-valued quantifiers in standard models are easily proven from this representation of MCX , recalling the earlier Th. 5.22. For example, consider the unary convex quantifier [≥r : ≤s] defined as follows. Definition 11.20. Let E = ∅ be some base set and r, s ∈ N, r ≤ s. The quantifier [≥r : ≤s] : P(E) −→ 2 is defined by 1 : r ≤ |Y | ≤ s [≥r : ≤s](Y ) = 0 : else for all Y ∈ P(E). The quantifier is apparently useful for interpreting statements like “Between ten and twenty of the married persons have children”. Let us now make explicit the concrete interpretation of [≥r : ≤s] in the standard models. The following theorem is rather straightforward from the achieved representation of MCX , and the known fact that all standard models coincide on two-valued quantifiers: Theorem 11.21. Let E = ∅ be a finite base set and r, s ∈ N, where r ≤ s. Further sup pose that F is a given standard DFS. Then F([≥r : ≤s]) : P(E) −→ I is the fuzzy quantifier defined by F([≥r : ≤s])(X) = µXiv (r, s) = min(µ[r] (X), 1 − µ[s+1] (X)) for all X ∈ P(E). This analysis of F([≥r : ≤s]) is readily extended to the convex natural language quantifiers “between r and s” that were introduced in Def. 2.3. Recalling the operation of intersecting arguments defined in Def. 4.23, the two-place quantifier “between r and s” becomes between r and s = [≥r : ≤s]∩, i.e. between r and s(Y1 , Y2 ) = [≥r : ≤s](Y1 ∩ Y2 ) for all Y1 , Y2 ∈ P(E). Now applying Th. 4.24, we directly obtain from the above Th. 11.21 that in every standard DFS, between r and s(X1 , X2 ) = µX1 ∩X2 iv (r, s) = min(µ[r] (X1 ∩ X2 ), 1 − µ[s+1] (X1 ∩ X2 )) , for all X1 , X2 ∈ P(E), where E = ∅ is again assumed to be finite. The latter result also covers the quantifier “exactly k”, which can be expressed as exactly k = between k and k. Therefore
11.4 The Core Algorithms for General Quantifiers
287
F(exactly k)(X1 , X2 ) = µX1 ∩X2 iv (k, k) = min(µ[k] (X1 ∩ X2 ), 1 − µ[k+1] (X1 ∩ X2 )) = µFE-count(X1 ∩X2 ) (k) in all standard models.
11.4 The Core Algorithms for General Quantifiers In the previous section, it was shown that MCX (Q)(X) can be reduced to a fuzzy propositional formula built from constants q min (", u), q max (", u) ∈ I sampled from the quantifier, and from cardinality coefficients µXiv (", u) ∈ I obtained from the argument. For the important proportional kind and other two-place quantifiers, however, there is no known reduction of MCX (Q)(X1 , X2 ) to a closed-form expression involving some notion of (relative) fuzzy cardinality. This makes it necessary to develop a more general, iterative procedure for computing quantification results. The need for such a general procedure is also obvious in the case of the other models. For example, there is no apparent method of directly computing M(Q)(X) or Fowa (Q)(X) from a formula of fixed structure like that presented in Th. 11.18, even in the simplest case of a quantitative unary quantifier on a finite domain. When trying to implement quantifiers in such general Fξ -DFSes, the first hindrance that we face is this. According to Def. 8.5, these models are defined by Fξ (Q)(X1 , . . . , Xn ) = ξ(, ⊥) , where = Q,X1 ,...,Xn and ⊥ = ⊥Q,X1 ,...,Xn are obtained from the threevalued cuts of the argument sets at all cutting levels γ ∈ I. The point at issue is that the cutting parameter γ takes its values in a continuous range. An implementation on digital computers, however, can only consider a finite sample of relevant cut levels Γ = {γ0 , . . . , γm }, along with the corresponding results of and ⊥ at these levels. In order to solve this problem, we will now take a closer look at the specific shape of and ⊥ for a certain type of fuzzy sets, which includes all fuzzy arguments on finite base sets and certain classes of fuzzy sets defined on base sets of transfinite cardinality. We will show that in this case, the desired reduction to a finite sample of (γ) and ⊥(γ) is always possible. Moreover, we will describe the fundamental computational procedures for Fowa , M and MCX by reformulating these prototypical models in such a way that they operate on the chosen finite sample only. In this way, it becomes possible to compute quantification results of arbitrary quantifiers on finite base sets in the prototype models. In order to develop the desired finite representation of and ⊥, we need some further notation. Hence let some base set E = ∅ be given (not re We shall denote by quired to be finite) and further let X1 , . . . , Xn ∈ P(E).
288
11 Implementation of Quantifiers in the Models
A(X1 , . . . , Xn ) ∈ P(I) the set of membership grades assumed by one of the Xi ’s. Hence A(X1 , . . . , Xn ) = ∪{Im µXi : i ∈ {1, . . . , n}} ,
(11.13)
i.e. A(X1 , . . . , Xn ) = {µXi (e) : e ∈ E, i ∈ {1, . . . , n}}. In dependence on A(X1 , . . . , Xn ), we can further define the corresponding set of three-valued cut levels Γ (X1 , . . . , Xn ) ∈ P(I) according to Γ (X1 , . . . , Xn ) = {2α − 1 : α ∈ A(X1 , . . . , Xn ) ∩ [ 12 , 1]} ∪{1 − 2α : α ∈ A(X1 , . . . , Xn ) ∩ [0, 12 )} ∪{0, 1} .
(11.14)
Notes 11.22. •
It is obvious that for finite base sets, Γ (X1 , . . . , Xn ) is always finite as well. • It should be pointed out that Γ (X1 , . . . , Xn ) always includes the boundary cases γ = 0 and γ = 1, which we enforced by an explicit union with {0, 1} in the defining equation for Γ (X1 , . . . , Xn ). These boundary cases have been included into Γ (X1 , . . . , Xn ) because knowing that 0 ∈ Γ (X1 , . . . , Xn ) and 1 ∈ Γ (X1 , . . . , Xn ) will considerably simplify the presentation of the later algorithms which operate on Γ (X1 , . . . , Xn ). • Obviously, Γ (X1 , . . . , Xn ) can always be decomposed into a union of the components Γ (Xi ), i.e. Γ (X1 , . . . , Xn ) = Γ (X1 ) ∪ · · · ∪ Γ (Xn ) .
(11.15)
The following observation on the behaviour of Γ (•) for Boolean combinations will later prove useful when implementing proportional quantifiers. Theorem 11.23. Let E = ∅ be a given base set. Then a. for all X ∈ P(E), Γ (¬X) = Γ (X); b. for all X1 , X2 ∈ P(E), Γ (X1 ∩ X2 ) ⊆ Γ (X1 , X2 ) and Γ (X1 ∪ X2 ) ⊆ Γ (X1 , X2 ) . Now suppose that Γ (X1 , . . . , Xn ) is finite and that Γ ⊇ Γ (X1 , . . . , Xn ) is a finite superset of Γ (X1 , . . . , Xn ).1 Knowing that {0, 1} ⊆ Γ (X1 , . . . , Xn ), Γ 1
The use of the finite superset Γ ⊇ Γ (X1 , . . . , Xn ), rather than Γ (X1 , . . . , Xn ), has shown itself more convenient for describing the algorithms based on integerarithmetics.
11.4 The Core Algorithms for General Quantifiers
289
can then be written as Γ = {γ0 , . . . , γm } where 0 = γ0 < γ1 < · · · < γm−1 < γm = 1. In dependence on γ0 , . . . , γm , we shall define derived coefficients γ 0 , . . . , γ m−1 according to γj =
γj +γj+1 2
(11.16)
for all j ∈ {0, . . . , m − 1}. In the following, it will be convenient to introduce a succinct notation for the results of Q,X1 ,...,Xn (γ j ) and ⊥Q,X1 ,...,Xn (γ j ) that are observed at each γ j . Hence let us stipulate that j = Q,X1 ,...,Xn (γ j )
(11.17)
⊥j = ⊥Q,X1 ,...,Xn (γ j )
(11.18)
for j ∈ {0, . . . , m − 1}. When discussing MB -models, it will also be useful to have a shorthand notation for Qγ j (X1 , . . . , Xn ). We therefore abbreviate Cj = Qγ j (X1 , . . . , Xn ) = med 1 (j , ⊥j )
(11.19)
2
for all j ∈ {0, . . . , m − 1}, where the equalities are immediate from Th. 8.9, (11.17) and (11.18). Theorem 11.24. n Let Q : P(E) −→ I be a semi-fuzzy quantifier and X1 , . . . , Xn ∈ P(E) a choice of fuzzy arguments such that Γ (X1 , . . . , Xn ) is finite. Further let 0 = γ0 < γ1 < · · · < γm−1 < γm = 1 be given such that Γ = {γ0 , . . . , γm } ⊇ Γ (X1 , . . . , Xn ). Then for all j ∈ {0, . . . , m − 1} and all γ ∈ (γj , γj+1 ),
Q,X1 ,...,Xn (γ) = j ⊥Q,X1 ,...,Xn (γ) = ⊥j Qγ (X1 , . . . , Xn ) = Cj . In other words, Q,X1 ,...,Xn , ⊥Q,X1 ,...,Xn : I −→ I reduce to simple step functions (with a finite number of steps), which are locally constant in the open intervals (γj , γj+1 ), j = 0, . . . , m − 1. In turn, the mapping Qγ (X1 , . . . , Xn ) which underlies the construction of MB -QFMs also reduces to a simple step function. The relevance of these observations manifests itself in the next theorem. Theorem 11.25. n
Let Q : P(E) −→ I be a semi-fuzzy quantifier and (γj )j∈{0,...,m} a finite I-valued sequence such that 0 = γ0 < γ1 < · · · < γm−1 < γm = 1 .
290
11 Implementation of Quantifiers in the Models
Further suppose that (, ⊥), ( , ⊥ ) ∈ T satisfy (γ) = (γ ) ⊥(γ) = ⊥ (γ ) for all j ∈ {0, . . . , m − 1} and γ, γ ∈ (γj , γj+1 ).2 Then ξ(, ⊥) = ξ( , ⊥ ) for every choice of ξ : T −→ I which satisfies (X-2), (X-4) and (X-5). Note 11.26. The theorem states that the results of the step functions and ⊥ at the finite number of interval boundaries are inessential, provided that ξ satisfies (X-2), (X-4) and (X-5). In other words, ξ(, ⊥) is fully determined by the finite sample of γj , j = (γ j ) and ⊥j = ⊥(γ j ), because any choice of ( , ⊥ ) ∈ T with (γ) = j and ⊥ (γ) = ⊥j for all j ∈ {0, . . . , m − 1} and γ ∈ (γj , γj+1 ) will reproduce the desired result ξ( , ⊥ ) = ξ(, ⊥). Let us now put this result into context and highlight its significance to the models of fuzzy quantification. We know from Th. 8.8 that every Fξ -DFS is constructed from a choice of ξ : T −→ I which satisfies the critical conditions n (X-2), (X-4) and (X-5). Now suppose that Q : P(E) −→ I is a semi-fuzzy quantifier and X1 , . . . , Xn ∈ P(E) are chosen such that Γ (X1 , . . . , Xn ) is finite. By combining the above theorems Th. 11.24 and Th. 11.25, it then becomes apparent that Fξ (Q)(X1 , . . . , Xn ) can be computed from the finite number of 3-tuples (γj , j , ⊥j ), which are obtained from an arbitrary sample 0 = γ0 < γ1 < · · · < γm−1 < γm = 1 with {γ0 , . . . , γm } ⊇ Γ (X1 , . . . , Xn ). Recalling that Qγ (X1 , . . . , Xn ) = med 1 (Q,X1 ,...,Xn (γ), ⊥Q,X1 ,...,Xn (γ)) and Cj = med 1 (j , ⊥j ), this demon2
2
strates in particular that Qγ (X1 , . . . , Xn ) can be specified with sufficient detail by listing the finite number of pairs (γj , Cj ). These carry all the necessary information to compute the quantification results in a given MB -DFS. Based on the improved analysis of Q,X1 ,...,Xn and ⊥Q,X1 ,...,Xn achieved in the above theorems, the model Fowa can now be expressed in a form which lends itself better to computation. Theorem 11.27. n a Let Q : P(E) −→ I be a given semi-fuzzy quantifier and X1 , . . . , Xn ∈ P(E) choice of fuzzy arguments such that Γ (X1 , . . . , Xn ) is finite. Further suppose 2
Hence , and ⊥, ⊥ are pairs of step functions which are allowed to differ at the boundaries {γj : j ∈ {0, . . . , m}} of the steps, but which are required to coincide everywhere else.
11.4 The Core Algorithms for General Quantifiers
291
that Γ = {γ0 , . . . , γm } ⊇ Γ (X1 , . . . , Xn ) is given, where 0 = γ0 < γ1 < · · · < γm−1 < γm = 1. Then Fowa (Q)(X1 , . . . , Xn ) =
1 2
m−1
(γj+1 − γj )(j + ⊥j ) .
(11.20)
j=0
Next we will consider the prototypical examples of MB -DFSes that were chosen for implementation, i.e. M and MCX . In order to exploit Th. 11.24 for a computational description of these models as well, we first need some observations on the possible shapes of Cj and corresponding quantification results in the MB models. Theorem 11.28. n Let Q : P(E) −→ I be a semi-fuzzy quantifier and X1 , . . . , Xn ∈ P(E) a choice of fuzzy arguments such that Γ (X1 , . . . , Xn ) is finite. Further let Γ = {γ0 , . . . , γm } ⊇ Γ (X1 , . . . , Xn ) be given with 0 = γ0 < γ1 < · · · < γm−1 < γm = 1. If C0 = 12 , then
MB (Q)(X1 , . . . , Xn ) =
1 2
in every MB -DFS. In order to achieve a further simplification, let us introduce additional abbreviations J ∗ = {j ∈ {0, . . . , m − 1} : Cj = 12 } min J ∗ : J ∗ = ∅ j∗ = m : J∗ = ∅
(11.21) (11.22)
It is obvious from Def. 7.26 that for all f ∈ B, γ ≥ γ results in f (γ ) c f (γ). Hence f (γ) = 12 for some γ ∈ I entails that f (γ ) = 12 for all γ > γ as well. Recalling from Th. 7.28 that (Qγ (X1 , . . . , Xn ))γ∈I ∈ B, we conclude that in fact Cj =
1 2
(11.23)
for all j ≥ j ∗ . Combining this with some apparent observations on the fuzzy median med 1 , we can now assert the following. 2
Theorem 11.29. n
Suppose that Q : P(E) −→ I is a given semi-fuzzy quantifier, and X1 , . . . , Xn ∈ P(E) are given fuzzy arguments such that Γ (X1 , . . . , Xn ) is a finite subset of I. Further assume a choice of Γ = {γ0 , . . . , γm } ⊇ Γ (X1 , . . . , Xn ) with 0 = γ0 < γ1 < · · · < γm−1 < γm = 1.
292
11 Implementation of Quantifiers in the Models
a. If ⊥0 > 12 , then Cj =
⊥j 1 2
: :
j < j∗ j ≥ j∗
: :
j < j∗ j ≥ j∗
for all j ∈ {0, . . . , m − 1}. b. If 0 < 12 , then Cj =
j 1 2
for all j ∈ {0, . . . , m − 1}. Based on these preparations, it is now easy to decompose M(Q)(X1 , . . . , Xn ) into a simple weighted summation: Theorem 11.30. n
Let a semi-fuzzy quantifier Q : P(E) −→ I and fuzzy arguments X1 , . . . , Xn ∈ P(E) be given, and suppose that Γ (X1 , . . . , Xn ) is finite. Further assume a choice of Γ = {γ0 , . . . , γm } ⊇ Γ (X1 , . . . , Xn ) with 0 = γ0 < γ1 < · · · < γm−1 < γm = 1. Then ∗ j −1 (γ + 12 (1 − γj ∗ ) : ⊥0 > 12 j+1 − γj )⊥j j=0 1 : ⊥0 ≤ 12 ≤ 0 M(Q)(X1 , . . . , Xn )= 2 ∗ j −1 (γj+1 − γj )j + 12 (1 − γj ∗ ) : 0 < 12 j=0
The computation of MCX is even simpler compared to Fowa and M, which require a summation in order to determine the final outcome of quantification from the results obtained at the individual j’s. In fact, it is sufficient for calculating MCX (Q)(X1 , . . . , Xn ) to determine the minimal choice of j ∈ {0, . . . , m − 1} which satisfies a certain inequality. Theorem 11.31. n Let Q : P(E) −→ I be a semi-fuzzy quantifier and X1 , . . . , Xn ∈ P(E) a choice of fuzzy arguments such that Γ (X1 , . . . , Xn ) is finite. Further let Γ = {γ0 , . . . , γm } ⊇ Γ (X1 , . . . , Xn ) be a subset of I with 0 = γ0 < γ1 < · · · < γm−1 < γm = 1. In the case that C0 > 12 , let us abbreviate
Bj = 2⊥j − 1 , while in the case that C0 < 12 , we stipulate
(11.24)
11.5 Refinement for Quantitative Quantifiers
Bj = 1 − 2j
293
(11.25)
for all j ∈ {0, . . . , m − 1}. Let us further abbreviate J = {j ∈ {0, . . . , m − 1} : Bj ≤ γj+1 } = min J . Then MCX (Q)(X1 , . . . , Xn ) =
1 2+
1 2
max(γ , B )
1 2
max(γ , B )
1
2 1 2
−
: : :
⊥0 > ⊥0 ≤ 0 <
(11.26) (11.27)
1 2 1 2 1 2
≤ 0 .
To sum up, ⊥Q,X1 ,...,Xn , Q,X1 ,...,Xn and Qγ (X1 , . . . , Xn ) can be specified with sufficient detail by a finite number of γj , j , ⊥j , from which the quantification results Fξ (Q)(X1 , . . . , Xn ) of arbitrary Fξ -DFSes can be computed. For discussing the MB -type, it proved convenient to utilize a derived coefficient Cj = med 1 (j , ⊥j ) instead of j and ⊥j . The representation of 2
these Cj ’s was further simplified based on the known monotonicity properties of (Qγ (X1 , . . . , Xn ))γ∈I ∈ B. The analysis of Q,X1 ,...,Xn , ⊥Q,X1 ,...,Xn and Qγ (X1 , . . . , Xn ) made it possible to develop more explicit formulations of the prototypical models Fowa , M and MCX , which lend themselves directly to implementation. In the remainder of the chapter, we will be concerned with further refinements which improve efficiency compared to the analysis achieved so far. These efficiency improvements will not require any changes to the computation procedures themselves. By contrast, they rest on the observation that the analysis achieved in theorems Th. 11.27, Th. 11.30 and Th. 11.31 still depends on j , ⊥j and/or Cj , and it is these coefficients which now receive more attention. This will also provide the necessary support for implementing further models of the Fξ - and MB -type, which are all known to be expressible in terms of j and ⊥j by Th. 11.25.
11.5 Refinement for Quantitative Quantifiers In this section, we will be concerned with the issue of computing j = n Q,X1 ,...,Xn (γ j ) and ⊥j = ⊥Q,X1 ,...,Xn (γ j ) efficiently, where Q : P(E) −→ I is assumed to have a finite domain E = ∅. In principle, Def. 8.1 permits the direct computation of j and ⊥j based on an exhaustive search of the maximum and minimum3 of SQ,X1 ,...,Xn (γ j ), i.e. j = max Sj and ⊥j = min Sj , where 3
The use of min and max rather than inf and sup is possible because SQ,X1 ,...,Xn (γ j ) is finite. This is immediate from Def. 9.1 and the finiteness of the base set.
294
11 Implementation of Quantifiers in the Models
Sj = SQ,X1 ,...,Xn (γ j ) = {Q(Y1 , . . . , Yn ) : (Y1 , . . . , Yn ) ∈ Tγ j (X1 , . . . , Xn )} . However, the exhaustive search of the minimum and maximum becomes impractical for domains of realistic size. In the worst case, the number of elements in Sj can be exponential in the size of the domain. Therefore the effort for directly computing the minimum and maximum soon becomes prohibitive. In some cases, it is possible to shortcut the exhaustive search. For exn ample, consider a quantifier Q : P(E) −→ I which is nondecreasing in all arguments. The coefficients j and ⊥j can then be expressed as j = max max min min Q((X1 )γ j , . . . , (Xn )γ j ) and ⊥j = Q((X1 )γ j , . . . , (Xn )γ j ). It is thus sufficient to consider a single choice of Yi ∈ Tγ j (Xi ), and all other choices of Yi are known to be irrelevant. In particular, this approach will work for n = 1, i.e. in the case of nondecreasing unary quantifiers Q : P(E) −→ I. It therefore renders possible the efficient evaluation of fuzzy measures in the models of interest. In the remaining cases where this a priori simplification of j and ⊥j is not applicable, a careful analysis of other regularities of NL quantifiers is necessary in order to identify possible simplifications, from which an efficient implementation of j and ⊥j can be derived. The particular regularity that will be assumed in the following is that of quantitativity, see Def. 4.41 Consequently, we will restrict attention to those semi-fuzzy quantifiers n Q : P(E) −→ I on a finite domain E = ∅ which also exhibit automorphisminvariance, i.e. 1 ), . . . , β(Y n )) Q(Y1 , . . . , Yn ) = Q(β(Y for all automorphisms β of E and Y1 , . . . , Yn ∈ P(E). How can we exploit this regularity in order to simplify the computation of j and ⊥j ? To this end, let us observe that j and ⊥j are computed from Sj , which in turn is computed n from the set Tγ j (X1 , . . . , Xn ) ⊆ P(E) defined by Def. 7.15 This suggests that n we can spare unnecessary work, if we define an equivalence relation on P(E) which identifies argument tuples (Y1 , . . . , Yn ), (Y1 , . . . , Yn ) ∈ Tγ j (X1 , . . . , Xn ) with Q(Y1 , . . . , Yn ) = Q(Y1 , . . . , Yn ) . n
n
Hence let ∼ ⊆ P(E) × P(E) denote the following relation, (Y1 , . . . , Yn ) ∼ (Y1 , . . . , Yn )
⇔
1 ), . . . , β(Y n )) . there exists an automorphism β of E s.th. (Y1 , . . . , Yn ) = (β(Y n
It is apparent that ∼ is indeed an equivalence relation on P(E) . In addition, we can conclude from (Y1 , . . . , Yn ) ∼ (Y1 , . . . , Yn ) that Q(Y1 , . . . , Yn ) = Q(Y1 , . . . , Yn ), because Q is assumed to be automorphism-invariant. This means that we can avoid redundant effort in computing j and ⊥j in the n following way. For each (Y1 , . . . , Yn ) ∈ P(E) , let (Y1 , . . . , Yn )∗ denote a representative under ∼, which is linked to the given (Y1 , . . . , Yn ) by (Y1 , . . . , Yn ) ∼
11.5 Refinement for Quantitative Quantifiers
295
(Y1 , . . . , Yn )∗ . Due to the fact that Q is automorphism invariant, we then know that Q(Y1 , . . . , Yn ) = Q(Y1 , . . . , Yn )∗ . Now let T ∗ denote the set of all representatives for argument tuples in Tγ j (X1 , . . . , Xn ), i.e. T ∗ = {(Y1 , . . . , Yn )∗ : (Y1 , . . . , Yn ) ∈ Tγ j (X1 , . . . , Xn )} . Clearly Sj = {Q(Y1 , . . . , Yn ) : (Y1 , . . . , Yn ) ∈ Tγ j (X1 , . . . , Xn )} = {Q(Y1 , . . . , Yn )∗ : (Y1 , . . . , Yn ) ∈ Tγ j (X1 , . . . , Xn )} = {Q(Z1 , . . . , Zn ) : (Z1 , . . . , Zn ) ∈ T ∗ }
(11.28)
and |T ∗ | ≤ |Tγ j (X1 , . . . , Xn )| . Consequently, the search of the maximum and minimum can now be restricted to the small(er) number of representatives in T ∗ , and need not exhaust the total collection of alternatives in Tγ j (X1 , . . . , Xn ). In practice, we will proceed another step and eliminate these representatives as well, in favour of a purely numerical scheme which rests on cardinality information only. The promise of such a scheme is that of avoiding any reference to set-based information (like Tγ j (X1 , . . . , Xn ) or T ∗ ), which might be hard to represent in a uniform way and awkward for processing purposes. By contrast, the cardinal numbers on which the numerical scheme will operate are easy to represent and permit very fast processing on digital computers. For the development of such a scheme, we can utilize a special regularity observed with all quantitative quantifiers, which establishes a link between quantitativity (automorphism-invariance) and definability of the quantifier in terms of cardinalities sampled from the arguments. For example, consider the quantitative unary quantifiers anticipated in Sect. 11.3. It was shown there that every quantifier of the considered type can be expressed in terms of cardinality information, i.e. there is a choice of q : {0, . . . , |E|} −→ I such that Q(Y ) = q(|Y |) for all Y ∈ P(E). Based on this representation, we developed efficient reformulations of j = Q,X (γ j ) and ⊥j = ⊥Q,X (γ j ), which reduce to simple coefficients q max (", u) and q min (", u), respectively. The goal of the present section is to abstract from this special case and develop a similar representation for arbitrary quantifiers, which are only assumed to be quantitative and declared on a finite base set. We will show that the previous analysis of unary quantifiers can be broadened to a fully general result concerning quantitative quantifiers on finite base sets. In particular, the computation of j , ⊥j and Cj from cardinality information is always possible if Q belongs to this type of quantifier, which eliminates the need to operate on the original set-based data in Tγ j (X1 , . . . , Xn ). For the main class of NL quantifiers which are two-place and conservative (like the absolute and proportional kinds), this basic approach will then be further refined in order to achieve optimal performance for all quantifiers of relevance to applications.
296
11 Implementation of Quantifiers in the Models
To begin with, it is well-known from TGQ that the quantitative twovalued quantifiers on finite base sets are exactly those quantifiers which can be computed from the cardinalities of the arguments and their Boolean combinations [9, p. 471]. As will be shown now, this characterisation of quantitative quantifiers on finite base sets generalizes to semi-fuzzy quantifiers, i.e. the quantitative semi-fuzzy quantifiers on finite domains are precisely those semifuzzy quantifiers which solely depend on the cardinality of their arguments (or Boolean combinations thereof). In order to state the theorem, we need some more notation. Hence let a finite base set E = ∅ and n ∈ N be given. For every choice of "1 , . . . , "n ∈ {0, 1}, we define the Boolean combination Φ1 ,...,n by (1 )
Φ1 ,...,n (Y1 , . . . , Yn ) = Y1
∩ · · · ∩ Yn(n )
(11.29)
for all Y1 , . . . , Yn ∈ P(E), where Y () =
Y ¬Y
: "=1 : "=0
(11.30)
for Y ∈ P(E) and " ∈ {0, 1}. Based on these abbreviations, we can now assert the following. Theorem 11.32. n
A semi-fuzzy quantifier Q : P(E) −→ I on a finite base set E = ∅ is quantitative if and only if Q can be computed from the cardinalities of its arguments and their Boolean combinations, i.e. there exist Boolean expressions Φ1 (Y1 , . . . , Yn ), . . . , ΦK (Y1 , . . . , Yn ) for some K ∈ N, and a mapping q : {0, . . . , |E|}K −→ I such that Q(Y1 , . . . , Yn ) = q(|Φ1 (Y1 , . . . , Yn )|, . . . , |ΦK (Y1 , . . . , Yn )|) ,
(11.31)
for all Y1 , . . . , Yn ∈ P(E). In particular, Q can be expressed as Q(Y1 , . . . , Yn ) = q(c), where c : {0, 1}n −→ {0, . . . , |E|} is defined by c1 ,...,n = |Φ1 ,...,n (Y1 , . . . , Yn )| for all "1 , . . . , "n ∈ {0, 1}, see (11.29). Notes 11.33. • For convenience, we will assume that the Boolean expressions Φi (Y1 , . . . , Yn ) are constructed from the arguments Y1 , . . . , Yn by forming unions ∪, intersections ∩ and complementation ¬; of course, a minimal set of operations op ∈ {∩, ¬} would be sufficient as well. It is not required that all variables Y1 , . . . , Yn actually participate in every considered Φi . In particular, trivial combinations like Φ1 (Y1 , Y2 , Y3 ) = Y2 and more complex ones like Φ2 (Y1 , Y2 , Y3 ) = (Y1 ∩ ¬Y2 ) ∪ Y3 are equally admissible.
11.5 Refinement for Quantitative Quantifiers
•
297
The second part of the theorem establishes a worst-case analysis in terms of complexity. It asserts that for unrestricted quantifiers, the number of Boolean combinations required to express Q in terms of cardinalities is bounded by 2n , where n is the arity of the quantifier.
It is instructive to compare the general result of Th. 11.32 to the earlier analysis of quantitative one-place quantifiers that was given in Sect. 11.3. In the new notation, the special case covered there can now be represented in terms of K = 1, Φ(Y ) = Y , c = |Φ(Y )| = |Y | and q : {0, . . . , |E|} −→ I. It is then guaranteed by Th. 11.10 that indeed Q(Y ) = q(c) = q(|Y |), provided that q is defined according to (11.5). Obviously, things will not remain that simple once we consider a more general type of quantifiers. However, most NL quantifiers will not require the worst-case analysis presented in Th. 11.32. These quantifiers are far from being ‘unrestricted’, and they often permit considerable simplifications due to their regular structure. In particular, most NL quantifiers are known to be two-place and conservative, so it is worthwhile studying this type of quantifiers. Hence let us start by discussing general two-place quantifiers, and subsequently taylor this analysis to the conservative examples. It is well-known from TGQ that 2 every quantitative two-valued quantifier Q : P(E) −→ 2 on a finite base set can be computed from a = |Y1 \ Y2 |, b = |Y2 \ Y1 |, c = |Y1 ∩ Y2 | and d = |E\(Y1 ∪Y2 )|, where Y1 , Y2 ∈ P(E) are the arguments of Q, c.f. [10, p. 457]. This result obviously transfers to the more general situation of semi-fuzzy quantifiers: Theorem 11.34. n Let Q : P(E) −→ I be a two-place quantifier on a finite base set E = ∅. If Q is quantitative, then Q(Y1 , Y2 ) can be expressed in terms of a = |Y1 \ Y2 |, b = |Y2 \ Y1 |, c = |Y1 ∩ Y2 | and d = |E \ (Y1 ∪ Y2 )|, for any choice of crisp arguments Y1 , Y2 ∈ P(E). Note 11.35. The theorem merely instantiates the general framework introduced above for n = 2 (actually, it is an apparent corollary to Th. 11.32). Its sole purpose is that of stipulating symbols a, b, c, d which refer to the cardinality coefficients sampled from Φ1 (Y1 , Y2 ) = Y1 \ Y2 , Φ2 (Y1 , Y2 ) = Y2 \ Y1 , Φ3 (Y1 , Y2 ) = Y1 ∩ Y2 , and Φ4 (Y1 , Y2 ) = E \ (Y1 ∪ Y2 ). Things become more interesting once additional assumptions are imposed on Q. In fact, it has been shown by van Benthem [9, p. 446], [10, p. 458] that in the case of a conservative two-valued quantifier, the coefficients a and c are sufficient for determining the quantification result Q(Y1 , Y2 ), which then becomes independent of b and d. Again, this result generalizes to semi-fuzzy quantifiers. Theorem 11.36. n Let Q : P(E) −→ I be a quantitative two-place quantifier on a finite base set
298
11 Implementation of Quantifiers in the Models
E = ∅. If Q is conservative, then Q(Y1 , Y2 ) is fully determined by a = |Y1 \Y2 | and c = |Y1 ∩ Y2 |, i.e. there exists q : {0, . . . , |E|}2 −→ I such that Q(Y1 , Y2 ) = q(a, c) for all Y1 , Y2 ∈ P(E). In other words, we can omit the coefficients b and d in practice, and limit ourselves to computing a and c. For our purposes, it is beneficial to slightly reformulate this result and replace the relevant cardinality coefficients a and c by another choice of coefficients c1 = |Y1 | and c2 = |Y1 ∩ Y2 |. We can then assert the following corollary to the previous theorem. Theorem 11.37. n
Let Q : P(E) −→ I be a quantitative two-place quantifier on a finite base set E = ∅. If Q is conservative, then Q(Y1 , Y2 ) is fully determined by |Y1 | and |Y1 ∩ Y2 |, for all Y1 , Y2 ∈ P(E). Notes 11.38. •
2
Hence every quantitative and conservative quantifier Q : P(E) −→ I on a finite base set can be represented in terms of K = 2, Φ1 (Y1 , Y2 ) = Y1 , Φ2 (Y1 , Y2 ) = Y1 ∩ Y2 , c1 = |Φ1 (Y1 , Y2 )| = |Y1 | and c2 = |Φ2 (Y1 , Y2 )| = |Y1 ∩ Y2 |, based on a suitable choice of q : {0, . . . , |E|}2 −→ I with Q(Y1 , Y2 ) = q(c1 , c2 )
•
(11.32)
for all Y1 , Y2 ∈ P(E). In particular, every proportional quantifier belongs to this general class. For example, the proposed definition of the proportional quantifier “almost all” already conforms to the above scheme, cf. (2.8) and (11.32). In this case, the mapping q becomes µalmost all (c2 /c1 ) : c1 > 0 q(c1 , c2 ) = 1 : else for all c1 , c2 ∈ {0, . . . , |E|}. Based on this choice of q, the quantifier can then be rewritten almost all(Y1 , Y2 ) = q(c1 , c2 ).
The proposed representation of quantifiers by a mapping q : {0, . . . , |E|}K −→ I, which is possible for every quantitative Q on a finite base set, provides a suitable point of departure for developing the required algorithms that will implement j and ⊥j more efficiently. Based on these preparations, we will now consider the set SQ,X1 ,...,Xn (γ) defined by Def. 9.1, from which the coefficients Q,X1 ,...,Xn (γ) = sup SQ,X1 ,...,Xn (γ) and ⊥Q,X1 ,...,Xn (γ) = inf SQ,X1 ,...,Xn (γ) are calculated, see (9.1) and (9.2). Due to these dependencies, a faster method for computing SQ,X1 ,...,Xn (γ) will also speed up the implementation
11.5 Refinement for Quantitative Quantifiers
299
of Q,X1 ,...,Xn (γ) and ⊥Q,X1 ,...,Xn (γ). In order to achieve this improvement, let us observe that SQ,X1 ,...,Xn (γ) can now be computed from q (rather than Q): SQ,X1 ,...,Xn (γ) = {Q(Y1 , . . . , Yn ) : (Y1 , . . . , Yn ) ∈ Tγ (X1 , . . . , Xn )} = {q(c1 , . . . , cK ) : (Y1 , . . . , Yn ) ∈ Tγ (X1 , . . . , Xn ),
by Def. 7.15
c1 = |Φ1 (Y1 , . . . , Yn )|, . . . , cK = |ΦK (Y1 , . . . , Yn )|} = {q(c1 , . . . , cK ) : (c1 , . . . , cK ) ∈ {(|Φ1 (Y1 , . . . , Yn )|, . . . ,
by (11.32)
|ΦK (Y1 , . . . , Yn )|) : (Y1 , . . . , Yn ) ∈ Tγ (X1 , . . . , Xn )} , i.e. SQ,X1 ,...,Xn (γ) = {q(c1 , . . . , cK ) : (c1 , . . . , cK ) ∈ Rγ (X1 , . . . , Xn )} ,
(11.33)
where Rγ (X1 , . . . , Xn ) = RγΦ1 ,...,ΦK (X1 , . . . , Xn ) ⊆ {0, . . . , |E|}K is defined by Rγ (X1 , . . . , Xn ) = {(|Φ1 (Y1 , . . . , Yn )|, . . . , |ΦK (Y1 , . . . , Yn )|) : (Y1 , . . . , Yn ) ∈ Tγ (X1 , . . . , Xn )} .
(11.34)
This is quite obvious. By replacing the instances of quantification Q(Y1 , . . . , Yn ) with instances of q(c1 , . . . , cK ), a similar gain in performance is achieved as in (11.28) which made use of equivalence classes. Conceptually, we can view the move to q(c1 , . . . , cK ), cr = |Φr (Y1 , . . . , Yn )|, as involving the obvious equivalence relation (Y1 , . . . , Yn ) ≈ (Y1 , . . . , Yn ) ⇔ |Φr (Y1 , . . . , Yn )| = |Φr (Y1 , . . . , Yn )| for all r ∈ {1, . . . , K}. Clearly (Y1 , . . . , Yn ) ≈ (Y1 , . . . , Yn ) entails that Q(Y1 , . . . , Yn ) = q(c1 , . . . , ck ) = Q(Y1 , . . . , Yn ) for all (Y1 , . . . , Yn ), (Y1 , . . . , Yn ) ∈ Tγ (X1 , . . . , Xn ), because in this case, |Φr (Y1 , . . . , Yn )| = cr = |Φr (Y1 , . . . , Yn )| for all r = 1, . . . , K. Due to the fact that SQ,X1 ,...,Xn (γ) is now computed from Rγ (X1 , . . . , Xn ), it is ensured that only one instance of q(c1 , . . . , cK ) must be computed in this case, rather than treating Q(Y1 , . . . , Yn ) and Q(Y1 , . . . , Yn ) separately. Compared to the earlier approach based on representatives under ∼, the new method has the benefit of eliminating the set-based information (i.e., equivalence classes and their representatives) altogether, and reducing the computation of the set SQ,X1 ,...,Xn (γ) and of the derived coefficients Q,X1 ,...,Xn (γ), ⊥Q,X1 ,...,Xn (γ) and Qγ (X1 , . . . , Xn ) to simple computations on cardinal numbers. Specifically, these quantities now become
300
11 Implementation of Quantifiers in the Models
Q,X1 ,...,Xn (γ) = max{q(c1 , . . . , cK ) : (c1 , . . . , cK ) ∈ Rγ (X1 , . . . , Xn )} , (11.35) ⊥Q,X1 ,...,Xn (γ) = min{q(c1 , . . . , cK ) : (c1 , . . . , cK ) ∈ Rγ (X1 , . . . , Xn )} (11.36) and Qγ (X1 , . . . , Xn ) = m 12 {q(c1 , . . . , cK ) : (c1 , . . . , cK ) ∈ Rγ (X1 , . . . , Xn )} (11.37) which is straightforward from Def. 7.18, (9.1), (9.2) and Def. 7.18. In the above expressions, min and max have been used, rather than the infimum and supremum required by (9.1) and (9.2). This is again possible because the base set E, and consequently the set SQ,X1 ,...,Xn (γ), is known to be finite. Usually it will be sufficient to consider those choices of γ = γ j only, that are derived from a given set Γ = {γ0 , . . . , γm } ⊇ Γ (X1 , . . . , Xn ) according to (11.16). The associated coefficients j , ⊥j and Cj , which are indexed by j ∈ {0, . . . , m − 1} rather than γ j , now become j = max{q(c1 , . . . , cK ) : (c1 , . . . , cK ) ∈ Rj } ⊥j = min{q(c1 , . . . , cK ) : (c1 , . . . , cK ) ∈ Rj } Cj = med 1 (j , ⊥j ) = m 12 {q(c1 , . . . , cK ) : (c1 , . . . , cK ) ∈ Rj } , 2
where Rj = Rγ j (X1 , . . . , Xn ) . This is apparent from (11.17) and (9.1); (11.18) and (9.2); and finally (11.19) and (8.1) Notes 11.39. •
As indicated by the superscripts, the relation RγΦ1 ,...,ΦK depends on the assumed choice of Boolean combinations Φ1 , . . . , ΦK from which the cardinality coefficients cj = |Φj (Y1 , . . . , Yn )| are sampled. For the sake of readability, though, the superscript will generally be suppressed whenever Φ1 , . . . , ΦK are clear from the context. • At this point, it is instructive to return to the simple example of unary quantifiers, i.e. K = 1, Φ(Y ) = Y , c = |Φ(Y )| = |Y | under the proposed analysis. In this case, the relation Rγ (X) and the associated Rj become Rγ (X) = {k : |Xγmin | ≤ k ≤ |Xγmax |} Rj = {k : "(j) ≤ k ≤ u(j)} , where "(j) = |Xγmin | and u(j) = |Xγmax |. j j
11.5 Refinement for Quantitative Quantifiers
•
301
Let us notice in advance that it is often not necessary to unfold all tuples in the relation Rj , thus considering all quantification results. Quite the reverse, the monotonicity properties of the quantifiers of interest often permit the restriction to a small number of coefficients derived from Rj . Usually, these coefficients can be directly computed from cardinality data, which achieves an additional cut in processing times compared to an approach which does not utilize such a priori knowledge. An example of the possible simplifications has already been given in Sect. 11.3. To be specific, equation (11.9) illustrates how the computation of Q,X and ⊥Q,X can be simplified in the case of a convex quantifier; it is then sufficient to inspect only three cases, i.e. q("), q(u) and q(jpk ), rather than the full range {", " + 1, . . . , u − 1, u}. A further simplification has been demonstrated for monotonic quantifiers. In this case, the computation of the quantities of interest can even be reduced to only two instances, q(") and q(u), that must be evaluated. Comparable savings in effort can be achieved when analysing more complex quantifiers (e.g. proportional) in this way, see 11.9 below.
In the following, we will show that the relation Rγ (X1 , . . . , Xn ) can always max be computed from the upper and lower cardinality bounds ur = |Zr |γ min
and "r = |Zr |γ of Boolean combinations Z1 = Ψ1 (X1 , . . . , Xn ), . . . , ZL = ΨL (X1 , . . . , Xn ) of the arguments X1 , . . . , Xn . In order to state this as a theorem and develop the formal machinery for carrying out the proofs, we need some further notation. Hence suppose that E = ∅ is some base set, X ∈ P(E) a fuzzy subset of E and v ∈ {0, 1}. We then define the fuzzy subset v X ∈ P(E) by X : v=1 X v = (11.38) ¬X : v = 0 , where ¬X is the standard fuzzy complement of X. Using this notation X v , we can conveniently write polynomials of negated and non-negated fuzzy sets, v v e.g. min-terms like X1 1 ∩· · ·∩Xn n where v0 , . . . , vn ∈ {0, 1}. However, minterms involving a fixed number of variables are not sufficient for our current purposes, because we are now dealing with fuzzy sets rather than crisp sets, and consequently it is no longer the case that every Boolean combination can be reduced to a disjunction of min-terms (i.e., to disjunctive normal form). Therefore we will consider Boolean expressions of the following more general form, v
v
Xi1 1 ∩ · · · ∩ Xiww
where 1 ≤ i1 < · · · < iw ≤ n and vt ∈ {0, 1} for all t ∈ {1, . . . , w}. Compared to the earlier fixed-length min-terms, it has now been made possible that only a selection of the Xi participate in the conjunction, while others are omitted.
302
11 Implementation of Quantifiers in the Models
In the following, we will develop a more compact notation for these expressions. Hence let V denote the set of mappings, V = {V : A −→ {0, 1}|A ⊆ {1, . . . , n}} .
(11.39)
Abusing notation, we will identify each V ∈ V and its graph, i.e. V will be written as V = {(i1 , v1 ), . . . , (iw , vw )}, where 1 ≤ i1 < i2 < · · · < iw ≤ n and vt = V (it ) ∈ {0, 1} for all t ∈ {1, . . . , w}. Based on this representation of V , we can define a corresponding Boolean combination of the fuzzy subsets by X1 , . . . , Xn ∈ P(E) v
v
ZV = ΨV (X1 , . . . , Xn ) = Xi1 1 ∩ · · · ∩ Xiww .
(11.40)
In dependence on V , we further abbreviate min
(11.41)
max |ZV |γ
(11.42)
"V = |ZV |γ uV =
for all V ∈ V, where γ ∈ I is a given choice of the cutting parameter. It is these Boolean combinations ZV = ΨV (X1 , . . . , Xn ) from which the cardinality information expressed by the upper and lower cardinality bounds uV and "V will be sampled. However, it is not a trivial task to express the relation Rγ (X1 , . . . , Xn ) in terms of the cardinality coefficients "V and uV , V ∈ V. we therefore take an intermediate step, and first show how the cardinalities of min max arbitrary Boolean combinations of the (Xi )γ and (Xi )γ can be computed from the known coefficients "V and uV , V ∈ V. Hence let E = ∅ be some base set and X ∈ P(E) be a fuzzy subset of E. For p ∈ {0, ∗, 1, +, −}, we define ¬(Xγmax ) : p=0 max min : p=∗ Xγ ∩ ¬Xγ [p] min : p=1 X = Xγ (11.43) Xγmax : p = + ¬(X min ) : p=− γ In order to express the relation Rγ (X1 , . . . , Xn ), it is sufficient to know the cardinality of all Boolean combinations X1 [p1 ] ∩ · · · ∩ Xn [pn ] where p ∈ {0, ∗, 1}. This will first be ascertained for the most fine-grained choice of Rγ (X1 , . . . , Xn ), which is sampled from all min-terms of the variables Y1 , . . . , Yn : Theorem 11.40. Let E = ∅ be a finite base set and X1 , . . . , Xn ∈ P(E). Further let γ ∈ I n be given. For all p = (p1 , . . . , pn ) ∈ {0, ∗, 1} , we abbreviate
11.5 Refinement for Quantitative Quantifiers
303
D(p) = {d ∈ {0, 1}n : (d1 , p1 ), . . . , (dn , pn ) ∈ {(0, 0), (1, 1), (0, ∗), (1, ∗)}} (11.44) [p1 ]
S(p) = X1
∩ · · · ∩ Xn[pn ]
c(p) = |S(p)| =
[p ] |X1 1
(11.45)
∩ ··· ∩
Xn[pn ] |
(11.46)
We further define Λ = {(λp )p∈{0,∗,1}n | for all p ∈ {0, ∗, 1}n , λp : {0, 1}n −→ {0, . . . , |E|} λp (d) = c(p) and λp (d) = 0 for all d ∈ / D(p)} . satisfies d∈{0,1}n
(11.47) Φ
The relation R = Rγ 0,...,0
,...,Φ1,...,1
(X1 , . . . , Xn ) generated by all min-terms (d1 )
Φd1 ,...,dn (Y1 , . . . , Yn ) = Y1
∩ · · · ∩ Yn(dn )
and corresponding cardinality coefficients cd1 ,...,dn = |Φd1 ,...,dn (Y1 , . . . , Yn )| according to (11.34) can then be expressed as R = {c : {0, 1}n −→ {0, . . . , |E|} | there exists (λp )p∈{0,∗,1}n ∈ Λ λp (d)} . such that for all d ∈ {0, 1}n , cd =
(11.48)
p∈{0,∗,1}n
[p ]
In other words, R can be computed from the cardinalities of the sets X1 1 ∩ [p ] · · · ∩ Xn n , where p = (p1 , . . . , pn ) ∈ {0, ∗, 1}n . This result is important because R is the ‘universal’ relation assumed in the worst-case analysis of Th. 11.32 which can express every quantitative Q. As shown in the next theorem, the above reduction to the universal relation R is sufficient to compute every other relation Rγ (X1 , . . . , Xn ) = RγΦ1 ,...,ΦK (X1 , . . . , Xn ) as well, which results from a choice of Boolean combinations Φ1 (Y1 , . . . , Yn ), . . . , ΦK (Y1 , . . . , Yn ): Theorem 11.41. and suppose that Let E = ∅ be a finite base set, X1 , . . . , Xn ∈ P(E), Φ1 (Y1 , . . . , Yn ), . . . , ΦK (Y1 , . . . , Yn ) are Boolean combinations of the crisp variables Y1 , . . . , Yn ∈ P(E). We shall further abbreviate R = RγΦ0,...,0 ,...,Φ1,...,1 (X1 , . . . , Xn )
R =
Φ ,...,ΦK Rγ 1 (X1 , . . . , Xn ) (d )
(d )
(11.49) (11.50)
where Φd1 ,...,dn (Y1 , . . . , Yn ) = Y1 1 ∩ · · · ∩ Yn n , see (11.30). In addition, we will use cardinality coefficients cd1 ,...,dn = |Φd1 ,...,dn (Y1 , . . . , Yn )|. Then
304
11 Implementation of Quantifiers in the Models
R = {(c1 , . . . , cK ) : (c0,...,0 , . . . , c1,...,1 ) ∈ R, ct =
cd , t = 1, . . . , K} ,
d∈Dt
(11.51) where Dt = {d = (d1 , . . . , dn ) ∈ {0, 1}n : Φt (Y1 , . . . , Yn ) ∩ Φd (Y1 , . . . , Yn ) = ∅} (11.52) for all t ∈ {1, . . . , K}. Hence j and ⊥j can be computed from some choice of R , and R can be computed from the most fine-grained, or ‘universal’ relation R, which in turn can [p ] [p ] be computed from the cardinalities |X1 1 ∩· · ·∩Xn n |, pi ∈ {0, ∗, 1}. The next goal is that of computing the cardinality of X1 [p1 ] ∩· · ·∩Xn [pn ] from the cardinality coefficients uV and "V , V ∈ V. In order to accomplish this, we need additional intermediate representations. Rather than fixed-length Boolean combinations of the above type, i.e. X1 [p1 ] ∩ · · · ∩ Xn [pn ] , we therefore introduce more flexible Boolean combinations which take the following form, [p ]
[p ]
Xi1 1 ∩ · · · ∩ Xiww
where 1 ≤ i1 < i2 < · · · < iw ≤ n and pt ∈ {0, 1, ∗, +, −} for all t ∈ {1, . . . , w}. The relevant argument positions it and corresponding pt can again be viewed as the graph P = {(i1 , p1 ), . . . , (iw , pw )} of a mapping P : A −→ {0, 1, ∗, +, −}, where A ⊆ {1, . . . , n} = {it : t = 1, . . . , w} and P (it ) = pt . We can therefore capture all intended combinations of the arguments by letting P range over the following collection, P∗ = {P : A −→ {0, 1, ∗, +, −}|A ⊆ {1, . . . , n}} .
(11.53)
Based on a given P ∈ P∗ and its graph representation P = {(i1 , p1 ), . . . , (iw , pw )}, we can now define the associated crisp set S(P ) ∈ P(E) and its cardinality c(P ) by [p ]
[p ]
S(P ) = Xi1 1 ∩ · · · ∩ Xiww c(P ) = |S(P )| =
[p ] |Xi1 1
∩ ··· ∩
(11.54) [p ] Xiww | .
(11.55)
For reasons that will soon become clear, we also define the following restricted range P, in which P is allowed to assume values in {0, 1, +, −} only: P = {P : A −→ {0, 1, +, −}|A ⊆ {1, . . . , n}} .
(11.56)
Compared to P∗ , P thus comprises those mappings P ∈ P∗ which satisfy P (i) = ∗ for all i ∈ Dom P . The following theorem asserts that the restricted range P is sufficient for computing c(P∗ ) for all P∗ ∈ P∗ , by presenting a suitable reduction:
11.5 Refinement for Quantitative Quantifiers
305
Theorem 11.42. Then for all P∗ ∈ P∗ , Let E = ∅ be a finite base set and X1 , . . . , Xn ∈ P(E).
σ(P ) · c(P ) , (11.57) c(P∗ ) = P ∈P
where the integer-valued mapping σ = σP∗ : P −→ Z is inductively defined as follows. Given P∗ ∈ P, let i∗ = max{i ∈ Dom P∗ : P∗ (i) = ∗} .
(11.58)
In dependence on i∗ , we stipulate a. If i∗ = 0, then
σP∗ (P ) =
1 0
: :
P = P∗ P = P∗
(11.59)
for all P ∈ P. b. If i∗ > 0, then σP∗ (P ) = σP (P ) − σP (P )
(11.60)
P = (P \ {(i∗ , ∗)}) ∪ {(i∗ , +)} P = (P \ {(i∗ , ∗)}) ∪ {(i∗ , 1)} .
(11.61) (11.62)
for all P ∈ P, where
Hence c(P∗ ) can indeed be computed from the cardinalities c(P ), P ∈ P. Having shown that the case p = ∗ can be eliminated, we shall assume in the following that p ∈ {0, 1, +, −} only, thus focussing on the case that P ∈ P. It [p ] [p ] remains to be shown that the cardinalities c(P ) = |Xi1 1 ∩ · · · ∩ Xiww | can be computed from the quantities uV and "V , V ∈ V sampled from X1 , . . . , Xn . In the following, we will introduce some new concepts which simplify the description of the necessary calculations. For p ∈ {0, 1, +, −}, we define its ‘polarity’ pol(p) ∈ {0, 1} and its ‘type’, type(p) ∈ {max, min}, by 1 : p ∈ {1, +} pol(p) = (11.63) 0 : p ∈ {0, −} max : p ∈ {+, −} type(p) = (11.64) min : p ∈ {0, 1} The polarity of p is 0 if X [p] is negated, i.e. in the cases X [0] = ¬(Xγmax ) and X [−] = ¬(Xγmin ), while the polarity is one in the non-negated cases max X [1] = Xγmin and X [+] = Xγmax . The type of p is ‘max’ if X [p] = (X pol(p) )γ max i.e. in the cases X [+] = Xγmax and X [−] = ¬(Xγmin ) = (¬X)γ , see
306
11 Implementation of Quantifiers in the Models min
Th. 7.17. The type of p is ‘min’ if X [p] = (X pol(p) )γ
and thus in the
min
cases X [0] = ¬(Xγmax ) = (¬X)γ and X [1] = Xγmin . Combining the results for the individual cases, we obtain that type(p)
X [p] = (X pol(p) )γ
(11.65)
for all p ∈ {0, 1, +, −}. The following theorem completes the intended reduction, by showing how c(P ), P ∈ P can be computed from the cardinality coefficients "V and uV , V ∈ V. Theorem 11.43. Let E = ∅ be a finite base set, X1 , . . . , Xn ∈ P(E) and γ ∈ I. Then for all P ∈ P,
c(P ) = ζP (V, min) · "V + ζP (V, max) · uV , (11.66) V ∈V
V ∈V
where the integer-valued mapping ζ = ζP : V × {min, max} −→ Z is inductively defined as follows. Recalling the graph representation P = {(i1 , p1 ), . . . , (iw , pw )} of P , where pt = P (it ), let i∗ = max{i ∈ Dom P : type(P (i)) = type(pw )} .
(11.67)
In dependence on i∗ , we stipulate a. If i∗ = 0, then 1 ζP (V, y) = 0
: :
V = {(i1 , pol(p1 )), . . . , (iw , pol(pw ))}, y = type(pw ) else (11.68)
for all V ∈ V and y ∈ {min, max}. b. If i∗ > 0, then ζP (V, y) = ζP (V, y) − ζP (V, y)
(11.69)
for all V ∈ V and y ∈ {min, max}, where P = P \ {(i∗ , p∗ )} P = P ∪ {(i∗ , p )} p∗ = P (i∗ )
(11.70) (11.71) (11.72)
and 0 1 p = + −
: : : :
p∗ p∗ p∗ p∗
=+ =− =0 = 1.
(11.73)
11.5 Refinement for Quantitative Quantifiers
307
Subsequently applying Th. 11.40 to Th. 11.43, every relation Rγ (X1 , . . . , Xn ) min can be computed from a choice of cardinality coefficients "r = |Zr |γ and max ur = |Zr |γ , which are sampled from suitable Boolean combinations Zr = Ψr (X1 , . . . , Xn ) of the arguments. All of the theorems are constructive in style, i.e. they also present a specific choice of Boolean combinations and show how the relation Rγ (X1 , . . . , Xn ) can be computed from the resulting quantities "r and ur . In order to spell out a procedure which covers all possible Rγ (X1 , . . . , Xn ) in full generality, it was of course necessary to make worst-case assumptions in terms of complexity of the quantifiers to be modelled. In fact, the number of Boolean combinations required for the fully general analysis amounts to |V | = 3n . Although the arity n of NL quantifiers is very small in practice (usually n = 2 or n = 3), this indicates that the generic solution might not be optimal from the performance point of view, and should rather be viewed as a proof of concept, and as a point of departure for developing more efficient, dedicated solutions for typical classes of quantifiers. The potential for simplification can be demonstrated nicely for conservative quantifiers, which we will now analyse in some more depth. Our goal is to develop a simple and efficient way of computing Rγ (X1 , X2 ). Hence let 2 Q : P(E) −→ I be a two-place quantifier on a finite base set which is both quantitative and conservative. As has been shown in Th. 11.37 above, we can describe the quantifier in terms of a mapping q : {0, . . . , |E|}2 −→ I, based on an analysis in terms of K = 2, Φ1 (Y1 , Y2 ) = Y1 , Φ2 (Y1 , Y2 ) = Y1 ∩ Y2 , c1 = |Y1 | and c2 = |Y1 ∩ Y2 | for all Y1 , Y2 ∈ P(E). The corresponding relation Rγ (X1 , X2 ), X1 , X2 ∈ P(E), can then be determined from Z1 = Ψ1 (X1 , X2 ) = X1 , Z2 = Ψ2 (X1 , X2 ) = X1 ∩ X2 and Z3 = Ψ3 (X1 , X2 ) = X1 ∩ ¬X2 in the following way. Theorem 11.44. are fuzzy Let E = ∅ be a finite base set and suppose that X1 , X2 ∈ P(E) subsets of E. Further let γ ∈ I, and suppose that Rγ (X1 , X2 ) ⊆ {0, . . . , |E|} × {0, . . . , |E|}, is the relation defined by Rγ (X1 , X2 ) = {(|Y1 |, |Y1 ∩ Y2 |) : Y1 ∈ Tγ (X1 ), Y2 ∈ Tγ (X2 )} . min
min
Then Rγ (X1 , X2 ) can be computed from "1 = |X1 |γ , "2 = |X1 ∩ X2 |γ , "3 = |X1 ∩ viz
min ¬X2 |γ ,
u1 =
max |X1 |γ ,
u2 = |X1 ∩
max X2 |γ ,
max
and u3 = |X1 ∩ ¬X2 |γ
,
Rγ (X1 , X2 ) = {(c1 , c2 ) : "1 ≤ c1 ≤ u1 , max("2 , c1 − u3 ) ≤ c2 ≤ min(u2 , c1 − "3 )} . Note 11.45. In this case, the ‘worst-case analysis’ for n = 2 results in 32 = 9 combinations of X1 , X2 that must possibly be considered for computing R,
308
11 Implementation of Quantifiers in the Models
i.e. X1 ∩ X2 , X1 ∩ ¬X2 , ¬X1 ∩ X2 , ¬X1 ∩ ¬X2 , X1 , ¬X1 , X2 , ¬X2 , and E. Of these, ¬X1 and ¬X2 are obviously redundant, because their cardinality bounds can be computed from the cardinality bounds of X1 and X2 . The full domain E can also be eliminated, because it bears no information about X1 and X2 . This leaves a total of 6 combinations which are potentially necessary for computing Rγ (X1 , X2 ). Hence the conservativity of Q shrinks down this set of required combinations by one half, i.e. only the combinations X1 , X1 ∩ X2 and X1 ∩ ¬X2 must actually be considered. Let us now put this theorem into context. The earlier Th. 11.37 states that a conservative quantifier can be computed from the cardinality of Φ1 (Y1 , Y2 ) = Y1 and Φ2 (Y1 , Y2 ) = Y1 ∩ Y2 . Hence Q,X1 ,X2 and ⊥Q,X1 ,X2 can be computed from that choice of Rγ (X1 , X2 ) assumed in the above theorem. In other words, the theorem ensures that if Q is conservative, then there exist mappings q min , q max : {0, . . . , |E|}4 −→ I such that Q,X1 ,X2 (γ) = q max ("1 , "2 , "3 , u1 , u2 , u3 )
(11.74)
⊥Q,X1 ,X2 (γ) = q
(11.75)
min
("1 , "2 , "3 , u1 , u2 , u3 )
and γ ∈ I, where the coefficients "1 , "2 , "3 , u1 , u2 and for all X1 , X2 ∈ P(E) u3 are defined as above. This is obvious from the Th. 11.44 if we define these mappings as follows, q max ("1 , "2 , "3 , u1 , u2 , u3 ) = max{q(c1 , c2 ) : (c1 , c2 ) ∈ R} q min ("1 , "2 , "3 , u1 , u2 , u3 ) = min{q(c1 , c2 ) : (c1 , c2 ) ∈ R} , where R ⊆ {0, . . . , |E|} × {0, . . . , |E|} is the relation R = {(c1 , c2 ) : "1 ≤ c1 ≤ u1 , max("2 , c1 − u3 ) ≤ c2 ≤ min(u2 , c1 − "3 )} . We shall see below in Sect. 11.9 how this analysis of conservative quantifiers can be developed into an efficient implementation of proportional quantifiers in the models. Other classes of quantifiers can be analysed in a similar way. This will be demonstrated below in Sect. 11.10, where the method is extended to cardinal comparatives like “more. . . than”. The proposed scheme of analysing quantitative quantifiers on finite base sets therefore enables us to implement the relevant NL quantifiers in all models of interest. As witnessed by Th. 11.44, for example, the proposed solution of computing j and ⊥j from a relation Rj = Rγ j on cardinals achieves a very simple description which is also computationally straightforward. In the next section, we will be min concerned with the remaining issue of computing the quantities "r = |Zr |γ j max and ur = |Zr |γ j efficiently, to which the computation of Rj has been reduced. Upon solving this problem, all necessary ingredients will then be available for implementing quantifiers in the models. We will finally put all the pieces together and restate the complete algorithms in pseudo-code. In particular, this self-contained presentation is intended as a point of departure for concrete implementations.
11.6 Computation of Cardinality Bounds
309
11.6 Computation of Cardinality Bounds Given Q and X1 , . . . , Xn , we are now able to compute Q,X1 ,...,Xn , ⊥Q,X1 ,...,Xn max and Qγ (X1 , . . . , Xn ) from the upper cardinality bounds ur = |Zr |γ and min
lower cardinality bounds "r = |Zr |γ of Boolean combinations Z1 , . . . , ZL of the arguments. We have therefore succeeded in reformulating the computation of Q,X1 ,...,Xn , ⊥Q,X1 ,...,Xn and Qγ (X1 , . . . , Xn ) in such a way that it depends only on cardinality information expressed by the tuple of coefficients ("1 , . . . , "L , u1 , . . . , uL ), see Th. 11.40 to Th. 11.43 and equations (11.35), (11.36), (11.37), respectively. Once the cardinality bounds "1 , . . . , "L , u1 , . . . , uL are known, the subsequent computations can dispense with any direct reference to the fuzzy sets X1 , . . . , Xn , which might be too large and awkward to operate upon. The definition of the cardinality coefficients "r and ur , however, still depends on X1 , . . . , Xn , which means that up to this point, we have merely shifted the burden of dealing with the X1 , . . . , Xn into the computation of these coefficients. When computing quantification results, it is necessary to calculate the ur and "r for every considered choice of the cutting parameter γ that controls the three-valued cuts. Due to the fact that these coefficients must be computed repeatedly, their computation should be implemented as efficiently as possible. Consequently, the above strategy will now be re-iterated in order to ban any direct reference to X1 , . . . , Xn from the computation of the cardinality bounds "r and ur . Basically, a pre-processing step will be applied which extracts the relevant information from the given fuzzy subsets, and the extracted cardinality information will then be represented in a way which lends itself to efficient processing of the "r and ur . In order to discuss the intermediate representation extracted from the fuzzy sets, and in order to construct a computation procedure which operates on this representation, it is sufficient to consider each Zr in isolation. We will thus focus on one fuzzy set at a time, which for simplicity we denote by X, which is supposed to be chosen from {Z1 , . . . , ZL }. In addition, we will assume that a sample of cutting levels Γ = {γ0 , . . . , γm } be given, equipped with the usual properties, i.e. 0 = γ0 < γ1 < · · · < γm = 1 and Γ (X) ⊆ Γ . It is then convenient to index the results of the cardinality bounds for each γ j by j. We will thus denote by "(j) and u(j) the lower and upper cardinality bounds of the considered fuzzy set X ∈ P(E) in the j-th iteration, i.e. min
(11.76)
max |X|γ j
(11.77)
"(j) = |X|γ j u(j) =
for j ∈ {0, . . . , m − 1}. Rather than computing the coefficients "(j) and u(j) in each iteration j = 0, . . . , m − 1 from scratch, it would be advantageous to devise a procedure which computes the values of these coefficients in the current iteration, i.e. "(j) and u(j), from their respective values in the previous
310
11 Implementation of Quantifiers in the Models
iteration, i.e. from "(j − 1) and u(j − 1). This re-use of earlier computations promises a cut in processing times due to the elimination of redundant effort. For the sake of efficiency, this procedure should not refer directly to the fuzzy set X from which the coefficients "(j) and u(j) are sampled. By contrast, all computations should rest on cardinality information only. A suitable mechanism which extracts exactly those aspects of a fuzzy subset that characterize its cardinality, is provided by the histogram.4 Definition 11.46. By the histogram of a fuzzy subset X ∈ P(E) (where E is some finite set), we denote the mapping HistX : I −→ N defined by HistX (z) = |µ−1 X (z)| = |{e ∈ E : µX (e) = z}|
(11.78)
for all z ∈ I. Notes 11.47. • It is apparent from the assumed finiteness of E that HistX is well-defined, i.e. the term |µ−1 X (z)| always refers to a finite cardinality. • The proposed histogram representation can be likened to Rescher’s truth status statistics ξz , which it generalizes from the m-valued to the continuous valued case, see [137, p. 201]. Let us now recall the notation A(X) = {µX (e) : e ∈ E} introduced in (11.13). The finiteness of E obviously entails that A(X) be a finite set as well. In addition, it is quite clear that A(X) is the support of HistX , i.e. A(X) = {z ∈ I : HistX (z) = 0} . In particular, HistX has finite support. This is important because having finite support, HistX can be uniquely determined from a finite representation. To see this, let us consider a finite set A ⊇ A(X); because A(X) is finite, such sets are known to exist. We can then restrict HistX to the finite sample of z ∈ A, by defining a corresponding mapping HA : A −→ N according to HA (z) = HistX (z) 4
This is apparent because two fuzzy subsets X ∈ P(E), X ∈ P(E ) of finite base sets E, E = ∅ have the same histograms, i.e. HistX = HistX , if and only ˆ ˆ if there exists a bijection β : E −→ E such that X = β(X) (and thus also ˆ −1 ˆ X = β (X)). Consequently, histograms capture exactly the cardinality aspect of fuzzy sets, because two finite sets have the same cardinality if and only if there exists a bijection between them.
11.6 Computation of Cardinality Bounds
311
for all z ∈ A. It is obvious that HistX can be recovered from HA , because A ⊇ A(X) grants that the support of HistX is contained in A. Therefore z ∈ I \ A entails that HistX (z) = 0. This proves that knowing HA is sufficient to determine HistX (z) in the full range of all z ∈ I, because HistX (z) can be expressed in terms of HA according to the obvious rule HA (z) : z ∈ A HistX (z) = (11.79) 0 : else for all z ∈ I. The benefit gained by this representation is that the ‘generating’ mapping HA : A −→ N is declared on a finite domain. Therefore HA can be specified by listing the finite number of pairs in its graph, i.e. as a finite list which comprises all pairs in {(z, HistX (z)) : z ∈ A}. Obviously, this demonstrates that the histogram of X has a finite specification, and is thus suitable for representation on digital computers (modulo the finite precision of floating point numbers, of course). Having introduced the histogram-based representation of the considered fuzzy subset X, we will now connect this representation with the given set of three-valued cut levels Γ ⊇ Γ (X), in order to achieve an incremental computation of the cardinality coefficients. Hence suppose that Γ = {γ0 , . . . , γm } ⊇ Γ (X1 , . . . , Xn ) is given, where 0 = γ0 < γ1 < · · · < γm−1 < γm = 1. Let us define AΓ ∈ P(I) by AΓ = { 12 + 12 γ : γ ∈ Γ } ∪ { 12 − 12 γ : γ ∈ Γ } .
(11.80)
It is apparent from the finiteness of Γ that AΓ has finite cardinality as well. Furthermore, AΓ clearly satisfies the desired condition that AΓ ⊇ AΓ (X) ⊇ A(X) .
(11.81)
This is obvious from Γ ⊇ Γ (X), (11.13), (11.14) and (11.80). By the above reasoning, we then know that the support of HistX is contained in AΓ , i.e. the histogram of X can be represented by HAΓ : AΓ −→ N, see (11.79). For notational convenience, we will not refer to HAΓ directly. It will be advantageous to split HAΓ into two mappings H + and H − , which let us access the histogram information in terms of j rather than γ j . These mappings H + , H − : {0, . . . , m} −→ I are defined by H + (j) = HistX ( 12 + 12 γj ) = |{e ∈ E : µX (e) = −
H (j) =
HistX ( 12
−
1 2 γj )
= |{e ∈ E : µX (e) =
1 2 1 2
+ 12 γj }|
(11.82)
1 2 γj }|
(11.83)
−
This is just a convenient representation of HAΓ , because H + and H − are apparently sampled from HAΓ , and because the original mapping HAΓ can be recovered from H + and H − as follows, + H (j) : z = 12 + 12 γj HAΓ (z) = H − (j) : z = 12 − 12 γj
312
11 Implementation of Quantifiers in the Models
for all z ∈ AΓ . Based on the mappings H + and H − , which must be computed from X in a preprocessing step, it is now possible to formulate an incremental procedure which computes the current cardinality bounds "(j) and u(j) in the j-th iteration from the cardinality bounds "(j − 1) and u(j − 1) computed in the previous iteration and from the cardinality information stored in H + (j) and H − (j). Theorem 11.48. Let E = ∅ be a finite base set and X ∈ P(E) a fuzzy subset of E. Further let Γ = {γ0 , . . . , γm } ⊇ Γ (X) be given such that 0 = γ0 < γ1 < · · · < γm−1 < γm = 1. Then "(0) =
m
H + (k)
k=1
u(0) = "(0) + H + (0) , and "(j) = "(j − 1) − H + (j) u(j) = u(j − 1) + H − (j) for all j ∈ {1, . . . , m − 1}. Obviously, neither the formulas for initialisation nor those for updating refer to the original fuzzy set; it is only the histogram of X that must be known for computing the cardinality bounds. We have therefore succeeded in eliminating the fuzzy set X from the procedure for computing "(j) and u(j). Unlike the fuzzy sets themselves, their histograms all look alike. This greatly facilitates processing, because the histograms of arbitrary fuzzy sets can easily be represented in a uniform way. Due to the fact that the update rules described in Th. 11.48 only involve a single lookup operation, i.e. one access to the histogram data in H + (j) or H − (j), and one additional arithmetic operation (sum), the proposed method results in a very fast implementation. The presentation of a histogram-based update rule in Th. 11.48 completes the analysis of fuzzy quantification in Fξ -DFSes and the subsequent discussion of possible efficiency improvements. From a more general perspective, we have shown that Fξ (Q)(X1 , . . . , Xn ) can always be expressed as a function Fξ (Q)(X1 , . . . , Xn ) = Ξ(HistZ1 , . . . , HistZL ) which depends only on the histograms of Boolean combinations Z1 , . . . , ZL of the arguments. However, this should not lure us into defining fuzzy quantifiers as functions of histograms in the spirit of Rescher’s treatment of
11.7 Implementation of Unary Quantifiers
313
quantifiers in many-valued logic, cf. [136] and [137, pp. 197-206]. This is because the histogram-based analysis rests on the assumption of automorphisminvariance, a property shared by most, but not all, NL quantifiers. When trying to define linguistic quantifiers directly in terms of Ξ(HistZ1 , . . . , HistZL ), the correspondence problem discussed in Chap. 2 returns with full force. Consequently, the analysis of certain quantifiers in terms of histograms is only possible in special cases, and it would not admit a comprehensive treatment of fuzzy NL quantification. In addition, it makes no provisions as to the systematicity of interpretation.
11.7 Implementation of Unary Quantifiers This section finally presents the complete algorithms for quantitative oneplace quantifiers on finite base sets. In principle, the necessary techniques for implementing these quantifiers in the prototypical models Fowa , M and MCX have already been described. Nevertheless, it is presumably helpful to collect the results developed in the previous sections and show how the different parts of the implementation work in concert. Hence let E = ∅ be a finite base set and suppose that Q : P(E) −→ I is quantitative. Further let X ∈ P(E) be a given fuzzy argument. We then know from Th. 11.27, Th. 11.30 and Th. 11.31 that Fowa (Q)(X), M(Q)(X) and MCX (Q)(X) can be computed from quantities j , ⊥j sampled from some choice of Γ = {γ0 , . . . , γm } ⊇ Γ (X), where 0 = γ0 < γ1 < · · · < γm = 1. For convenience, we will assume in the following that Γ = Γ (X). Recalling (11.14), (11.82) and (11.83), it is then apparent how H + , H − as well as γj , j = 0, . . . , m can be computed from the fuzzy set X. We will write H+ [j] and H− [j] to denote H + (j) and H − (j), respectively. An (m + 1)-dimensional array G will be used to access the γj ’s, i.e. G[j] denotes γj , j = 0, . . . , m. In particular, this means that G[0] = 0 and G[m] = 1, cf. (11.14). The resulting algorithms which compute Fowa (Q)(X), M(Q)(X) and MCX (Q)(X) from these data are depicted in Table 11.1, 11.2 and 11.3, respectively. Starting with j = 0, each of the algorithms enters a main loop which increments j in each iteration. This lets us compute the current value of the cardinality min max bounds " = |X|γ j and u = |X|γ j in the j-th iteration from their values in the previous iteration and from the histogram information stored in H+ and H− as described in Th. 11.48. The updating of these coefficients takes place in the code fragment with the heading ‘update clauses’. The quantities j = Q,X (γ j ), ⊥j = ⊥Q,X (γ j ) and Cj = med 1 (j , ⊥j ) can then be computed from 2
q min (", u) and q max (", u), as described in Th. 11.15. Apart from incorporating the update rule for the cardinality coefficients " and u, the algorithms merely reflect the earlier analysis in Th. 11.27, Th. 11.30 and Th. 11.31. In these algorithms, the fuzzy set X ∈ P(E) was supplied as the input parameter, which directly reflects the expected input-output behaviour (i.e.
314
11 Implementation of Quantifiers in the Models
Table 11.1. Algorithm for evaluating quantitative one-place quantifiers in Fowa (based on floating-point arithmetics) Algorithm for computing Fowa (Q)(X) (floating-point arithmetics) INPUT: X Compute G, H+ , H− and m; // see text // initialize , u m H+ [j]; := j=1
u := + H+ [0]; sum = G[1] * (q min (,u) + q max (,u)); for( j := 1; j<m; j := j + 1 ) { // update clauses for and u := - H+ [j]; u := u + H− [j]; sum := sum + (G[j+1] - G[j]) * (q min (,u) + q max (,u)); } return sum / 2; END
Table 11.2. Algorithm for evaluating quantitative one-place quantifiers in M (based on floating-point arithmetics) Algorithm for computing M(Q)(X) (floating-point arithmetics) INPUT: X Compute G, H+ , H− and m; // see text // initialize , u m H+ [j]; := j=1
u := + H+ [0]; := q max (,u); ⊥ := q min (,u); if( ⊥ > 12 ) { sum := G[1] * ⊥; for( j := 1; j<m; j := j + 1 ) { // update clauses for and u := - H+ [j]; u := u + H− [j]; ⊥ := q min (,u); if( ⊥ ≤ 12 ) break; sum := sum + (G[j+1] - G[j]) * ⊥; } } else if( < 12 ) { sum := G[1] * ; for( j := 1; j<m; j := j + 1 ) { // update clauses for and u := - H+ [j]; u := u + H− [j]; := q max (,u); if( ≥ 12 ) break; sum := sum + (G[j+1] - G[j]) * ; } } else { return 12 ; } return sum + 12 * (1 - G[j]); END
X is the input, and F(Q)(X) is the returned output). The histogram infor-
11.7 Implementation of Unary Quantifiers
315
Table 11.3. Algorithm for evaluating quantitative one-place quantifiers in MCX (based on floating-point arithmetics) Algorithm for computing MCX (Q)(X) (floating-point arithmetics) INPUT: X Compute G, H+ , H− and m; // see text // initialize , u m H+ [j]; := j=1
u := + H+ [0]; := q max (,u); ⊥ := q min (,u); j := 0; if( ⊥ > 12 ) { B := 2 * ⊥ - 1; while( B > G[j+1] ) { j := j + 1; // update clauses for and u := - H+ [j]; u := u + H− [j]; B := 2 * q min (,u) - 1; } return 12 + 12 max(B, G[j]); } else if( < 12 ) { B := 1 - 2 * ; while( B > G[j+1] ) { j := j + 1; // update clauses for and u := - H+ [j]; u := u + H− [j]; B := 1 - 2 * q max (,u); } return 12 - 12 max(B, G[j]); } return 12 ; END
mation in H+ and H− and the array G which collects the γj ’s, must then be computed from X in the obvious way. However, it also makes sense to consider a variant of these algorithms where the algorithms expect H+ , H− and G as their inputs, rather than the fuzzy set X from which these coefficients have been sampled. This separation of the histogram extraction from the core algorithms will abstract from any specifics concerning the representation of the fuzzy set X (simply because the resulting histograms can be represented in a uniform way, which need not be the case for different types of fuzzy subsets and heterogeneous domains). By separating the histogram extraction from the main algorithms, it becomes possible to develop a generic solution which proves useful for implementing quantifiers in arbitrary domains. The three algorithms presented so far resort to floating-point arithmetics, i.e. the membership grades µX (e) and consequently the γj ’s are allowed to assume arbitrary values in the unit interval. In some cases, however, it can be advantageous to use a reformulation in terms of integer arithmetics. Roughly speaking, the integer-based solution will be beneficial if the size of the domain is large compared to the number of supported membership grades. This will be the case e.g. in fuzzy image processing, where the size of the domain (number
316
11 Implementation of Quantifiers in the Models
of image pixels) easily reaches one million and more; and where a coarse discernment of intensities is often sufficient (e.g., one byte or 256 intensity grades per pixel). Let us now show how the basic algorithms can be fitted to integer arithmetics. In this case, X ∈ P(E) is no longer allowed to assume arbitrary membership grades µX (e) ∈ I. By contrast, there is a fixed choice of m ∈ N \ {0} such that all membership values of the fuzzy argument set X satisfy µX (e) ∈ Um for all e ∈ E, where Um
1 m − 1 = 0, , . . . , ,1 . m m
(11.84)
(11.85)
If X ∈ P(E) satisfies (11.84), i.e. X is admissible for the chosen m , then we can represent the histogram of X as an (m + 1)-dimensional array H : {0, . . . , m } −→ N, defined by k H[k] = {e ∈ E : µX (e) = } m for all k = 0, . . . , m . For simplicity, we shall further assume that m is even, (i.e. m = 2m for a given m ∈ N \ {0}). We can then choose Γ = {j/m : j = 0, . . . , m} ,
(11.86)
which is known from (11.84) to include Γ (X). In this case, H+ [j] and H− [j] can be determined from H as follows, H+ [j] = HistX ( 12 + j/m ) = H[m + j] H− [j] = HistX ( 12 − j/m ) = H[m − j] for j = 0, . . . , m. Thus, we can utilize Th. 11.48 for computing the cardinality coefficients "(j) and u(j) from the array H. The integer-based implementation can dispense with an array G which explicitly stores the relevant values of γj . This is because every γ ∈ Γ can be directly expressed as γj = j/m, j ∈ {0, . . . , m}. The integer-based algorithms for implementing unary quantifiers in Fowa , M and MCX are shown in Table 11.4, 11.5 and 11.6, respectively. In fact, these algorithms are rather similar to their floatingpoint equivalents, because they are also based on Th. 11.27, Th. 11.30 and Th. 11.31; and because they also profit from the incremental update of " and u as described in Th. 11.48. However, some slight optimizations have been made which reflect that Γ is now defined by (11.86). Unlike the floating-point case, this choice of Γ makes it possible that both H+ [j] = H[m + j] = 0 and H− [j] = H[m − j] = 0 in the current iteration j, which entails that no update of ", u, q min (", u) and q max (", u) is necessary (see Th. 11.48 for justification).
11.7 Implementation of Unary Quantifiers
317
Table 11.4. Algorithm for evaluating quantitative one-place quantifiers in Fowa (based on integer arithmetics) Algorithm for computing Fowa (Q)(X) (integer arithmetics) INPUT: X // initialise H, , u H := HistX ; m := H[m+j]; j=1
u := + H[m]; cq := q min (,u) + q max (,u); sum := cq; for( j := 1; j<m; j := j + 1 ) { ch := false; // "change" // update clauses for and u if( H[m+j] = 0 ) { := - H[m+j]; ch := true; } if( H[m-j] = 0 ) { u := u + H[m-j]; ch := true; } if( ch ) // one of or u has changed { cq := q min (,u) + q max (,u); } sum := sum + cq; } return sum / m’; // where m’ = 2*m END
In order to avoid the vacuous recomputation of " and u in this situation, it pays off to check if the above criterion applies, and consequently to perform an update of q min (", u) and q max (", u) only when it is known to be necessary. The variables ch (‘change’) or nc (‘no change’) are used to recognize this situation. ch is set to ‘true’ only if at least one of ", u has changed in the current iteration, which forces q min (", u) and q max (", u) to be recomputed. The flag nc is set to true only if none of the coefficients " and u has changed in the current iteration, which can be used to control the updating of q min (", u) and q max (", u) in a similar way. Apart from these modifications which aim at performance optimization, the algorithm for MCX has undergone some further changes. Compared to Th. 11.31, the recognition of the ‘stop condition’ has been changed, which is replaced with a simpler criterion in the final algorithm (see Table 11.6). This modification is apparent from the use of integer arithmetics, because the possible membership grades are now tied to the set of all k/m , k ∈ {0, . . . , m }, see (11.84). The reduced number of arithmetic operations necessary to check the modified stopping condition, results in a further speed-up of the algorithm for computing MCX (Q)(X). A further simplification of the algorithms for M(Q)(X) and MCX (Q)(X) is possible in the frequent case that Q is monotonic. For example, if Q is nondecreasing, then q min (", u) = q(") and q max (", u) = q(u), see (11.10). We can therefore omit the updating of u in the first for-loop and likewise omit the updating of " in the second for-loop of the algorithms, because only the remaining coefficient is actually needed for computing q min (", u) or q max (", u) in these cases.
318
11 Implementation of Quantifiers in the Models
Table 11.5. Algorithm for evaluating quantitative one-place quantifiers in M (based on integer arithmetics) Algorithm for computing M(Q)(X) (integer arithmetics) INPUT: X // initialise H, , u H := HistX ; m := H[m+j]; j=1
u := + H[m]; := q max (,u); ⊥ := q min (,u); if( ⊥ > 12 ) { sum := ⊥; for( j := 1; j<m; j := j + 1 ) { nc:= true; // "no change" // update clauses for and u if( H[m+j] = 0 ) { := - H[m+j]; nc:= false; } if( H[m-j] = 0 ) { u := u + H[m-j]; nc:= false; } if( nc) { sum := sum + ⊥; continue; } // one of or u has changed ⊥ := q min (,u); if( ⊥ ≤ 12 ) break; sum := sum + ⊥; } } else if( < 12 ) { sum := ; for( j := 1; j<m; j := j + 1 ) { nc:= true; if( H[m+j] = 0 ) { := - H[m+j]; nc:= false; } if( H[m-j] = 0 ) { u := u + H[m-j]; nc:= false; } if( nc) { sum := sum + ; continue; } // one of or u has changed := q max (,u); if( ≥ 12 ) break; sum := sum + ; } } else { return 12 ; } return (sum + 12 *(m-j)) / m; END
11.8 Implementation of Absolute Quantifiers and Quantifiers of Exception Unary quantifiers, like those discussed in the previous section, usually do not express directly at the NL surface. Nevertheless, the modelling and implementation of these quantifiers proves invaluable for treating important classes of quantifiers in NL: Those which depend on an absolute count (i.e., absolute quantifiers), and those which specify the admissible exceptions to a general rule (i.e., quantifiers of exception). Here we shall formally define these classes of quantifiers and explain how quantifying expressions involving such quantifiers can be evaluated.
11.8 Implementation of Absolute Quantifiers and Quantifiers of Exception
319
Table 11.6. Algorithm for evaluating quantitative one-place quantifiers in MCX (based on integer arithmetics) Algorithm for computing MCX (Q)(X) (integer arithmetics) INPUT: X // initialise H, , u H := HistX ; m := H[m+j]; j=1
u := + H[m]; := q max (,u); ⊥ := q min (,u); if( ⊥ > 12 ) { for( j := 1; j<m; j := j + 1 ) { ch := false; // "change" // update clauses for and u if( H[m+j] = 0 ) { := - H[m+j]; ch := true; } if( H[m-j] = 0 ) { u := u + H[m-j]; ch := true; } if( ch ) // one of or u has changed { ⊥ := q min (,u); } if( ⊥ ≤ m+j ) { return (m+j)/m’; } } return 1; } else if( < 12 ) { for( j := 1; j<m; j := j + 1 ) { ch := false; // update clauses for and u if( H[m+j] = 0 ) { := - H[m+j]; ch := true; } if( H[m-j] = 0 ) { u := u + H[m-j]; ch := true; } if( ch ) // one of or u has changed { := q max (,u); } if( ≥ m-j ) { return (m-j)/m’; } } return 0; } return 12 ; END
2
Definition 11.49. For every two-place semi-fuzzy quantifier Q : P(E) −→ I, • Q is called absolute if there exists a quantitative unary quantifier Q : P(E) −→ I such that Q = Q ∩, i.e. Q(Y1 , Y2 ) = Q (Y1 ∩ Y2 ) for all Y1 , Y2 ∈ P(E). In particular, Q(Y1 , Y2 ) = q(|Y1 ∩ Y2 |) •
(11.87)
for some q : {0, . . . , |E|} −→ I. Q is called a quantifier of exception if there exists an absolute quantifier 2 Q : P(E) −→ I such that Q = Q ¬, i.e. Q(Y1 , Y2 ) = Q (Y1 , ¬Y2 ) for all Y1 , Y2 ∈ P(E). In particular Q(Y1 , Y2 ) = q(|Y1 ∩ ¬Y2 |) for some q : {0, . . . , |E|} −→ I.
(11.88)
320
11 Implementation of Quantifiers in the Models Table 11.7. Examples of quantifiers of exception Quantifier all all except exactly k all except about k all except at most k
Antonym (absolute) no exactly k about k at most k
Notes 11.50. •
Absolute quantifiers are of course well-known both in fuzzy set theory and in TGQ; the above definition only serves to describe more precisely the class of quantifiers handled by the algorithms to follow. As already mentioned in the introduction, absolute quantifiers are called ‘quantifiers of the first kind’ in Zadeh’s publications on fuzzy quantifiers, e.g. [197, p. 149]. In TGQ, these quantifiers are normally called ‘absolute’, but Keenan and Moss [91, p. 98] also use the term ‘cardinal determiner’. • Quantifiers of exception are not mentioned in the literature on fuzzy quantifiers, but these quantifiers are well-known to TGQ, see e.g. [92]. The term ‘exception determiner’ is also common [91, p. 123]. To give one more example of the first type, the two-place quantifier “about 50” is an absolute quantifier. Some examples of quantifiers of exception are presented in Table 11.7. The DFS axioms ensure that F(Q)(X1 , X2 ) = F(Q )(X1 ∩ X2 ) ,
(11.89)
whenever Q = Q ∩ is an absolute quantifier and X1 , X2 ∈ P(E). Similarly if Q = Q ∩¬ is a quantifier of exception, then F(Q)(X1 , X2 ) = F(Q )(X1 ∩ ¬X2 ) ,
(11.90)
for all X1 , X2 ∈ P(E), where Q : P(E) −→ I is the unary base quantifier from which Q is constructed. Owing to these reductions, we can therefore use the algorithms for computing F(Q )(X), F ∈ {MCX , M, Fowa } to evaluate absolute quantifiers and quantifiers of exception. 2 For example, consider the absolute quantifier about 50 : P(E) −→ I. According to the above definition, we can then reduce the absolute quantifier to an intersection about 50 = [∼50]∩ , where [∼50] : P(E) −→ I is a suitable unary quantifier. A possible choice of [∼50] is the following. [∼50](Y ) = SZ(|Y |, 35, 45, 55, 85)
11.9 Implementation of Proportional Quantifiers
321
for all Y ∈ P(E), where ‘SZ’ is Zadeh’s SZ-function [197, p. 184] (which has roughly the shape of a smoothened trapezoid), supposed to be nonzero in the range 35..85 and to reach full membership in the range 45..55. Owing to (11.89), we can then use the fuzzy quantifier F([∼50]) : P(E) −→ I in order to interpret “about 50” in the chosen model, e.g. for F = MCX , MCX (about 50)(X1 , X2 ) = MCX ([∼50])(X1 ∩ X2 ) , for all X1 , X2 ∈ P(E). On the other hand, consider the quantifier of exception “all except k”, which can be defined by all except k(Y1 , Y2 ) = [≤ k](Y1 ∩ ¬Y2 ) for all Y1 , Y2 ∈ P(E), i.e. all except k = [≤ k]∩¬. Let us now see how MCX ([≤ k]) can be used for implementing MCX (all except k). Following the generic solution presented in (11.90), the computation of the quantifying expression MCX (all except k)(X1 , X2 ) then reduces to MCX (all except k)(X1 , X2 ) = MCX ([≤ k])(X1 ∩ ¬X2 ) = 1 − µ[k+1] (X1 ∩ ¬X2 ) , for all X1 , X2 ∈ P(E). Similar examples of practical relevance can be treated along the same lines. Noticing that the assumptions which this analysis makes on F are valid in every DFS, this reduction to the unary case is possible in arbitrary models.
11.9 Implementation of Proportional Quantifiers Proportional quantifiers like “almost all”, “most” or “many” are probably the most relevant from an application point of view: in many cases, it is not the absolute numbers, but rather some proportion or ratio of quantities that raises interest. Hence let us define this basic type of quantifiers formally, which lets us communicate about such proportions: Definition 11.51. 2 A two-place semi-fuzzy quantifier Q : P(E) −→ I on a finite base set is called proportional if there exist f : I −→ I and v0 ∈ I such that f (c2 /c1 ) : c1 = 0 Q(Y1 , Y2 ) = q(c1 , c2 ) = v0 : else for all Y1 , Y2 ∈ P(E), where c1 = |Y1 | and c2 = |Y1 ∩ Y2 |. Notes 11.52. • For example, the proposed definition of “almost all” in terms of equation (2.8) fits into this scheme, where f (z) = S(x, 0.7, 0.9) and v0 = 1.
322
11 Implementation of Quantifiers in the Models
•
Usually f and v0 can be chosen independently of E, i.e. Q has extension; see Sect. 4.14. • Let us also observe that all proportional quantifiers are conservative by definition, because they can be expressed in terms of |Y1 | and |Y1 ∩ Y2 |; see Th. 11.37.
Due to the fact that all proportional quantifiers are conservative, we already know in principle how Q,X1 ,X2 and ⊥Q,X1 ,X2 can be computed from q and X1 , X2 . The earlier theorem Th. 11.44 elucidates the precise shape of the relation Rγ (X1 , X2 ), from which we obtain Q,X1 ,X2 (γ) = max{q(c1 , c2 ) : (c1 , c2 ) ∈ Rγ (X1 , X2 )} ⊥Q,X1 ,X2 (γ) = min{q(c1 , c2 ) : (c1 , c2 ) ∈ Rγ (X1 , X2 )} . In particular, the theorem shows how Rγ (X1 , X2 ) can be computed from coefmin min ficients "1 , "2 , "3 , u1 , u2 , u3 which denote the cardinality of (X1 )γ , (X1 ∩ X2 )γ , min
max
max
max
(X1 ∩ ¬X2 )γ , (X1 )γ , (X1 ∩ X2 )γ and (X1 ∩ ¬X2 )γ , respectively. Based on this analysis, it has already been shown in (11.74) and (11.75) that Q,X1 ,X2 (γ) = q max ("1 , "2 , "3 , u1 , u2 , u3 ) and ⊥Q,X1 ,X2 (γ) = q min ("1 , "2 , "3 , u1 , u2 , u3 ) , where q min , q max : {0, . . . , |E|}6 −→ I are defined by q max ("1 , "2 , "3 , u1 , u2 , u3 ) = max{q(c1 , c2 ) : "1 ≤ c1 ≤ u1 , max(c1 − u3 , "2 ) ≤ c2 ≤ min(c1 − "3 , u2 )} (11.91) and q min ("1 , "2 , "3 , u1 , u2 , u3 ) = min{q(c1 , c2 ) : "1 ≤ c1 ≤ u1 , max(c1 − u3 , "2 ) ≤ c2 ≤ min(c1 − "3 , u2 )} . (11.92) In practice, we must specify a choice of Γ = {γ0 , . . . , γm } ⊇ Γ (X1 , X2 ) with 0 = γ0 < γ1 < · · · < γm = 1, which makes the formulas for computing quantification results in Fowa , M and MCX applicable (see Th. 11.27, Th. 11.30 and Th. 11.31). Based on the above analysis, we are then prepared to compute the quantities j = Q,X1 ,X2 (γ j ) and ⊥j = ⊥Q,X1 ,X2 (γ j ) from the coefficients "r and ur obtained at a given cut level γ j , j ∈ {0, . . . , m − 1}. This results in the basic computation procedure for proportional quantifiers (or more generally, all conservative quantifiers). The method is not necessarily optimal in terms of efficiency, however. This is because for typical quantifiers, it is not
11.9 Implementation of Proportional Quantifiers
323
necessary to systematically inspect all pairs (c1 , c2 ) ∈ Rj in order to compute j and ⊥j . By contrast, j and ⊥j can often be computed directly from f , v0 and the given cardinality coefficients. In the following, we will consider the typical case of a proportional quantifier which is nondecreasing in its second argument. Examples comprise “more than half”, “at least 10 percent”, “almost all”, “many” etc. For these quantifiers, we show how q min and q max can be computed more efficiently if one circumvents the explicit computation of the set {q(c1 , c2 ) : "1 ≤ c1 ≤ c2 , max(c1 − u3 , "2 ) ≤ c2 ≤ min(c1 − "3 , u2 )} . By avoiding the direct maximisation/minimisation over all q(c1 , c2 ), we shortcut an operation which can become computationally costly. Apart from simplifying the computation of j and ⊥j , we also show that some of the coefficients "1 , "2 , "3 , u1 , u2 and u3 are superfluous for nondecreasing f , i.e. when q, and in turn Q, are nondecreasing in their second argument. This permits a further simplification of the algorithms in some cases, because we need not keep track of the irrelevant coefficients. Theorem 11.53. 2 Let E = ∅ be a finite base set and Q : P(E) −→ I a proportional quantifier on E, i.e. f (c2 /c1 ) : c1 = 0 Q(Y1 , Y2 ) = q(c1 , c2 ) = : c1 = 0 v0 for all Y1 , Y2 ∈ P(E), where c1 = |Y1 |, c2 = |Y1 ∩ Y2 |, f : I −→ I and v0 ∈ I. Further suppose that f is nondecreasing, i.e. Q is nondecreasing in its second argument. Then Q,X1 ,X2 (γ) = q max ("1 , "3 , u1 , u2 ) ⊥Q,X1 ,X2 (γ) = q min ("1 , "2 , u1 , u3 ) for all X1 , X2 ∈ P(E) and γ ∈ I, where the cardinality coefficients "1 , "2 , "3 , u1 , u2 , u3 are defined as in Th. 11.44, and q min , q max : {0, . . . , |E|}4 −→ I are piecewise defined as follows. 1. "1 > 0. Then q min ("1 , "2 , u1 , u3 ) = f ("2 /("2 + u3 )). 2. "1 = 0. a. "2 + u3 > 0. Then q min ("1 , "2 , u1 , u3 ) = min(v0 , f ("2 /("2 + u3 ))). b. "2 + u3 = 0. i. u1 > 0. Then q min ("1 , "2 , u1 , u3 ) = min(v0 , f (1)). ii. u1 = 0. Then q min ("1 , "2 , u1 , u3 ) = v0 . For q max ("1 , "3 , u1 , u2 ), we obtain:
324
11 Implementation of Quantifiers in the Models
1. "1 > 0. Then q max ("1 , "3 , u1 , u2 ) = f (u2 /(u2 + "3 )). 2. "1 = 0. a. u2 + "3 > 0. Then q max ("1 , "3 , u1 , u2 ) = max(v0 , f (u2 /(u2 + "3 ))). b. u2 + "3 = 0. i. u1 > 0. Then q max ("1 , "3 , u1 , u2 ) = max(v0 , f (0)). ii. u1 = 0. Then q max ("1 , "3 , u1 , u2 ) = v0 . Notes 11.54. •
If v0 ≤ f (1), then min(v0 , f (1)) = v0 , i.e. we need not distinguish 2.b.i and 2.b.ii in the computation of q min . • If f (0) ≤ v0 , then 2.b.i and 2.b.ii need not be distinguished in the computation of q max . • The computation of convex proportional quantifiers like “about 60%” or “between 20% and 40%” is also straightforward, see [51, p. 113]. We will now elaborate the above analysis into complete algorithms for computing proportional quantifiers in Fowa , M and MCX . Hence let a proportional quantifier Q be given, and suppose that Q is specified in terms of the mapping f : I −→ I and constant v0 ∈ I. In order to demonstrate the intended opti mizations, we will assume that f is nondecreasing. Further let X1 , X2 ∈ P(E) be a given choice of fuzzy arguments. In order to implement Q in the prototypical models, we must choose some Γ = {γ0 , . . . , γm } ⊇ Γ (X1 , X2 ) with 0 = γ0 < γ1 < · · · < γm = 1, m ∈ N \ {0}. For developing the solution based on floating-point arithmetics, it is again convenient to assume the minimal choice of Γ , i.e. Γ = Γ (X1 , X2 ). The above rule for computing q min ("1 , "2 , u1 , u3 ) and q max ("1 , "3 , u1 , u2 ) reduces j and ⊥j to a function of coefficients "r , ur , r ∈ {1, 2, 3} sampled from Z1 = Ψ1 (X1 , X2 ) = X1 , Z2 = Ψ2 (X1 , X2 ) = X1 ∩ X2 and Z3 = Ψ3 (X1 , X2 ) = X1 ∩ ¬X2 . Noticing that Γ (X1 ∩ X2 ) ⊆ Γ (X1 , X2 ) and Γ (X1 ∩ ¬X2 ) ⊆ Γ (X1 , X2 ) by Th. 11.23, we can represent the histograms HistZr , r ∈ {1, 2, 3} by mappings Hr+ and Hr− in accordance with (11.82) and (11.83). The basic implementation of proportional quantifiers refers to inputs X1 and X2 . Again, it might be advantageous to separate the computation of the histograms Hr+ and Hr− from the main algorithms.5 In a practical implementation, the assumed fuzzy sets input can easily be exchanged with the resulting histograms if so desired. Apart from the histogram arrays Hr+ and Hr− , we must further compute the array G of cut levels G[j] = γj , j ∈ {0, . . . , m}, and the cardinal m, which specifies the number of γj ’s. Based on these preparations, the earlier theorem Th. 11.48 then 5
In this case, the extraction of the histogram data from X1 and X2 is delegated to a preprocessing step, and it is Hr+ and Hr− that serve as the input of the algorithms, rather than the fuzzy sets X1 and X2 . The histograms then provide a uniform representation suited for arbitrary fuzzy sets, which obviates the need for the algorithms to operate on heterogeneous data.
11.9 Implementation of Proportional Quantifiers
325
Table 11.8. Algorithm for evaluating two-place proportional quantifiers in Fowa (based on floating-point arithmetics) Algorithm for computing Fowa (Q)(X1 , X2 ) (floating-point arithmetics) INPUT: X1 , X2 + + − − − Compute G, m, H+ 1 , H2 ,H3 ,H1 , H2 , H3 ; // see text // initialize , u: for( r := 1; r ≤ 3; r := r+1 ) m + { r := H+ r [j]; ur := r + Hr [0]; } j=1
sum := G[1] * (q min (1 , 2 , u1 , u3 ) + q max (1 , 3 , u1 , u2 )); for( j := 1; j<m; j := j + 1 ) { // update clauses for 1 , 2 , 3 , u1 , u2 , u3 for( r := 1; r ≤ 3; r := r+1 ) − { r := r - H+ r [j]; ur := ur + Hr [j]; } sum := sum + (G[j+1] - G[j]) * (q min (1 , 2 , u1 , u3 ) + q max (1 , 3 , u1 , u2 )); } return sum / 2; END
permits an incremental computation of the quantities "r and ur , r ∈ {1, 2, 3}. In turn, the above theorem Th. 11.53 lets us compute j = q max ("1 , "3 , u1 , u2 ) and ⊥j = q min ("1 , "2 , u1 , u3 ) from the current values of the "r and ur in the j-th iteration. Finally theorems Th. 11.27, Th. 11.30 and Th. 11.31 show how to compute the final outcome of quantification in Fowa , M and MCX from the given data. The resulting algorithms which implement proportional quantification in the models Fowa , M and MCX are presented in Table 11.8, 11.9 and 11.10, respectively. Let us briefly discuss the organization of the algorithms for M and MCX . First of all, both algorithms have been split into two main cases, and associated main loops, in order to restrict computation to either q min ("1 , "2 , u1 , u3 ) or to q max ("1 , "3 , u1 , u2 ). Of course, it is assumed that q min and q max be computed efficiently, following theorem Th. 11.53. Due to the fact that the quantities q min and q max need not be computed simultaneously, some simplifications are possible, i.e. the so-called ‘update clauses’ which keep track of the "r and ur can be restricted to those coefficients that are actually needed for computing q min (in the first main loop) or q max (in the alternative main loop). For example, consider the first main loop in the algorithm for MCX shown in Table 11.9. In this case, the quantification result solely depends on q min ("1 , "2 , u1 , u3 ), which in turn can be computed from the current values of "1 , "2 , u1 and u3 in the j-th iteration. It is therefore sufficient to update the coefficients "1 , "2 , u1 and u3 only; while the updating of "3 and u2 , and also the computation of q max ("1 , "3 , u1 , u2 ), can be omitted. It is also instructive to compare these algorithms to their counterparts for unary quantifiers. In fact, the basic organisation of the algorithms is identical in both cases; all differences are confined to the update clauses which keep track of different sets of coefficients "r and ur . It should be apparent how the basic algorithms can be fitted to nonincreasing proportional quantifiers like “less than 60 percent” as well; suffice it to observe
326
11 Implementation of Quantifiers in the Models
Table 11.9. Algorithm for evaluating two-place proportional quantifiers in M (based on floating-point arithmetics) Algorithm for computing M(Q)(X1 , X2 ) (floating-point arithmetics) INPUT: X1 , X2 + + − − − Compute G, m, H+ 1 , H2 ,H3 ,H1 , H2 , H3 ; // see text // initialize , u: for( r := 1; r ≤ 3; r := r+1 ) m + { r := H+ r [j]; ur := r + Hr [0]; } j=1
:= q max (1 , 3 , u1 , u2 ); ⊥ := q min (1 , 2 , u1 , u3 ); if( ⊥ > 12 ) { sum := G[1] * ⊥; for( j := 1; j<m; j := j + 1 ) { // update clauses for 1 , 2 , u1 , u3 1 := 1 - H+ 1 [j]; 2 := 2 - H+ 2 [j]; u1 := u1 + H− 1 [j]; u3 := u3 + H− 3 [j]; min ⊥ := q (1 , 2 , u1 , u3 ); 1 if( ⊥ ≤ 2 ) break; sum := sum + (G[j+1] - G[j]) * ⊥; } } else if( < 12 ) { sum := G[1] * ; for( j := 1; j<m; j := j + 1 ) { // update clauses for 1 , 3 , u1 , u2 1 := 1 - H+ 1 [j]; 3 := 3 - H+ 3 [j]; u1 := u1 + H− 1 [j]; u2 := u2 + H− 2 [j]; max := q (1 , 3 , u1 , u2 ); 1 if( ≥ 2 ) break; sum := sum + (G[j+1] - G[j]) * ; } } else { return 12 ; } return sum + 12 * (1 - G[j]); END
that if Q (and hence f ) is nonincreasing, then Q = ¬Q is also a proportional quantifier, which can be described in terms of f = ¬f and v0 = ¬v0 . Noticing that the resulting mapping f is nondecreasing, the above algorithms can be used to compute F(Q )(X1 , X2 ), where F is the model of interest. According to Th. 4.20, the computation of F(Q)(X1 , X2 ) can therefore be reduced to F(Q)(X1 , X2 ) = ¬F(Q )(X1 , X2 ). We have stated the above algorithms for a special case often met in practice, in order to demonstrate the potential for optimization. However, these algorithms are easily adapted to general proportional quantifiers, which need not fulfill any monotonicity requirements. To this end, it is sufficient to replace all occurrences of q max ("1 , "3 , u1 , u2 ) and q min ("1 , "2 , u1 , u3 ) in the above algorithms with their generic counterparts, i.e. q max ("1 , "2 , "3 , u1 , u2 , u3 ) and q min ("1 , "2 , "3 , u1 , u2 , u3 ) defined by (11.91) and (11.92). Additional update
11.9 Implementation of Proportional Quantifiers
327
Table 11.10. Algorithm for evaluating two-place proportional quantifiers in MCX (based on floating-point arithmetics) Algorithm for computing MCX (Q)(X1 , X2 ) (floating-point arithmetics) INPUT: X1 , X2 + + − − − Compute G, m, H+ 1 , H2 ,H3 ,H1 , H2 , H3 ; // see text // initialize , u: for( r := 1; r ≤ 3; r := r+1 ) m + { r := H+ r [j]; ur := r + Hr [0]; } j=1
:= q max (1 , 3 , u1 , u2 ); ⊥ := q min (1 , 2 , u1 , u3 ); j := 0; if( ⊥ > 12 ) { B := 2 * ⊥ - 1; while( B > G[j+1] ) { j := j + 1; // update clauses for 1 , 2 , u1 , u3 1 := 1 - H+ 1 [j]; 2 := 2 - H+ 2 [j]; u1 := u1 + H− 1 [j]; u3 := u3 + H− 3 [j]; min B := 2 * q (1 , 2 , u1 , u3 ) - 1; } 1 1 return 2 + 2 max(B, G[j]); } else if( < 12 ) { B := 1 - 2 * ; while( B > G[j+1] ) { j := j + 1; // update clauses for 1 , 3 , u1 , u2 1 := 1 - H+ 1 [j]; 3 := 3 - H+ 3 [j]; u1 := u1 + H− 1 [j]; u2 := u2 + H− 2 [j]; B := 1 - 2 * q max (1 , 3 , u1 , u2 ); } return 12 - 12 max(B, G[j]); } return 12 ; END
clauses must then be added, because it is now necessary to keep track of all coefficients "1 , "2 , "3 , u1 , u2 and u3 . As in the case of unary quantifiers, it also makes sense to develop special versions of the algorithms, optimized for integer arithmetics. Hence let Q be are given fuzzy a proportional quantifier and suppose that X1 , X2 ∈ P(E) arguments. Further let m ∈ N \ {0} be given, which specifies the available range of integers. Once m is chosen, admissible fuzzy arguments are no longer allowed to assume arbitrary membership grades in I = [0, 1]. By contrast, we again require that µXi (e) ∈ Um for all e ∈ E and i ∈ {1, 2}, where Um is defined as in the unary case, cf. (11.85). Under these assumptions on legal choices of fuzzy arguments, the above algorithms which implement proportional quantifiers in Fowa , M and MCX can now be adapted in complete analogy to the earlier changes for
328
11 Implementation of Quantifiers in the Models
Table 11.11. Algorithm for evaluating two-place proportional quantifiers in Fowa (based on integer arithmetics) Algorithm for computing Fowa (Q)(X1 , X2 ) (integer arithmetics) INPUT: X1 , X2 // initialise Hr , , u H1 := HistX1 ; H2 := HistX1 ∩X2 ; H3 := HistX1 ∩¬X2 ; for( r := 1; r ≤ 3; r := r+1 ) m Hr [m+j]; ur := r + Hr [m]; } { r := j=1
cq := q min (1 , 2 , u1 , u3 ) + q max (1 , 3 , u1 , u2 ); sum := cq; for( j := 1; j<m; j := j + 1 ) { ch := false; // "change" // update clauses for 1 , 2 , 3 , u1 , u2 , u3 if( H1 [m+j] = 0 ) { 1 := 1 - H1 [m+j]; ch := true; } if( H2 [m+j] = 0 ) { 2 := 2 - H2 [m+j]; ch := true; } if( H3 [m+j] = 0 ) { 3 := 3 - H3 [m+j]; ch := true; } if( H1 [m-j] = 0 ) { u1 := u1 + H1 [m-j]; ch := true; } if( H2 [m-j] = 0 ) { u2 := u2 + H2 [m-j]; ch := true; } if( H3 [m-j] = 0 ) { u3 := u3 + H3 [m-j]; ch := true; } if( ch ) // one of the r or ur has changed { cq := q min (1 , 2 , u1 , u3 ) + q max (1 , 3 , u1 , u2 ); } sum := sum + cq; } return sum / m’; // where m’ = 2*m END
unary quantifiers, see pp. 315–317. The resulting algorithms for proportional quantification based on integer arithmetics are shown in Table 11.11 (Fowa ), Table 11.12 (M), and Table 11.13 (for MCX ). Specifically, the algorithms have again been augmented with flags ch (‘change’) or nc (‘no change’), which keep track of any changes of the "r or ur in the current iteration. Only if ch is set to true (or nc to false), a recomputation of q min or q max becomes necessary; otherwise, the algorithms will directly skip forward to the next iteration. In practice, the use of these integer-based algorithms should again be considered when processing speed is more important than accuracy, i.e. the domain of quantification is very large, and the loss of precision due to a small choice of m is acceptable. The final decision between the floating-point and integer-based variants should then be based on practical tests, which assess the processing times for typical instances of quantification. The presented algorithms permit an efficient implementation which will be fast enough for most applications. Due to the inherent complexity of proportional quantification compared to the unary case, the algorithms are necessarily more complicated and computationally demanding than those for simple one-place quantifiers. In some situations, however, it is possible to shortcut the general solution and delegate proportional quantification to the fast
11.9 Implementation of Proportional Quantifiers
329
Table 11.12. Algorithm for evaluating two-place proportional quantifiers in M (based on integer arithmetics) Algorithm for computing M(Q)(X1 , X2 ) (integer arithmetics) INPUT: X1 , X2 // initialise Hr , , u H1 := HistX1 ; H2 := HistX1 ∩X2 ; H3 := HistX1 ∩¬X2 ; for( r := 1; r ≤ 3; r := r+1 ) m Hr [m+j]; ur := r + Hr [m]; } { r := j=1
:= q max (1 , 3 , u1 , u2 ); ⊥ := q min (1 , 2 , u1 , u3 ); if( ⊥ > 12 ) { sum := ⊥; for( j := 1; j<m; j := j + 1 ) { nc:= true; // "no change" // update clauses for 1 , 2 , u1 , u3 if( H1 [m+j] = 0 ) { 1 := 1 - H1 [m+j]; nc:= false; } if( H2 [m+j] = 0 ) { 2 := 2 - H2 [m+j]; nc:= false; } if( H1 [m-j] = 0 ) { u1 := u1 + H1 [m-j]; nc:= false; } if( H3 [m-j] = 0 ) { u3 := u3 + H3 [m-j]; nc:= false; } if( nc) { sum := sum + ⊥; continue; } // one of 1 , 2 , u1 , u3 has changed ⊥ := q min (1 , 2 , u1 , u3 ); if( ⊥ ≤ 12 ) break; sum := sum + ⊥; } } else if( < 12 ) { sum := ; for( j := 1; j<m; j := j + 1 ) { nc:= true; // update clauses for 1 , 3 , u1 , u2 if( H1 [m+j] = 0 ) { 1 := 1 - H1 [m+j]; nc:= false; } if( H3 [m+j] = 0 ) { 3 := 3 - H3 [m+j]; nc:= false; } if( H1 [m-j] = 0 ) { u1 := u1 + H1 [m-j]; nc:= false; } if( H2 [m-j] = 0 ) { u2 := u2 + H2 [m-j]; nc:= false; } if( nc) { sum := sum + ; continue; } // one of 1 , 3 , u1 , u2 has changed := q max (1 , 3 , u1 , u2 ); if( ≥ 12 ) break; sum := sum + ; } } else { return 12 ; } return (sum + 12 *(m-j)) / m; END
algorithms for unary quantifiers. Specifically, this simplification is possible when the first argument, i.e. the restriction of the quantifier, happens to be a crisp set, as in “Most married men are bald”. In this case, the restriction married men ∈ P(E) can serve as the domain of a derived unary quantifier, as shown by the following theorem.
330
11 Implementation of Quantifiers in the Models
Table 11.13. Algorithm for evaluating two-place proportional quantifiers in MCX (based on integer arithmetics) Algorithm for computing MCX (Q)(X1 , X2 ) (integer arithmetics) INPUT: X1 , X2 // initialise Hr , , u H1 := HistX1 ; H2 := HistX1 ∩X2 ; H3 := HistX1 ∩¬X2 ; for( r := 1; r ≤ 3; r := r+1 ) m Hr [m+j]; ur := r + Hr [m]; } { r := j=1
:= q max (1 , 3 , u1 , u2 ); ⊥ := q min (1 , 2 , u1 , u3 ); if( ⊥ > 12 ) { for( j := 1; j<m; j := j + 1 ) { ch := false; // "change" // update clauses for 1 , 2 , u1 , u3 if( H1 [m+j] = 0 ) { 1 := 1 - H1 [m+j]; ch := true; } if( H2 [m+j] = 0 ) { 2 := 2 - H2 [m+j]; ch := true; } if( H1 [m-j] = 0 ) { u1 := u1 + H1 [m-j]; ch := true; } if( H3 [m-j] = 0 ) { u3 := u3 + H3 [m-j]; ch := true; } if( ch ) // one of 1 , 2 , u1 , u3 has changed { ⊥ := q min (1 , 2 , u1 , u3 ); } if( ⊥ ≤ m+j ) { return (m+j)/m’; } } return 1; } else if( < 12 ) { for( j := 1; j<m; j := j + 1 ) { ch := false; // update clauses for 1 , 3 , u1 , u2 if( H1 [m+j] = 0 ) { 1 := 1 - H1 [m+j]; ch := true; } if( H3 [m+j] = 0 ) { 3 := 3 - H3 [m+j]; ch := true; } if( H1 [m-j] = 0 ) { u1 := u1 + H1 [m-j]; ch := true; } if( H2 [m-j] = 0 ) { u2 := u2 + H2 [m-j]; ch := true; } if( ch ) // one of 1 , 3 , u1 , u2 has changed { := q max (1 , 3 , u1 , u2 ); } if( ≥ m-j ) { return (m-j)/m’; } } return 0; } return 12 ; END
Theorem 11.55. 2 Let E = ∅ be a finite base set and suppose that Q : P(E) −→ I is a proportional quantifier on E. Further let f denote the associated mapping f : I −→ I and v0 ∈ I the associated constant, see Def. 11.51. In the following, we will consider a crisp set V ∈ P(E), V = ∅. Then in every DFS F, F(Q)(V, X) = F(Q )(X ) ) is the restriction of X to V , i.e. for all X ∈ P(E), where X ∈ P(V
11.9 Implementation of Proportional Quantifiers
µX (e) = µX (e)
331
(11.93)
for all e ∈ V ; and where Q : P(V ) −→ I is the quantitative unary quantifier defined from q : {0, . . . , |V |} −→ I, q (c) = f (c/|V |)
(11.94)
for all c ∈ {0, . . . , |V |}, according to Q (Y ) = q (|Y |) for all Y ∈ P(V ). In other words, the theorem presents a reduction of two-place proportional quantification on the full domain to simple one-place quantification on a restricted domain, which is supplied by the first argument. The reduction is possible whenever the first argument qualifies as a proper domain, i.e. it must be crisp and nonempty. Recognizing this situation promises a boost in performance because the algorithms for unary quantification become applicable, which are much simpler (and consequently, faster) than the generic algorithms for proportional quantification. Apart from efficiency considerations, the theorem is also interesting from a theoretical perspective because it links unrestricted proportional to unrestricted absolute quantification. For purpose of illustration, let us briefly discuss the above example “Most married men are bald” in the context of the theorem. Hence suppose that “most” is defined in terms of f : I −→ I and v0 ∈ I according to Def. 11.51; a possible choice of f and v0 is: f (x) = 1 if x > 12 and f (x) = 0 otherwise; and v0 = 1. The mapping q : {0, . . . , |married men|} −→ I then becomes q (c) = f (c/|married men|) for all c ∈ {0, . . . , |married men|}. For example, if there are ten married men in the considered domain, then 1 : c>5 (11.95) q (c) = 0 : c≤5 for all c ∈ {0, . . . , 10}. Consequently, the derived quantifier Q : P(married men) −→ I becomes 1 : |Y | > 5 Q (Y ) = q (|Y |) = 0 : |Y | ≤ 5 for all Y ∈ P(E). In other words, Q is the one-place quantifier [≥ 6] defined in Def. 11.2. Under the assumption that married men has ten members, the proportional quantification in “Most married men are bald” therefore reduces to the simpler statement, F(most)(married men, bald) = F([≥ 6])(bald ) ,
332
11 Implementation of Quantifiers in the Models
where bald is the restriction of the fuzzy set bald to the new domain married men. Finally we utilize the earlier theorem Th. 11.4 and conclude that F(most)(married men, bald) = µ[6] (bald ) in every standard model. This example completes the discussion of proportional quantification. We are now prepared to evaluate proportional quantifiers efficiently, either based on floating-point or integer arithmetics. The proposed algorithms for proportional quantification in Fowa , M and MCX cover the ‘hard’ case of proportional two-place quantification, i.e. the aggregation can be weighted by importances, which are specified by the first argument of the quantifier. In Th. 11.53, we further improved the earlier analysis of conservative quantifiers given in Th. 11.44, and thus achieved a more efficient computation in the frequent case of those proportional quantifiers which are also known to be monotonic. Finally, we considered the special case of unrestricted proportional quantification, i.e. unary proportional quantification, or proportional quantification restricted by crisp importances. In this case, we described an apparent reduction which renders applicable the algorithms for one-place quantifiers. In those applications which do not need importance qualification, this reduction will result in an additional boost of processing speed.
11.10 Implementation of Cardinal Comparatives This section will discuss a natural class of quantifiers which has received little attention in the literature on fuzzy quantifiers: That of cardinal comparatives [92, p. 305] or comparative determiners [91, p. 123]. This class comprises all NL quantifiers that express a comparison of two cardinalities, which are sampled from two restriction arguments (linguistically realized by two noun phrases A, B) and a common scope argument C (linguistically realized by the verb phrase). For example, the quantifier “more than”, which underlies the pattern “More A’s than B’s are C’s”, is a cardinal comparative. Formally, we define cardinal comparatives as follows. Definition 11.56. 3 Let E = ∅ be a finite base set. A three-place quantifier Q : P(E) −→ I is called a cardinal comparative if Q(Y1 , Y2 , Y3 ) depends on |Y1 ∩Y3 | and |Y2 ∩Y3 | only, i.e. there exists q : {0, . . . , |E|}2 −→ I such that Q(Y1 , Y2 , Y3 ) = q(c1 , c2 ) for all Y1 , Y2 , Y3 ∈ P(E), where c1 = |Y1 ∩ Y3 | and c2 = |Y2 ∩ Y3 |. For example, “more than” can be defined as
(11.96)
11.10 Implementation of Cardinal Comparatives
more than(Y1 , Y2 , Y3 ) =
1 0
: |Y1 ∩ Y3 | − |Y2 ∩ Y3 | > 0 : else
333
(11.97)
for all Y1 , Y2 , Y3 ∈ P(E). The quantifier can be used to interpret statements like “More men than women own a car”, which becomes more than(men, women, own a car) . We can also model quantifiers of the type “exactly k more than”, i.e. 1 : |Y1 ∩ Y3 | − |Y2 ∩ Y3 | = k exactly k more than(Y1 , Y2 , Y3 ) = 0 : else and “at least k more than”, i.e. at least k more than(Y1 , Y2 , Y3 ) =
1 0
: |Y1 ∩ Y3 | − |Y2 ∩ Y3 | ≥ k : else
for all Y1 , Y2 , Y3 ∈ P(E), where k ∈ N. Special cases include “more than” and “same number of”, which can be expressed as more than = at least 1 more than same number of = exactly 0 more than . In particular, a statement like “The same number of men and women are married” now translates into the quantifying expression same number of(men, women, married) . Further examples of cardinal comparatives comprise “less than”, “twice as many”, “at least twice as many”, “many more than”, etc. Resorting to (11.96), the comparatives “(exactly) twice as many” and “at least twice as many” can be expressed in terms of the following mappings q1 and q2 , respectively: 1 : c1 = 2 c2 q1 (c1 , c2 ) = for “exactly twice as many” 0 : else 1 : c1 ≥ 2 c2 for “at least twice as many” q2 (c1 , c2 ) = 0 : else These quantifiers can be used for interpreting NL propositions like “There are exactly twice as many male as female dogs in the kennel” mentioned by [92, p. 305]. A conceivable definition of “many more than” is many more than(Y1 , Y2 , Y3 ) = q(c1 , c2 ) for all Y1 , Y2 , Y3 ∈ P(E), where 0 q(c1 , c2 ) = (c1 − c2 − α) · β 1
: c1 ≤ c2 + α : c2 + α < c1 ≤ c2 + α + 1/β : c2 > c2 + α + 1/β
334
11 Implementation of Quantifiers in the Models
for c1 , c2 ∈ {0, . . . , |E|}. Obviously, suitable choices of α, β ∈ R+ can only be made from the context. However, the rough shape of the resulting quantifier is easily grasped from q, so it should usually not pose problems to select a quantifier which suits the task at hand. Now that we are able to define the cardinal comparatives of NL in terms of semi-fuzzy quantifiers, let us consider the relation RγΦ1 ,Φ2 (X1 , X2 , X3 ), where Φ1 (Y1 , Y2 , Y3 ) = Y1 ∩ Y3 and Φ2 (Y1 , Y2 , Y3 ) = Y2 ∩ Y3 . Specifically, it must be shown how to express this relation in terms of quantities "r , ur sampled from The simple Boolean combinations of the fuzzy arguments X1 , X2 , X3 ∈ P(E). description of this target relation in terms of four Boolean combinations that are required is set forth in the next theorem. Theorem 11.57. Further let Φ1 , Φ2 Let E = ∅ be a finite base set and X1 , X2 , X3 ∈ P(E). denote the Boolean combinations Φ1 (Y1 , Y2 , Y3 ) = Y1 ∩Y3 and Φ2 (Y1 , Y2 , Y3 ) = Y2 ∩ Y3 of the crisp variables Y1 , Y2 , Y3 ∈ P(E). Then the relation R = RγΦ1 ,Φ2 (X1 , X2 , X3 ) defined by (11.34) can be computed from the cardinality min max coefficients "r = |Zr |γ and ur = |Zr |γ sampled from Z1 = Ψ1 (X1 , X2 , X3 ) = X1 ∩ X3 , Z2 = Ψ2 (X1 , X2 , X3 ) = X2 ∩ X3 , Z3 = Ψ3 (X1 , X2 , X3 ) = X1 ∩ ¬X2 ∩ X3 and Z4 = Ψ4 (X1 , X2 , X3 ) = ¬X1 ∩ X2 ∩ X3 . In terms of "1 , "2 , "3 , "4 , u1 , u2 , u3 , u4 ∈ {0, . . . , |E|}, R then becomes R = {(c1 , c2 ) : "1 ≤ c1 ≤ u1 ,
(11.98)
max(c1 − u3 + "4 , "2 ) ≤ c2 ≤ min(c1 − "3 + u4 , u2 )}} .
(11.99)
In practice, it is not necessary to consider all pairs (c1 , c2 ) ∈ R, because those cardinal comparatives actually found in NL conform to simple monotonicity patterns. Specifically, it is possible to simplify the computation of j and ⊥j if the cardinal comparative Q is at least convex in its second argument. It is easily seen that Q is convex in the second argument if and only if q is convex in the second argument, i.e. q(c1 , c2 ) ≥ min(q(c1 , c2 ), q(c1 , c2 )) given that c2 ≤ c2 ≤ c2 . Consequently, there exists cpk = cpk (c1 ) ∈ {0, . . . , |E|} such that q(c1 , c2 ) is nondecreasing in c2 for c2 ≤ cpk , and nonincreasing for c2 ≥ cpk . In particular, cpk maximizes q(c1 , cpk ) given c1 . In the convex case, we therefore have j = max{q(c1 , c∗2 ) : "1 ≤ c1 ≤ u1 } and ⊥j = min{min( q(c1 , max(c1 − u3 + "4 , "2 )), q(c1 , min(c1 − "3 + u4 , u2 ))) : "1 ≤ c1 ≤ u1 } where
11.10 Implementation of Cardinal Comparatives
min(c1 − "3 + u4 , u2 ) c∗2 = c∗2 (c1 ) = max(c1 − u3 + "4 , "2 ) cpk
335
: cpk > min(c1 − "3 + u4 , u2 ) : cpk < max(c1 − u3 + "4 , "2 ) : else
for cpk = cpk (c1 ). This representation permits a simplified computation of j and ⊥j for convex quantifiers like “about two times more than”. Most cardinal comparatives show even simpler monotonicity patterns. For example, “More Y1 ’s than Y2 ’s are Y3 ’s” is nondecreasing in the first and nonincreasing in the second argument, and the quantifier in “Less Y1 ’s than Y2 ’s are Y3 ’s” is nonincreasing in the first and nondecreasing in the second argument. Here we will restrict to the first of these monotonicity patterns, and study the corresponding quantifiers in some more depth; all results of this investigation are readily transfered to the second monotonicity pattern and we will omit the obvious modifications for the sake of brevity. Let us first link the monotonicity of a cardinal comparative Q to the monotonicity of the mapping q on cardinals. Theorem 11.58. 3
Let Q : P(E) −→ I be a cardinal comparative on a finite base set E = ∅. Further let q : {0, . . . , |E|}2 −→ I be the corresponding mapping defined by (11.96), i.e. Q(Y1 , Y2 , Y3 ) = q(|Y1 ∩ Y3 |, |Y2 ∩ Y3 |) for all Y1 , Y2 , Y3 ∈ P(E). Then the following are equivalent: a. Q is nondecreasing in the first and nonincreasing in its second argument; b. q is nondecreasing in its first and nonincreasing in its second argument. Hence the considered monotonicity pattern of cardinal comparatives reflects in a very simple pattern of the mapping q. In turn, knowing these monotonicity properties permits a simplified description of Q,X1 ,X2 ,X3 and ⊥Q,X1 ,X2 ,X3 , because the minimum and maximum must then be assumed at the ‘extreme poles’. Consequently, we obtain the following corollary to Th. 11.57 and Th. 11.58. Theorem 11.59. 3
Let Q : P(E) −→ I be a cardinal comparative on a finite base set E = ∅, and let q be the corresponding mapping q : {0, . . . , |E|}2 −→ I. If Q is nondecreasing in the first and nonincreasing in its second argument, then Q,X1 ,X2 ,X3 (γ) = max{q(c1 , max(c1 − u3 + "4 , "2 )) : "1 ≤ c1 ≤ u1 } ⊥Q,X1 ,X2 ,X3 (γ) = min{q(c1 , min(c1 − "3 + u4 , u2 )) : "1 ≤ c1 ≤ u1 } for all X1 , X2 , X3 ∈ P(E) and γ ∈ I, referring to the cardinality coefficients "r and ur , r ∈ {1, 2, 3, 4} defined in Th. 11.57.
336
11 Implementation of Quantifiers in the Models
The particular algorithms which implement cardinal comparatives in Fowa , M and MCX can easily be developed from this analysis of the considered variety of quantifiers, again utilizing the basic results that were proven for these models in Th. 11.27, Th. 11.30 and Th. 11.31, respectively. In fact, we can proceed in total analogy to the strategy used for absolute and proportional quantifiers. In the general case of unrestricted cardinal comparatives, we can express Q,X1 ,X2 ,X3 and ⊥Q,X1 ,X2 ,X3 , for a given choice of γ ∈ I, as Q,X1 ,X2 ,X3 (γ) = q max ("1 , "2 , "3 , "4 , u1 , u2 , u3 , u4 ) ⊥Q,X1 ,X2 ,X3 (γ) = q min ("1 , "2 , "3 , "4 , u1 , u2 , u3 , u4 ) , where q max ("1 , "2 , "3 , "4 , u1 , u2 , u3 , u4 ) = max{q(c1 , c2 ) : (c1 , c2 ) ∈ Rγ (X1 , X2 , X3 )} q min ("1 , "2 , "3 , "4 , u1 , u2 , u3 , u4 ) = min{q(c1 , c2 ) : (c1 , c2 ) ∈ Rγ (X1 , X2 , X3 )} . These coefficients can readily be computed, because we know the explicit representation of Rγ (X1 , X2 , X3 ), which was established in Th. 11.57. In the typical case that the quantifier of interest belongs to the monotonicity pattern assumed in Th. 11.59 (or the reverse pattern), a considerable cut in processing times becomes possible. In particular, Q,X1 ,X2 ,X3 and ⊥Q,X1 ,X2 ,X3 can then be expressed in the following simplified way, Q,X1 ,X2 ,X3 (γ) = q max ("1 , "2 , "4 , u1 , u3 ) ⊥Q,X1 ,X2 ,X3 (γ) = q min ("1 , "3 , u1 , u2 , u4 ) , where q max and q min now read q max ("1 , "2 , "4 , u1 , u3 ) = max{q(c1 , max(c1 − u3 + "4 , "2 )) : "1 ≤ c1 ≤ u1 } q min ("1 , "3 , u1 , u2 , u4 ) = min{q(c1 , min(c1 − "3 + u4 , u2 )) : "1 ≤ c1 ≤ u1 } , see Th. 11.59. In the algorithms to be presented now, we will consider both the typical monotonic and the general quantifiers, in order to optimally account for both cases. Hence let us recall the analysis of the three prototypical models achieved in theorems Th. 11.27, Th. 11.30 and Th. 11.31, which will now be elaborated into a complete and practical implementation of cardinal comparatives. In order to instantiate the core formulas stated in these theorems, it is again necessary to fix the set of relevant cutting parameters, i.e. to choose some Γ = {γ0 , . . . , γm } such that 0 = γ0 < γ1 < · · · < γm−1 < γm = 1 and Γ ⊇ Γ (X1 , X2 , X3 ). When working with floating-point arithmetics, it is advantageous to minimize the set Γ , thus letting Γ = Γ (X1 , X2 , X3 ). Owing to Th. 11.23, we then know that
11.10 Implementation of Cardinal Comparatives
337
Γ (Z1 ) = Γ (X1 ∩ X3 ) ⊆ Γ Γ (Z2 ) = Γ (X2 ∩ X3 ) ⊆ Γ Γ (Z3 ) = Γ (X1 ∩ ¬X2 ∩ X3 ) ⊆ Γ Γ (Z4 ) = Γ (¬X1 ∩ X2 ∩ X3 ) ⊆ Γ , where Z1 = Φ1 (X1 , X2 , X3 ) = X1 ∩ X3 Z2 = Φ2 (X1 , X2 , X3 ) = X2 ∩ X3 Z3 = Φ3 (X1 , X2 , X3 ) = X1 ∩ ¬X2 ∩ X3 Z4 = Φ4 (X1 , X2 , X3 ) = ¬X1 ∩ X2 ∩ X3 , as stipulated in Th. 11.57. Consequently the histograms HistZr , r ∈ {1, 2, 3, 4}, can be represented by (m + 1)-dimensional arrays Hr+ and Hr− defined in accordance with (11.82) and (11.83). Recalling that the cardinality coefficients min max "r , ur are defined by "r = "r (j) = |Zr |γ j and ur = ur (j) = |Zr |γ j for j ∈ {0, . . . , m − 1}, we can now utilize the incremental rule presented in Th. 11.48 in order to compute these coefficients efficiently. The particular algorithms so obtained, that implement cardinal comparatives based on floating-point arithmetics in the three prototype models, are shown in Table 11.14 (Fowa ), Table 11.15 (M) and Table 11.16 (for MCX ), respectively. These algorithms provide a generic solution for arbitrary cardinal comparatives, because they implement the full variety of these quantifiers without any special requirements on monotonicity properties. Some tags have been included into the pseudo-code, which indicate how the algorithms can be taylored to the prominent monotonicity type that was considered in Th. 11.59. In this typical case of a natural language comparative, a more efficient implementation is possible due to the simplified description of q min and q max Table 11.14. Algorithm for evaluating cardinal comparatives in Fowa (based on floating-point arithmetics) Algorithm for computing Fowa (Q)(X1 , X2 , X3 ) (floating-point arithmetics) INPUT: X1 , X2 , X3 + + + − − − − Compute G, m, H+ 1 , H2 ,H3 ,H4 ,H1 , H2 , H3 ,H4 ; // see text // initialize , u: for( r := 1; r ≤ 4; r := r+1 ) m + { r := H+ r [j]; ur := r + Hr [0]; } j=1
sum := G[1] * (q min (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ) + q max (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 )); for( j := 1; j<m; j := j + 1 ) { // update clauses for 1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 for( r := 1; r ≤ 4; r := r+1 ) − { r := r - H+ r [j]; ur := ur + Hr [j]; } sum := sum + (G[j+1] - G[j]) min * (q (1 , 2 , 3 , u4 , u1 , u2 , u3 , u4 ) + q max (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 )); } return sum / 2; END
338
11 Implementation of Quantifiers in the Models
Table 11.15. Algorithm for evaluating cardinal comparatives in M (based on floating-point arithmetics) Algorithm for computing M(Q)(X1 , X2 , X3 ) (floating-point arithmetics) INPUT: X1 , X2 , X3 + + + − − − − Compute G, m, H+ 1 , H2 ,H3 ,H4 ,H1 , H2 , H3 , H4 ; // see text // initialize , u: for( r := 1; r ≤ 4; r := r+1 ) m + { r := H+ r [j]; ur := r + Hr [0]; } j=1
:= q max (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); ⊥ := q min (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); if( ⊥ > 12 ) { sum := G[1] * ⊥; for( j := 1; j<m; j := j + 1 ) { // update clauses for and u: 1 := 1 - H+ 1 [j]; 2 := 2 - H+ 2 [j]; // [*] 3 := 3 - H+ 3 [j]; 4 := 4 - H+ 4 [j]; // [*] u1 := u1 + H− 1 [j]; u2 := u2 + H− 2 [j]; u3 := u3 + H− 3 [j]; // [*] u4 := u4 + H− 4 [j]; min ⊥ := q (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); // [1] min // ⊥ := q (1 , 3 , u1 , u2 , u4 ); // [2] if( ⊥ ≤ 12 ) break; sum := sum + (G[j+1] - G[j]) * ⊥; } } else if( < 12 ) { sum := G[1] * ; for( j := 1; j<m; j := j + 1 ) { // update clauses for and u: 1 := 1 - H+ 1 [j]; 2 := 2 - H+ 2 [j]; 3 := 3 - H+ 3 [j]; // [+] 4 := 4 - H+ 4 [j]; u1 := u1 + H− 1 [j]; u2 := u2 + H− 2 [j]; // [*] u3 := u3 + H− 3 [j]; u4 := u4 + H− 4 [j]; // [*] max := q (1 , 2 , 3 , u4 , u1 , u2 , u3 , u4 ); // [1] // := q max (1 , 2 , u4 , u1 , u3 ); // [2] if( ≥ 12 ) break; sum := sum + (G[j+1] - G[j]) * ; } } else { return 12 ; } return sum + 12 * (1 - G[j]); END
11.10 Implementation of Cardinal Comparatives
339
Table 11.16. Algorithm for evaluating cardinal comparatives in MCX (based on floating-point arithmetics) Algorithm for computing MCX (Q)(X1 , X2 , X3 ) (floating-point arithmetics) INPUT: X1 , X2 , X3 + + + − − − − Compute G, m, H+ 1 , H2 ,H3 ,H4 ,H1 , H2 , H3 ,H4 ; // see text // initialize , u: for( r := 1; r ≤ 4; r := r+1 ) m + { r := H+ r [j]; ur := r + Hr [0]; } j=1
:= q max (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); ⊥ := q min (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); j := 0; if( ⊥ > 12 ) { B := 2 * ⊥ - 1; while( B > G[j+1] ) { j := j + 1; // update clauses for , u: 1 := 1 - H+ 1 [j]; 2 := 2 - H+ 2 [j]; // [*] 3 := 3 - H+ 3 [j]; 4 := 4 - H+ 4 [j]; // [*] u1 := u1 + H− 1 [j]; u2 := u2 + H− 2 [j]; u3 := u3 + H− 3 [j]; // [*] u4 := u4 + H− 4 [j]; B := 2 * q min (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ) - 1; // [1] // B := 2 * q min (1 , 3 , u1 , u2 , u4 ) - 1; // [2] } return 12 + 12 max(B, G[j]); } else if( < 12 ) { B := 1 - 2 * ; while( B > G[j+1] ) { j := j + 1; // update clauses for , u: 1 := 1 - H+ 1 [j]; 2 := 2 - H+ 2 [j]; 3 := 3 - H+ 3 [j]; // [*] 4 := 4 - H+ 4 [j]; u1 := u1 + H− 1 [j]; u2 := u2 + H− 2 [j]; // [*] u3 := u3 + H− 3 [j]; u4 := u4 + H− 4 [j]; // [*] B := 1 - 2 * q max (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); // [1] // B := 1 - 2 * q max (1 , 2 , 4 , u1 , u3 ); // [2] } return 12 - 12 max(B, G[j]); } return 12 ; END
presented above. Consequently, some of the coefficients "r and ur need not be tracked, because they take no effect on q min or q max . Thus the corresponding ‘update clauses’ for these quantities can be dropped; i.e. the lines with a trailing [*] can be eliminated. In addition, the lines marked with [1] should be replaced with those labelled [2], which refer to the simplified formulas for computing q min and q max in the monotonic case. These simple changes will boost performance for virtually all cardinal comparatives in natural languages (noticing that quantifiers of the reverse monotonicity type, like “less than”,
340
11 Implementation of Quantifiers in the Models
can be treated along the same lines), and result in a practical implementation of cardinal comparatives which is ready for use in the targeted applications of fuzzy quantifiers. Having presented a solution for floating-point arithmetics, we will now add variants of the above algorithms for integer arithmetics. Hence let Q be a cardinal comparative (not required to be monotonic) and let X1 , X2 , X3 ∈ P(E) be given fuzzy arguments. Further suppose that m ∈ N \ {0} is given, which specifies the available range of integers. The given choice of m again restricts admissible fuzzy arguments to a finite set of equidistant membership grades in the unit interval, i.e. all legal choices of arguments must satisfy µXi (e) ∈ Um for all e ∈ E and i ∈ {1, 2, 3}, where Um is defined as before, see (11.85). By pursuing the same strategy that already proved useful for deriving the integer-based implementation of unary quantifiers (see pp. 315–317) and also of proportional quantifiers, we have also developed the corresponding solutions which implement cardinal comparatives based on integer arithmetics. The resulting variants of the above algorithms are shown in Table 11.17 (Fowa ), Table 11.18 (M), and Table 11.19 (for MCX ). The listings are again annotated by tags [*], [1], [2], in order to indicate the possible optimizations for monotonic quantifiers. In this case, the update clauses annotated by [*] can again be deleted, and lines marked with [1] should be replaced by the simplified variants [2]. Table 11.17. Algorithm for evaluating cardinal comparatives in Fowa (based on integer arithmetics) Algorithm for computing Fowa (Q)(X1 , X2 , X3 ) (integer arithmetics) INPUT: X1 , X2 , X3 // initialise Hr , , u H1 := HistX1 ∩X3 ; H2 := HistX2 ∩X3 ; H3 := HistX1 ∩¬X2 ∩X3 ; H4 := Hist¬X1 ∩X2 ∩X3 ; for( r := 1; r ≤ 4; r := r+1 ) m Hr [m+j]; ur := r + Hr [m]; } { r := j=1
cq := q min (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ) + q max (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); sum := cq; for( j := 1; j<m; j := j + 1 ) { ch := false; // "change" // update clauses for and u: for( r := 1; r ≤ 4; r := r+1 ) { if( Hr [m+j] = 0 ) { r := r - Hr [m+j]; ch := true;} if( Hr [m-j] = 0 ) { ur := ur + Hr [m-j]; ch := true; } } if( ch ) // one of the r or ur has changed { cq := q min (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ) + q max (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); } sum := sum + cq; } return sum / m’; // where m’ = 2*m END
11.10 Implementation of Cardinal Comparatives
341
Table 11.18. Algorithm for evaluating cardinal comparatives in M (based on integer arithmetics) Algorithm for computing M(Q)(X1 , X2 , X3 ) (integer arithmetics) INPUT: X1 , X2 , X3 // initialise Hr , , u H1 := HistX1 ∩X3 ; H2 := HistX2 ∩X3 ; H3 := HistX1 ∩¬X2 ∩X3 ; H4 := Hist¬X1 ∩X2 ∩X3 ; for( r := 1; r ≤ 4; r := r+1 ) m Hr [m+j]; ur := r + Hr [m]; } { r := j=1
:= q max (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); ⊥ := q min (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); if( ⊥ > 1 ) { 2 sum := ⊥; for( j := 1; j<m; j := j + 1 ) { nc:= true; // "no change" // update clauses for , u: if( H1 [m+j] = 0 ) { 1 := 1 - H1 [m+j]; nc:= false; } if( H2 [m+j] = 0 ) // [*] { 2 := 2 - H2 [m+j]; nc:= false; } // [*] if( H3 [m+j] = 0 ) { 3 := 3 - H3 [m+j]; nc:= false; } if( H4 [m+j] = 0 ) // [*] { 4 := 4 - H4 [m+j]; nc:= false; } // [*] if( H1 [m-j] = 0 ) { u1 := u1 + H1 [m-j]; nc:= false; } if( H2 [m-j] = 0 ) { u2 := u2 + H2 [m-j]; nc:= false; } if( H3 [m-j] = 0 ) // [*] { u3 := u3 + H3 [m-j]; nc:= false; } // [*] if( H4 [m-j] = 0 ) { u4 := u4 + H4 [m-j]; nc:= false; } if( nc) { sum := sum + ⊥; continue; } // one of the r , ur has changed ⊥ := q min (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); // [1] // ⊥ := q min (1 , 3 , u1 , u2 , u4 ); // [2] if( ⊥ ≤ 1 ) break; 2 sum := sum + ⊥; } } else if( < 1 ) { 2 sum := ; for( j := 1; j<m; j := j + 1 ) { nc:= true; // update clauses for and u: if( H1 [m+j] = 0 ) { 1 := 1 - H1 [m+j]; nc:= false; } if( H2 [m+j] = 0 ) { 2 := 2 - H2 [m+j]; nc:= false; } if( H3 [m+j] = 0 ) // [*] { 3 := 3 - H3 [m+j]; nc:= false; } // [*] if( H4 [m+j] = 0 ) { 4 := 4 - H4 [m+j]; nc:= false; } if( H1 [m-j] = 0 ) { u1 := u1 + H1 [m-j]; nc:= false; } if( H2 [m-j] = 0 ) // [*] { u2 := u2 + H2 [m-j]; nc:= false; } // [*] if( H3 [m-j] = 0 ) { u3 := u3 + H3 [m-j]; nc:= false; } if( H4 [m-j] = 0 ) // [*] { u4 := u4 + H4 [m-j]; nc:= false; } // [*] if( nc) { sum := sum + ; continue; } // one of the r , ur has changed := q max (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); // [1] // := q max (1 , 2 , 4 , u1 , u3 ); // [2] if( ≥ 1 ) break; 2 sum := sum + ; } } else { return 1 ; } 2 *(m-j)) / m; return (sum + 1 2 END
342
11 Implementation of Quantifiers in the Models
Table 11.19. Algorithm for evaluating cardinal comparatives in MCX (based on integer arithmetics) Algorithm for computing MCX (Q)(X1 , X2 , X3 ) (integer arithmetics) INPUT: X1 , X2 , X3 // initialise Hr , , u H1 := HistX1 ∩X3 ; H2 := HistX2 ∩X3 ; H3 := HistX1 ∩¬X2 ∩X3 ; H4 := Hist¬X1 ∩X2 ∩X3 ; for( r := 1; r ≤ 4; r := r+1 ) m Hr [m+j]; ur := r + Hr [m]; } { r := j=1
:= q max (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); ⊥ := q min (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); if( ⊥ > 1 ) { 2 for( j := 1; j<m; j := j + 1 ) { ch := false; // "change" // update clauses for and u: if( H1 [m+j] = 0 ) { 1 := 1 - H1 [m+j]; ch := true; } if( H2 [m+j] = 0 ) // [*] { 2 := 2 - H2 [m+j]; ch := true; } // [*] if( H3 [m+j] = 0 ) { 3 := 3 - H3 [m+j]; ch := true; } if( H4 [m+j] = 0 ) // [*] { 4 := 4 - H4 [m+j]; ch := true; } // [*] if( H1 [m-j] = 0 ) { u1 := u1 + H1 [m-j]; ch := true; } if( H2 [m-j] = 0 ) { u2 := u2 + H2 [m-j]; ch := true; } if( H3 [m-j] = 0 ) // [*] { u3 := u3 + H3 [m-j]; ch := true; } // [*] if( H4 [m-j] = 0 ) { u4 := u4 + H4 [m-j]; ch := true; } if( ch ) // one of the r , ur has changed { ⊥ := q min (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); } // [1] // { ⊥ := q min (1 , 3 , u1 , u2 , u4 ); } // [2] if( ⊥ ≤ m+j ) { return (m+j)/m’; } } return 1; } ) { else if( < 1 2 for( j := 1; j<m; j := j + 1 ) { ch := false; // update clauses for and u: if( H1 [m+j] = 0 ) { 1 := 1 - H1 [m+j]; ch := true; } if( H2 [m+j] = 0 ) { 2 := 2 - H2 [m+j]; ch := true; } if( H3 [m+j] = 0 ) // [*] { 3 := 3 - H3 [m+j]; ch := true; } // [*] if( H4 [m+j] = 0 ) { 4 := 4 - H4 [m+j]; ch := true; } if( H1 [m-j] = 0 ) { u1 := u1 + H1 [m-j]; ch := true; } if( H2 [m-j] = 0 ) // [*] { u2 := u2 + H2 [m-j]; ch := true; } // [*] if( H3 [m-j] = 0 ) { u3 := u3 + H3 [m-j]; ch := true; } if( H4 [m-j] = 0 ) // [*] { u4 := u4 + H4 [m-j]; ch := true; } // [*] if( ch ) // one of the r , ur has changed { := q max (1 , 2 , 3 , 4 , u1 , u2 , u3 , u4 ); } // [1] // { := q max (1 , 2 , 4 , u1 , u3 ); } // [2] if( ≥ m-j ) { return (m-j)/m’; } } return 0; } ; return 1 2 END
11.10 Implementation of Cardinal Comparatives
343
Finally let us consider a special case of comparative quantifiers which permits additional simplifications.6 In many cases, a cardinal comparative Q(Y1 , Y2 , Y3 ) can be expressed in terms of the difference c1 − c2 = |Y1 ∩ Y3 | − |Y2 ∩ Y3 |. For example, this is the case with “more than”, cf. (11.97) above. The simplicity of these quantifiers translates into rules of computation that are equally simple. In particular, we need not know the complete relation RγΦ1 ,Φ2 (X1 , X2 , X3 ) = {(|Y1 ∩ Y3 |, |Y2 ∩ Y3 |) : (Y1 , Y2 , Y3 ) ∈ Tγ (X1 , X2 , X3 )} in order to compute Q,X1 ,X2 ,X3 and ⊥Q,X1 ,X2 ,X3 . By contrast, it is sufficient to know the set of differences D = {|Y1 ∩ Y3 | − |Y2 ∩ Y3 | : (Y1 , Y2 , Y3 ) ∈ Tγ (X1 , X2 , X3 )} . Hence let us describe the latter set directly, which might pay off from the efficiency point of view because all choices of arguments (Y1 , Y2 , Y3 ), (Y1 , Y2 , Y3 ) ∈ Tγ (X1 , X2 , X3 ) with |Y1 ∩ Y3 | − |Y2 ∩ Y3 | = |Y1 ∩ Y3 | − |Y2 ∩ Y3 | now become identified, i.e. it is sufficient to consider one representative only. The new format therefore lets us eliminate some computational redundancy. The next theorem shows how to compute the set of differences: Theorem 11.60. and γ ∈ I. Then Let E = ∅ be a finite base set, X1 , X2 , X3 ∈ P(E) {c1 − c2 : (Y1 , Y2 , Y3 ) ∈ Tγ (X1 , X2 , X3 ), c1 = |Y1 ∩ Y3 |, c2 = |Y2 ∩ Y3 |} = {d : "3 − u4 ≤ d ≤ u3 − "4 } , referring to the cardinality coefficients "3 , "4 , u3 and u4 introduced in Th. 11.57. The latter theorem has some obvious applications. If a cardinal comparative which depends on D only is convex in its second argument, then q : Z −→ I with Q(Y1 , Y2 , Y3 ) = q(|Y1 ∩ Y3 | − |Y2 ∩ Y3 |) is also convex. Hence there exists a maximum dpk ∈ Z such that q is nondecreasing for d ≤ dpk and nonincreasing for d ≥ dpk . We then have : "3 − u4 ≤ dpk ≤ u3 − "4 q(dpk ) j = q max ("3 , "4 , u3 , u4 ) = q(u3 − "4 ) : dpk > u3 − "4 q("3 − u4 ) : dpk < "3 − u4 ⊥j = q min ("3 , "4 , u3 , u4 ) = min(q("3 − u4 ), q(u3 − "4 )) . 6
These coincide with the narrower concept of absolute comparative quantifiers proposed in [29].
344
11 Implementation of Quantifiers in the Models
These formulas speed up the computation of j and ⊥j for quantifiers like “about ten more than”. If Q even conforms to the above monotonicity pattern (nondecreasing in the first and nonincreasing in the second argument), then q : Z −→ I is nondecreasing. In this case, the computation simplifies to j = q(u3 − "4 )
and ⊥j = q("3 − u4 ) .
For example, consider the quantifier “more than” defined by (11.97). We now observe that the quantifier can be expressed directly in terms of d = c1 − c2 , i.e. more than(Y1 , Y2 , Y3 ) = q(d) , where q : Z −→ {0, 1} is the mapping 1 q(d) = 0
: d>0 : d≤0
for all d ∈ Z. We notice that q is nondecreasing in d. By the above reasoning, then, the upper and lower bound mappings more than,X1 ,X2 ,X3 and ⊥more than,X1 ,X2 ,X3 reduce to 1 : u3 − "4 > 0 more than,X1 ,X2 ,X3 (γ) = q(u3 − "4 ) = 0 : u3 − "4 ≤ 0 1 : "3 − u4 > 0 ⊥more than,X1 ,X2 ,X3 (γ) = q("3 − u4 ) = 0 : "3 − u4 ≤ 0 for all γ ∈ I. Substituting these expressions for q max and q min in the above algorithms will speed up the implementation of the comparative “more than”. Other cardinal comparatives like “less than”, “at least k more than” and “exactly k more than” can be treated analogously. This completes the discussion of cardinal comparatives. A formal definition of these quantifiers has been presented which abstracts from the NL examples, and it has been shown how the resulting quantifiers can be implemented in the proposed framework. Some possible optimizations of the algorithms have also been described, which become admissible when the quantifier exhibits the typical monotonicity pattern. In presenting the formal definition of cardinal comparatives, and showing how they can be implemented, we have augmented the classes of fuzzy quantifiers known to fuzzy set theory. The new class is an interesting one, because comparisons of cardinalities are not only frequently seen in NL, but also of obvious relevance to applications.
11.11 Chapter Summary and Application Examples This chapter has shown that the improved models presented in this monograph can also compete with earlier approaches to fuzzy quantification when
11.11 Chapter Summary and Application Examples
345
it comes to efficient implementation. In order to provide the theory with the required computational backing, we developed the basic techniques for implementing quantifiers in arbitrary models of the Fξ -type.7 Three prototypical models were chosen to demonstrate these techniques. The model MCX shows excellent formal properties and is related to the ‘basic’ FG-count approach and the Sugeno integral. The model Fowa is mainly interesting because it extends the ‘basic’ OWA approach and the Choquet integral; in addition, it is a typical Fξ -DFS with properties different from those of MB -models. The model M was chosen because its behaviour is somewhat in between Fowa and MCX : Like Fowa , it uses the integral to abstract from cut levels; however, the simple averaging (Q,X1 ,...,Xn + ⊥Q,X1 ,...,Xn )/2 of Fowa is replaced with the fuzzy median aggregation also used in MCX . In addition, the model M has been used in the experimental retrieval system for weather documents from which the examples in this monograph are taken [53, 60]. In order to facilitate the development of algorithms for other models than M, MCX and Fowa , the chapter was especially concerned with the development of building blocks which will be useful for implementing quantifiers in arbitrary Fξ -models. The first problem to be solved was the following: It is practically impossible to consider all choices of γ in the continuous range [0, 1] for computing a quantification result. We have therefore shown that in the relevant cases, it is possible to restrict to a finite sample of cutting levels which fully determine the quantification result. In particular, this can always be done for finite base sets, and the sample of cut-levels then takes a very simple form. Having solved the problem of continuous cut ranges, we then expressed the prototypical models in terms of the finite sample of cut levels. This reformulation of an Fξ model in terms of the finite sample of j ’s and ⊥j ’s is the chief task to be accomplished when implementing a new model. As soon as the model is written as a function of the j ’s and ⊥j ’s, it becomes possible to compute quantification results because everything has now turned finite. Some further refinements are necessary to obtain an efficient algorithm which not only computes the correct results, but is also useful in practice. Specifically, the computation of quantification results based on j and ⊥j will only be efficient if the quantities j and ⊥j can also be calculated efficiently. Computing these in a naive way from the three-valued cuts is not a viable approach, because the total cut range Tγ (X1 , . . . , Xn ) will grow beyond computational limits for base sets of realistic size. The three-valued cutting mechanism, which is essentially supervaluationist, considers all possible alternatives and thus involves a large number of calculations at each cut level. The actual quantification result, however, usually depends only on a small sample of these alternatives. In order to avoid these redundant computations, we focused on quantitative (i.e. automorphism 7
The possible extension to FΩ -DFSes would be pointless due to their discontinuous behaviour.
346
11 Implementation of Quantifiers in the Models
invariant) quantifiers, which comprise almost all examples of relevance to technical application. We first reviewed a well-known result of TGQ which states that every quantitative two-valued quantifier can be expressed in terms of the cardinalities of its arguments and their Boolean combinations. A corresponding result can be proven for semi-fuzzy quantifiers: These are also quantitative if and only if they are reducible to cardinalities of Boolean combinations, i.e. the quantifiers can be expressed by a mapping q : {0, . . . , |E|}K −→ I, see Th. 11.32. We then proved in Ths. 11.40–11.43 that the quantification result Fξ (Q)(X1 , . . . , Xn ) of an Fξ -model can always be computed from the defining mapping q and from pairs of upper and lower cardinality indices ur and "r , which are sampled from Boolean combinations of the arguments (based on the standard fuzzy intersection and complement). Some optimizations have then been described which simplify these formulas for the important types of quantifiers. In Th. 11.44, we refined the general analysis for conservative quantifiers; the resulting formulas have been used for the efficient implementation of proportional quantifiers. In Th. 11.57, we further improved the general analysis of cardinal comparatives, the prime case of three-place quantifiers. These advances in the analysis of quantitative quantifiers establish the second component needed for efficient implementation. To sum up, the improved analysis of quantitative quantifiers avoids the direct computation of the threevalued cuts. Quantification results can now be calculated from cardinality information represented by pairs of coefficients "r and ur , and it is no longer necessary to check all alternatives in the total cut range Tγ (X1 , . . . , Xn ). Having reduced the computation of j and ⊥j to cardinality indices ur and "r , it then became essential that these coefficients be computed as efficiently as possible. We therefore developed a fast mechanism for calculating the required cardinality information. The goal was that of avoiding the explicit computation of the sets Zγmin and Zγmax , as well as the subsequent computation of min max their element counts |Z|γ and |Z|γ , which determine the current coeffimin
max
cients "(j) = |Z|γ and u(j) = |Z|γ for the given γ = γ j . In order to avoid repeated element counting, the cardinalities of the layers {e ∈ E : µZ (e) = α} will now be precomputed for all relevant choices of the membership grade α. The resulting pairs of membership grades and associated cardinalities can be represented by the histogram of Z. We then developed very efficient update rules for the coefficients u and " which compute the upper and lower cardinalities associated with the three-valued cut from the histogram of the involved fuzzy subset. The basic techniques for implementing quantifiers in Fξ -DFSes were then combined to complete algorithms for implementing the main types of NL quantifiers in the prototypical models. The supported quantifiers comprise the familiar types of absolute and proportional quantifiers but also quantifiers of exception like “all except k” and cardinal comparatives like “much more than”. These types received few attention in the existing literature on fuzzy quantifiers. Zadeh [199, p. 757], for
11.11 Chapter Summary and Application Examples
347
example, mentions the quantifier “more than” when explaining his ‘third kind’ of fuzzy quantifiers, but he does not attempt an interpretation. Comparisons of fuzzy cardinalities have been discussed by Dubois and Prade [42], but they were not viewed as a special form of quantification. In any case, the algorithms for cardinal comparatives are the first systematic and complete implementation of an important class of quantifiers ‘of the third kind’ in the sense of Zadeh [199, p. 757]. The successful implementation of such diverse types of quantifiers in the models demonstrates that the proposed approach to fuzzy quantification is not only theoretically appealing, but also viable in practice. In particular, we have shown that the featured models Fowa , M and MCX are computational, by describing efficient algorithms which implement the relevant types of quantifiers in these models. The chapter also described alternative versions of the algorithms based on integer arithmetics. These variants can be very efficient if the considered integer range is small compared to the size of the domain. Operations on intensity images with millions of pixels and a limit to eight bit accuracy are a prime example. Due to their clear structure, the proposed algorithms are easily extended to additional classes of quantifiers. In addition, the algorithms can provide a blueprint for implementing other types of models. It is also instructive to consider the issue of computational complexity. Let N = |E| denote the cardinality of the base set and m = |{γ0 , . . . , γm }| the number of relevant cutting levels. We consider separately the the complexity of pre-computing Hr+ and Hr− , which is the same for all quantifiers, and the complexity of the main algorithms which operate on the pre-computed histograms. Due to the use of floating-point arithmetics, the histograms must be stored in some kind of tree structure rather than the fixed-size arrays used for histograms of discrete levels. Thus, the complexity of the computing the histograms is O(N log m) in this case.8 The complexity of the main loop is identical for Fowa , M and MCX . However, it crucially depends on the type of quantifier being evaluated, which decides how efficiently j = q max (. . . ) and ⊥j = q min (. . . ) will be computed from the cardinality coefficients. The results for the considered types of quantifiers and the effects of the described optimizations are shown in Table 11.20. Here, absolute quantifiers and quantifiers of exception show the same pattern as quantitative unary quantifiers because of their reductions (11.87) and (11.88) to the unary case. The results labelled ‘† ’ can be achieved if Q is a function of |Y1 ∩ Y3 | − |Y2 ∩ Y3 |. In closing this chapter, we will discuss two examples of fuzzy quantification which demonstrate that the proposed algorithms are indeed useful and efficient. The first example uses fuzzy quantifiers to support weighted querying of retrieval systems. In information retrieval, one is typically faced with a large number of unstructured documents or document representatives (database records). The retrieval system is expected to rank the documents according 8
as opposed to O(N ) for integer arithmetics, where the data can be stored in a simple array during computation.
348
11 Implementation of Quantifiers in the Models
Table 11.20. Complexity of the algorithms for Fowa , M and MCX : Main loop Type (Monot.) unary quant. (general case) (convex) (monotonic) absolute exception proportional (general case) (convex) (monotonic) abs. comparative (general case) (convex) (monotonic)
Example
Complexity
There is an even number of X’s There are about 50 X’s There are at least 20 X’s About 50 X1 ’s are X2 ’s All except ten 10 X1 ’s are X2 ’s
O(m N ) O(m) O(m) like unary quantitative like unary quantitative
|X1 ∩ X2 |/|X1 | is either 0.1 or 0.2 About 50% of the X1 ’s are X2 ’s Most X1 ’s are X2 ’s
O(m N 2 ) O(m N ) O(m)
|X1 ∩ X3 | − |X2 ∩ X3 | is prime Twice the X1 than X2 ’s are X3 ’s More X1 ’s than X2 ’s are X3 ’s
O(m N 2 ) resp. O(m N )† O(m N ) resp. O(m)† O(m)
to the search criterion, and to present them sorted by their relevance to the user query. In this process, fuzzy quantifiers can be useful as aggregation operators which support importance qualification. We then have a number of subcriteria which comprise the domain of quantification, E. Each criterion e ∈ E has an associated importance µW (e) ∈ I and an associated grade of applicability µA,d (e) with respect to the considered document d. These im Based on portance grades can be organized into fuzzy subsets W, Ad ∈ P(E). 2 a semi-fuzzy quantifier Q : P(E) −→ I, e.g. Q = almost all, we can then express the aggregation score 'd which corresponds to the NL statement “Q important criteria apply to the document d”. The ranking score then becomes 'd = Q(W, Ad ) , = F(Q) for the chosen DFS F. The documents d ∈ D of the docwhere Q ument base D will then be sorted according to their relevance grade 'd and presented in decreasing order of relevance (most relevant first). For example, we can determine the documents to which most relevant criteria e ∈ E apply by using the ranking criterion 'd = m ost(W, Ad ) . In this case, the fuzzy quantifier m ost = F(most) is independent of the assumed standard model, see (2.2) and Th. 5.22. This basic approach has been utilized for improving retrieval performance in a meta-search engine for bibliographic databases [65]. In this system, the
11.11 Chapter Summary and Application Examples
349
possible combinations of search terms are no longer limited to Boolean operators; apart from the Boolean connectives, the users can also choose quantifiers like “almost all”, “many” and “a few”. In addition, all terms can be weighted by their importance with respect to the aggregation. Similar techniques have been employed by Bordogna and Pasi [19] in order to support more powerful queries to information retrieval systems which also include fuzzy quantification. Further examples which demonstrate the benefits of fuzzy quantification to the querying of information systems are [23, 84, 86, 135]. It should be emphasized, though, that these applications resort to the Σcount or OWA approaches, and therefore do not profit from the improvements of fuzzy quantification achieved in this work. The proposed method for weighted aggregation with fuzzy quantifiers is not restricted to textual documents and classical information retrieval. The following example starts from a meaningful query to an image database, which involves the use of fuzzy quantification. The document base assumed in the example again consists of digitized satellite images, from which corresponding images of cloudiness situations have been extracted. The goal is to rank these cloudiness situations according to a criterion like “Q of Southern Germany is cloudy”, where Q is a quantifier like “more than 20 percent” or “as much as possible”. We thus have a uniform criterion, i.e. the fuzzy predicate “cloudy”, which is applied repeatedly and evaluated for all pixels in the image of cloudiness grades. The global criterion therefore expresses some kind of aggregation over the spatial region of Southern Germany (see Fig. 1.2). The criterion results in a single scalar evaluation for each image. Consequently, the aggregation totally eliminates the spatial dimension. It results in a single numerical score 'd which determines the relevance of each image under the query. Let us now consider the case that Q = as much as possible, which corresponds to the ranking criterion “As much as possible of Southern Germany is cloudy”. Because we are working with discrete, digitized images, we will first translate the original criterion into a corresponding condition on discrete pixels in the digitized image, viz “As many pixels as possible that belong to Southern Germany are classified as cloudy”. The ranking is then computed by determining the document score 'd of each image d in the database, and by presenting the images in decreasing order of their associated scores. The processing of the quantifier is delegated to the model M presented in Chap. 7. Thus the score 'd of the considered image d is given by 'd = M(as many as possible)(Southern Germany, cloudyd ) ,
350
11 Implementation of Quantifiers in the Models
Fig. 11.1. Ranking computed for query “As much as possible of Southern Germany is cloudy”. Most relevant results in top left corner, quality of match decreases from left to right and from top to bottom
where as many as possible is the quantifier defined by (2.9), which denotes the relative share, and cloudyd is the fuzzy region of cloudiness grades depicted in image d. The example query was then run on twelve images, a random sample of the images in the document base. The resulting ranking of cloudiness situations under the criterion “As much as possible of Southern Germany is cloudy” is presented in Fig. 11.1. As opposed to the results of existing approaches in the image ranking task shown in the introduction, the computed ordering appears to be in good conformance with our expectations, i.e. the resulting image sequence shows a steady decrease of cloudiness in Southern Germany. This positive result is not suprising, though, because the important expectations on quantification results have been encoded in the axiom system. The example also provides experimental data on processing times. Due to the large size of the base set (all pixel positions in the images) and the modest demands on accuracy, we applied the algorithm based on integer arithmetics with 8 bit accuracy (see Table 11.12). In order to compute the ranking, a quantification result had to be calculated for each image, based on the domain of 219063 (411x533) pixel coordinates. This took approx. 0.03 seconds processing time per image on an AMD Athlon 600 MHz PC running Linux, assuming that the image is already loaded into the working memory. Fuzzy quantifiers are not only useful for weighted aggregation and importance qualification; they are also important tools for linguistic data summarization. Following Yager [172], the linguistic summary is usually constructed from a linguistic quantifier, which represents the quantity in agreement, and from a natural language predicate, which functions as the summarizer . For example, consider the following linguistic summary cited from Kacprzyk and Strykowski [83, p. 31], who detail a system for generating summaries of sales data at a computer retailer.
11.11 Chapter Summary and Application Examples
351
“Much sales of components is with a high commission”. The summary can be split into the domain ‘sales of components’, the quantity in agreement ‘much’ and finally the summarizer, ‘[sales] with a high commission’. We can therefore express the summary in terms of the quantifying expression much(sales with a high commission) , must be a suitable fuzzy quantifier defined on the universe of where much component sales. Of course, it is also possible to use two-place quantifiers for constructing summaries. For example, imagine a summary “Much recent sales of components is with a high commission”. In this case, we have an additional summarizer ‘recent [sales of components]’ which expresses a soft constraint on those sales of components covered by the summary. Such relative summaries can be modelled by a quantifying expression built from a two-place quantifier, viz much(recent sales of components, sales with a high commission) . Experimental systems for linguistic data summarization which fit into this general framework are described in [82, 83, 135, 202]. These systems rely on structured, relational representations, and are therefore taylored to classical database applications. However, data summarization with fuzzy quantifiers needs not be based on relational data. The same techniques can also be used to summarize image sequences, for example. In order to illustrate this, let us consider a large collection of intensity images (each also annotated by a ‘time stamp’), which represent cloudiness situations. We will assume that users of a weather information system are interested in the weather in certain ‘natural’ geographic areas and the distribution and flow of weather in certain ‘natural’ time intervals. Here ‘natural’ means that these regions of interest have an associated cognitive representation, and can therefore be expressed linguistically. Hence consider a user interested in characteristics of the recent weather situation. Let us further assume that the user expresses the time of interest in natural language by specifying the temporal summarizer “in the last days”. The chosen term “in the last days” then imposes a soft constraint on those images in the database which must be considered in the summary. For purposes of demonstration, we shall limit ourselves to a small sequence of cloudiness situations, which is depicted in the upper row of Fig. 11.2. The relevance of these images with respect to the criterion “in the last days” is also shown in the figure. The individual pieces of information must now be condensed by generating image summaries. Fuzzy two-place quantifiers are suitable operators to accomplish this summarization because they can incorporate the
352
11 Implementation of Quantifiers in the Models
importances expressed by “in the last days”. The criteria for generating the summary images thus take the form “Q-times cloudy in the last days”, where Q is the chosen quantifier. In order to describe the data from several perspectives, the system is supposed to apply several quantifiers Q, each of which highlights a certain characteristic of the images. The summary images will be generated as follows. We start from the base set E of images representing cloudiness situations. It is understood that these images share the same set of pixel coordinates, i.e. all images have the same dimensions. The images are also annotated with a time stamp. The time stamp which judges membership of the can be used to define a fuzzy set X1 ∈ P(E), images in the fuzzy time interval “in the last days”. The summary criterion 2 must be specified in terms of a semi-fuzzy quantifier Q : P(E) −→ I. The target image is defined on the same set of pixel coordinates as the original images. It is generated by pixel-wise application of the following procedure. which We have already fixed the quantifier Q and the restriction X1 ∈ P(E) encodes the importances. In dependence on the given pixel p, we define a where µX2,p (e) is the degree of cloudiness second fuzzy subset X2,p ∈ P(E), observed in cloudiness situation e at the location of p, i.e. the degree to which p is classified as ‘cloudy’. By applying the quantifier to the restriction X1 and scope X2,p , we obtain the intensity of the considered pixel in the result image, which is the score of the criterion “Q-times cloudy in pixel p in the last days”. In other words, the resulting image R has pixel intensities Rp = F(Q)(X1 , X2,p ) , for all pixels p, where F is the chosen model of fuzzy quantification. Here we shall use F = M, which was the only known DFS at the time the experiments were carried out, but the other models Fowa and MCX are also conceivable choices. Noticing that every quantifier will highlight a certain regularity observed in the images, it makes sense to use more than one quantifier to achieve a richer description of the data. Here we will use the universal quantifier for modelling “always”; the existential quantifier for modelling “at least once”; and some intermediate quantifiers for modelling “sometimes” (i.e., at least a few times), “often” (assumed to increase linearly with frequency) and “almost always”. The latter quantifiers were all derived from a trapezoidal 2 proportional quantifier trpa,b,c : P(E) −→ I defined by ta,b (|Y1 ∩ Y2 |/|Y1 |) : Y1 = ∅ trpa,b,c (Y1 , Y2 ) = c : Y1 = ∅ : z
b
11.11 Chapter Summary and Application Examples
353
Data:
0.2
0.4
0.6
0.7
0.8
0.9
1.0
1.0
Results:
at least once sometimes
often
almost always always
some
trp0,1,0.5
trp0.6,1,1
trp0,0.4,0
all
Fig. 11.2. Image sequence and summarization results for various choices of the criterion “Q-times cloudy in the last days”. Regions that meet the criterion are depicted white
where Y1 , Y2 ∈ P(E), a, b, c, z ∈ I, a < b. The precise definition of the derived quantifiers in terms of the parameters a, b, c ∈ I as well as the corresponding summaries are shown in Fig. 11.2. In the generated summary images, one immediately recognizes those areas that were sunny all of the time (those depicted black in the leftmost summary), those areas that were overcast with clouds all of the time (those shown white in the rightmost image) and the exact graduation of the areas in between (center images). In the crisp case, it would be possible to display all of this information in one single image, which depicts frequency. However, this cannot be done in the fuzzy case, because the importances of the summarized images (specified by “in the last days”) and also the cloudiness grades are both a matter of tendency. This is why several images are needed to summarize the data. Finally let us consider the processing times measured in the second experiment. In the example shown in Fig. 11.2, a total number of 219063 quantification results involving a domain of 8 images had to be computed for each result image (i.e. one quantifying expression for each of the 411x533 pixels). We again used the integer-based implementation of proportional quantifiers in M with eight bits of accuracy. The summarisation process took a total of about 5 seconds per result image on the same computer platform as above. These numbers suggest that the proposed algorithms are indeed very efficient, and not likely to create a performance bottleneck in applications.
12 Multiple Variable Binding and Branching Quantification
12.1 Motivation and Chapter Overview Observing that Zadeh’s traditional framework is too limited in scope, the framework for fuzzy quantification proposed in Chap. 2 was explicitly designed to capture a substantial class of linguistic quantifiers. The main part of the book was then concerned with the development of a complete and practical theory of fuzzy quantification within the proposed framework by defining the intended models, by devising constructions for such models, and finally by presenting algorithms for implementing quantifiers in these models which make them useful for applications. All of these efforts were directed at the modelling of individual quantifiers. However, it is not only the diversity of possible quantifiers in NL which poses difficulties to a systematic and comprehensive modelling of linguistic quantification. Further difficulties arise when the quantifiers are no longer treated in isolation. Even in simple cases like “most”, the ways in which these quantifiers interact when combined in meaningful propositions can be complex and sometimes even puzzling. Consider the quantifying proposition “Most men and most women admire each other”, for example, in which we find a reciprocal predicate, “admire each other”. Barwise [6] argues that so-called branching quantification is needed to capture the meaning of propositions involving reciprocal predicates. Without branching quantifiers, the above example must be linearly phrased as either a. or b., a. [most x : men(x)][most y : women(y)] adm(x, y) b. [most y : women(y)][most x : men(x)] adm(x, y) . (Obviously, we need a logic which supports generalized quantifiers in order for these expressions to make sense. For a suitable system of first-order logic with generalized quantifiers see Barwise and Feferman [8, Ch. 2]). Neither interpretation captures the expected symmetry with respect to the men and
I. Gl¨ ockner: Fuzzy Quantifiers, StudFuzz 193, 355–373 (2006) c Springer-Verlag Berlin Heidelberg 2006 www.springerlink.com
356
12 Multiple Variable Binding and Branching Quantification
women involved. Surely the Boolean conjunction of (a) and (b) will be symmetric, but it does not capture the expected meaning either. In fact, we need a construction like [Q1 x : men(x)] adm(x, y) [Q2 y : women(y)] where the quantifiers Q1 = Q2 = most operate in parallel and independently of each other. This branching use of quantifiers can be analysed in terms of Lindstr¨ om quantifiers [106], i.e. multi-place quantifiers capable of binding several variables. In this case, we have three arguments, and the quantifier should bind x in men(x), y in women(y) and both variables in adm(x, y). Thus anticipating the use of Lindstr¨ om quantifiers which will be formally defined in the next section, the above branching expression can be modelled by a Lindstr¨ om quantifier Q of type 1, 1, 2: Qx,y,xy (men(x), women(y), adm(x, y)) (For details on the logical language and the definition of its semantics, see Lindstr¨ om’s original presentation [106]). Let us now consider the problem of assigning an interpretation to such quantifiers. Obviously, Q depends on the meaning of “most”. Given a fixed choice of base set or ‘universe’ E = ∅ over which the quantification ranges, the quantifier “most” in its precise sense (majority of) can be expressed as 1 : |Y1 ∩ Y2 | > 12 |Y1 | most(Y1 , Y2 ) = 0 : else for all crisp subsets Y1 , Y2 of E (provided that E be finite), see (2.2). This lets us evaluate “most men are married” by calculating most(men, married), where men, married are the sets of men and of married people, respectively. The definition can be extended to base sets of infinite cardinality if so desired. A possible interpretation of “most” in this case is 1 ∩ Y2 ) = ∅ for all bijections β : Y1 −→ Y1 1 : Y1 ∩ Y2 ∩ β(Y most(Y1 , Y2 ) = 0 : else where Y1 , Y2 ∈ P(E). Let us now attempt a similar analysis of the above quantifier Q. We first notice that the quantifier must accept three arguments, i.e. the sets A = men, B = women ∈ P(E) of men and women, and the binary relation R = adm ∈ P(E 2 ) of people admiring each other, assumed to be crisp for simplicity. The quantifier then determines a two-valued quantification result Q(A, B, R) ∈ {0, 1} from these data. Barwise [6, p. 63] showed how to define Q in a special case; here we adopt Westerst˚ ahl’s reformulation for binary quanti2 fiers [167, p. 274, (D1)]. Hence consider a choice of Q1 , Q2 : P(E) −→ {0, 1}. Let us further assume that the Qi ’s, like “most”, are nondecreasing in their
12.2 Lindstr¨ om Quantifiers
357
second argument, i.e. Qi (Y1 , Y2 ) ≤ Qi (Y1 , Y2 ) whenever Y2 ⊆ Y2 . In this case, the complex quantifier Q becomes: 1 : ∃U × V ⊆ R : Q1 (A, U ) = 1 ∧ Q2 (B, V ) = 1 Q(A, B, R) = (12.1) 0 : else Thus “Most men and most women admire each other” means that there exists a mutual admiration group U × V ⊆ adm such that most men and most women belong to that group. In my view, this analysis is correct and expresses the intended meaning of the example. It should be remarked at this point that Westerst˚ ahl, unlike Barwise, is only concerned with conservative quantifiers, thus assuming that Qi (Y1 , Y2 ) = Qi (Y1 , Y1 ∩ Y2 ). This permits him to restrict attention to U ×V ⊆ R∩(A×B) rather than U ×V ⊆ R without changing the interpretation. Westerst˚ ahl also extends this analysis to a generic construction which admits non-monotonic quantifiers (p. 281, Def. 3.1). But, how can we incorporate approximate quantifiers and fuzzy arguments, as in “Many young and most old people respect each other”? The goal of the present chapter is devising a method which assigns meaningful interpretations to such cases. To this end, we will first incorporate Lindstr¨ om-like quantifiers capable of binding several variables into the QFM framework. Following this, the axiom system for ordinary models of fuzzy quantification will be adapted to the new cases. This extension is modelled closely after the DFS solution and thus permits to re-use the models presented so far. We will also consider more specialized conditions which might further constrain the plausible models of fuzzy quantification with Lindstr¨ om quantifiers. Specifically, we will connect Lindstr¨ om-like quantifications and the analysis of branching constructions to the formation of quantifier nestings. Finally we will discuss how the above analysis of reciprocal constructions in terms of branching quantifiers and its generalization due to Westerst˚ ahl can be extended towards fuzzy branching quantification. The proposed analysis is of special importance to linguistic data summarization because the full meaning of reciprocal summarizers which describe factors being “correlated” or “associated” with each other can only be captured by branching quantification.
12.2 Lindstr¨ om Quantifiers Let us start from the notion of crisp quantifiers which incorporate multiplevariable binding. A suitable semantical concept which achieves this is defined as follows. Definition 12.1. A Lindstr¨ om quantifier is a class Q of (relational) structures of some common type t = t1 , . . . , tn , such that Q is closed under isomorphism [106, p. 186]. The cardinal n ∈ N specifies the number of arguments; the individual components ti ∈ N specify the number of variables that the quantifier binds in its
358
12 Multiple Variable Binding and Branching Quantification
i-th argument position. For example, the existential quantifier, which accepts one argument and binds one variable, has type t = 1. The corresponding class E comprises all structures E, A where E = ∅ is a base set and A is a nonempty subset of E. In the introduction to this chapter, we have already met with a more complex quantifier Q of type 1, 1, 2 given by (12.1). In this case, Q is the class of all structures E, A, B, R with Q(A, B, R) = 1, where E = ∅ is om, such the base set and A, B ∈ P(E), R ∈ P(E 2 ). As shown by Lindstr¨ quantifiers can be used to assign an interpretation to quantifying expressions like Qx,y,xy (ϕ(x), ψ(y), χ(x, y)) where Q binds x in ϕ(x), y in ψ(y), and both x, y in χ(x, y). Thus, Q is capable of binding multiple variables in its third argument position. Informally, the semantical interpretation of ϕ(x) results in the set A = {x : ϕ(x)} of all things which satisfy ϕ(x); ψ will result in the set B of all things which satisfy ψ(y); and χ(x, y) will result in the relation R = {(x, y) : χ(x, y)} ∈ P(E 2 ). The resulting sets (or more precisely, ti -ary relations) can then be used to determine the truth status of the quantifying sentence, by checking whether (A, B, R) ∈ Q or not. The formal definition of the logical system and its semantics are described in [106]. To sum up, Lindstr¨ om quantifiers introduce a very powerful notion of quantifiers. However, as already pointed out in the introduction, the condition that Q be closed under isomorphism, which is useful for describing logical or mathematical quantifiers, is too restrictive when we turn to the modelling of natural language. In order to express non-quantitative examples of quantifiers, e.g. proper names “Ronald” or constructions involving proper names like “all except Lotfi” which depend on specific individuals, it is therefore necessary to drop invariance under isomorphisms. The following definition of a generalized Lindstr¨ om quantifier accounts for these considerations. Definition 12.2. A generalized Lindstr¨ om quantifier is a class Q of relational structures of a common type t = t1 , . . . , tn . Again, the arity n ∈ N specifies the number of arguments, and the components ti ∈ N specify the number of variables that the quantifier binds in its i-th argument position. In principle, we have now arrived at the two-valued concept which will be generalized to the fuzzy case by incorporating approximate quantification and fuzzy arguments. In order to avoid the introduction of a ‘fuzzy class of relational structures’, it is convenient to organize the information gathered in the total class Q into more localized representations. In order to accomplish this, we will follow the same basic strategy that was pursued in the case of ordinary two-valued, semifuzzy and fuzzy quantifiers, which were all defined relative to a given domain, E. Hence let us introduce a notion of two-valued L-quantifiers (where ‘L’ signifies Lindstr¨ om) which are no longer concerned with all base sets at the same time, but rather relativized to a single choice of E.
12.3 (Semi-)fuzzy L-Quantifiers and a Suitable Notion of QFMs
359
Definition 12.3. A two-valued L-quantifier of type t = t1 , . . . , tn on a base set E = ∅ is a n
mapping Q : × P(E ti ) −→ 2. Hence Q assigns a crisp quantification result i=1
Q(Y1 , . . . , Yn ) ∈ {0, 1} to each choice of crisp arguments Yi ∈ P(E ti ), i ∈ {1, . . . , n}. A full two-valued L-quantifier Q of type t assigns a two-valued L-quantifier QE of type t on E to each base set E = ∅. Let us briefly discuss how the full two-valued L-quantifiers Q so defined are related to generalized Lindstr¨ om quantifiers Q. Starting from a Lindstr¨ om quantifier Q, we can define a full two-valued L-quantifier by QE (Y1 , . . . , Yn ) = 1 ⇔ (E, Y1 , . . . , Yn ) ∈ Q for all base sets E = ∅ and all crisp arguments Yi ∈ P(E ti ), i = 1, . . . , n. Starting from a full two-valued L-quantifier Q, on the other hand, we can define a corresponding generalized Lindstr¨ om quantifier as the class Q of all relational structures (E, Y1 , . . . , Yn ) of type t which satisfy the condition QE (Y1 , . . . , Yn ) = 1. These constructions are obvious inverses of each other, i.e. we can always switch from a generalized Lindstr¨om quantifier to the corresponding full L-quantifier and vice versa. Consequently, both concepts express basically the same thing, and are thus interchangeable. Here we will opt for the latter concept of two-valued L-quantifiers, which is better suited for our purposes. In addition, it is convenient to keep the base set E fixed. We will therefore focus on the relativized notion of two-valued L-quantifiers of type t on E (rather than ‘full’ ones). These provide the point of departure for my generalizations to be presented in the next section. It should be obvious how to develop unrelativized notions like the above ‘full two-valued L-quantifiers’ from these concepts, but such extensions are irrelevant for my current purposes.
12.3 (Semi-)fuzzy L-Quantifiers and a Suitable Notion of QFMs Now that the basic issues concerning two-valued Lindstr¨ om quantifiers have been clarified, we can turn to the problem of introducing the intended support for fuzziness. The basic strategy that proved itself useful for developing the ordinary framework then guides us to introducing suitable definitions of semifuzzy L-quantifiers, fuzzy L-quantifiers and L-QFMs. Definition 12.4. A semi-fuzzy L-quantifier of type t = t1 , . . . , tn on a base set E = ∅ is a n
mapping Q : × P(E ti ) −→ I which assigns a gradual quantification result i=1
Q(Y1 , . . . , Yn ) ∈ [0, 1] to each choice of crisp arguments Yi ∈ P(E ti ), i ∈ {1, . . . , n}.
360
12 Multiple Variable Binding and Branching Quantification
Thus, Q accepts crisp arguments of the indicated types, but it can express n approximate quantifications. ‘Ordinary’ semi-fuzzy quantifiers Q : P(E) −→ I can be viewed as a special case of semi-fuzzy L-quantifiers; they roughly correspond to n-ary semi-fuzzy L-quantifiers of type t = 1, . . . , 1 on the given base set, E. However, it is important to notice that semi-fuzzy L-quantifiers are not literally identical to this special case of semi-fuzzy L-quantifiers; we only have a one-to-one correspondence (see below). Semi-fuzzy L-quantifiers are proposed as a uniform specification medium for arbitrary quantifiers suitable to express multiple variable binding. Again, we also need operational quantifiers, which are not restricted to crisp inputs: Definition 12.5. : A fuzzy L-quantifier of type t on a base-set E = ∅ is a mapping Q n ti ) −→ I which assigns a gradual quantification result Q(X 1 , . . . , Xn ) ∈ × P(E
i=1
ti ), i ∈ {1, . . . , n}. [0, 1] to each choice of fuzzy arguments Xi ∈ P(E
: P(E) n −→ I again correspond to the special case of Fuzzy quantifiers Q an n-ary fuzzy L-quantifier of type 1, . . . , 1 on E, but they are not literally identical to these quantifiers, and only related by a one-to-one correspondence. A suitable fuzzification mechanism must be used for associating specifications to target quantifiers. The notion of an L-QFM can be developed in total analogy to the earlier proposal of QFMs for ordinary quantifiers. Definition 12.6. An L-quantifier fuzzification mechanism (L-QFM) F assigns to each semifuzzy L-quantifier Q a corresponding fuzzy L-quantifier F(Q) of the same type t = t1 , . . . , tn and on the same base set E = ∅. These definitions introduce a new framework for fuzzy quantification which encompasses Lindstr¨om quantifiers and thus, multiple variable binding (provided a suitable definition of a logic on top of such quantifiers. Such a system of logic might be modelled closely after Lindstr¨ om’s original stipulations in [106]). The models for ‘L-quantification’, i.e. L-QFMs, are totally unconstrained, though, and again we need to make provisions in order to guarantee meaningful interpretations and shrink down the totality of possible L-QFMs to the intended choices.
12.4 Constructions on L-Quantifiers It is possible to develop semantical criteria for L-QFMs in total analogy to those for ordinary QFMs. However, some preparations are necessary. The notion of an underlying semi-fuzzy L-quantifier of a given fuzzy Lquantifier is defined pretty much the same way as in the ordinary case:
12.4 Constructions on L-Quantifiers
361
Definition 12.7. be a fuzzy L-quantifier of type t = t1 , . . . , tn on a base set E. The Let Q n : × P(E ti ) −→ I is the semi-fuzzy underlying semi-fuzzy L-quantifier U(Q) i=1
quantifier of type t on E defined by 1 , . . . , Yn ) = Q(Y 1 , . . . , Yn ) , U(Q)(Y for all Y1 ∈ P(E t1 ), . . . , Yn ∈ P(E tn ). ‘forgets’ that Q can be applied to fuzzy arguments, and Hence again, U(Q) only considers its behaviour on crisp arguments. This notion will prove useful to express that an L-QFM properly generalizes the semi-fuzzy quantifiers it is applied to, in analogy to (Z-1) Recalling the constructions defined for ordinary semi-fuzzy quantifiers in Chap. 3, we further needed projection quantifiers πe : P(E) −→ 2 and fuzzy −→ 2 in order to describe membership assessprojection quantifiers π e : P(E) ments as a special kind of quantification; the corresponding condition imposed on ordinary models was (Z-2) However, we will not introduce a special notation for two-valued projection L-quantifiers and fuzzy projection L-quantifiers. To see why, consider a two-valued projection quantifier Qe of type 1 on some base set E, e ∈ E. Generalizing the definition of ordinary projection quantifiers, Qe is the mapping Qe : P(E 1 ) −→ 2 defined by Qe (Y ) = χY ((e)) for all Y ∈ P(E 1 ). It is then apparent that Qe can be expressed as Qe = π(e) , where π(e) : P(E 1 ) −→ I is an ordinary projection quantifier defined by Def. 3.2. Hence two-valued projection L-quantifiers are special cases of ordinary projection quantifiers, and we need not introduce a new symbol to signify them. e of type 1 on a base By similar reasoning, a fuzzy projection quantifier Q 1 e (X) = µX ((e)) for set E = ∅ is the mapping Qe : P(E ) −→ I defined by Q 1 ). Hence Q e can be viewed as a special case of ‘ordinary’ fuzzy all X ∈ P(E e = π (e) , where π (e) : P(E) −→ I is defined by Def. projection quantifier Q 3.4 Again, there is no need to introduce a new notation because every fuzzy projection L-quantifier can be expressed as π (e) for some e ∈ E. As noted above, semi-fuzzy quantifiers and fuzzy quantifiers can be viewed as special cases of semi-fuzzy L-quantifiers and fuzzy L-quantifiers by means of a suitable embedding. This correspondence is important because the formal machinery introduced so far, in particular the construction of induced truth functions and the definition of the induced extension principle, is developed in terms of ordinary quantifiers. This suggests that we might utilize this correspondence in order to define the induced truth functions and induced extension principle of an L-QFM in terms of the definitions which are already available for ordinary QFMs. Hence let us make this correspondence explicit. We first consider a semin fuzzy quantifier Q : P(E) −→ I. As already remarked above, a matching semi-fuzzy L-quantifier Q has arity n, type t = 1, . . . , 1, and is also defined n on E. Hence Q is a mapping Q : P(E 1 ) −→ I. This demonstrates why Q
362
12 Multiple Variable Binding and Branching Quantification
and Q are in general not identical: it is true that the semi-fuzzy L-quantifier Q is a also an ordinary n-ary semi-fuzzy quantifier, but it is defined on a different base set, i.e. E 1 rather than E. Here, E 1 is the set of all one-tuples E 1 = {(e) : e ∈ E}, i.e. the set of all mappings f : {1} −→ E, which is usually different from the set E = {e : e ∈ E}. A similar situation occurs when we turn attention to fuzzy quantifiers vs. fuzzy L-quantifiers. In this : P(E) n −→ I is also different from its possible case, a fuzzy quantifier Q n 1 ) −→ I, essentially for the same reason. We shall now : P(E counterparts Q consider the obvious one-to-one correspondence between the elements of these sets. Let β : E −→ E 1 denote the mapping which maps the elements e ∈ E to one-tuples β(e) = (e)
(12.2)
The mapping which maps the one-tuples (e) ∈ E to their first component (e)1 = e will be denoted ϑ : E 1 −→ E, i.e. ϑ((e)) = e
(12.3)
for all (e) ∈ E 1 . Obviously, β and ϑ are isomorphisms, and ϑ is the inverse mapping of β. Based on β and ϑ, we are now able to express the correspondence between ordinary (semi-)fuzzy quantifiers and their generalized counn terparts in formal terms. Hence let Q : P(E) −→ I be a semi-fuzzy quantifier. The semi-fuzzy L-quantifier Q of type 1, . . . , 1 on E which corresponds to n times
Q can be expressed as n
Q = Q ◦ × ϑ,
(12.4)
1 ), . . . , ϑ(Y n )) Q (Y1 , . . . , Yn ) = Q(ϑ(Y
(12.5)
i=1
i.e.
for all Y1 , . . . , Yn ∈ P(E). The inverse of ϑ, i.e. β, can be used in a similar way to translate back from semi-fuzzy L-quantifiers of arity n and type 1, . . . , 1 to corresponding ordinary semi-fuzzy quantifiers, i.e. assuming crisp arguments. In the fuzzy case, we can also proceed analogously. In order to translate a : P(E) n −→ I into an n-ary fuzzy L-quantifier of type fuzzy quantifier Q
1, . . . , 1 on E, we proceed as in (12.4); but now, the arguments range over fuzzy sets, so we will replace the crisp powerset mapping ϑ with the fuzzy ˆ image mapping ϑˆ obtained from the standard extension principle. Again, we of arity n and type 1, . . . , 1 on E to a can also relate a fuzzy L-quantifier Q n −→ I, which then becomes : P(E) corresponding ordinary fuzzy quantifier Q n = Q ◦ × βˆˆ Q i=1
i.e.

    Q̃(X1, . . . , Xn) = Q̃'(β̂̂(X1), . . . , β̂̂(Xn))

for all X1, . . . , Xn ∈ P̃(E). In the following definition, we combine these constructions in order to accomplish an interpretation of ordinary quantifiers in L-QFMs.

Definition 12.8. Let F be an L-QFM. The corresponding ordinary QFM F_R is defined by

    F_R(Q) = F(Q ∘ ×_{i=1}^n ϑ̂) ∘ ×_{i=1}^n β̂̂ ,

for all semi-fuzzy quantifiers Q : P(E)^n −→ I.
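To make the crisp half of this correspondence concrete, the following minimal sketch (illustrative only; the helper names are not taken from the book, and the fuzzification step performed by F as well as the fuzzy image mapping β̂̂ are not modelled) shows how a semi-fuzzy quantifier on E is translated into its counterpart on E^1 along (12.4)/(12.5).

```python
# Illustrative sketch of (12.4)/(12.5): translating a semi-fuzzy quantifier on E
# into a semi-fuzzy L-quantifier on E^1 via the bijections beta and theta.
# All helper names are hypothetical; crisp sets are modelled as frozensets.

def beta(e):                 # beta : E -> E^1, e |-> (e)
    return (e,)

def theta(t):                # theta : E^1 -> E, (e) |-> e  (inverse of beta)
    return t[0]

def powerset_map(f):         # lift a point mapping to crisp sets (the 'hat' construction)
    return lambda Y: frozenset(f(y) for y in Y)

def to_L_quantifier(Q):
    """Q : P(E)^n -> I becomes Q' : P(E^1)^n -> I, i.e. Q' = Q o (theta-hat x ... x theta-hat)."""
    th = powerset_map(theta)
    return lambda *Ys: Q(*(th(Y) for Y in Ys))

# Example with an illustrative two-place 'all' quantifier on E = {a, b, c}:
all_q = lambda Y1, Y2: 1.0 if Y1 <= Y2 else 0.0
all_L = to_L_quantifier(all_q)
Y1 = frozenset({("a",), ("b",)})                 # subsets of E^1, i.e. sets of one-tuples
Y2 = frozenset({("a",), ("b",), ("c",)})
print(all_L(Y1, Y2))                             # 1.0, since {a, b} is a subset of {a, b, c}
```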
The construction of a corresponding 'ordinary' QFM F_R for a given L-QFM F permits us to define the induced truth functions and the induced extension principle of an L-DFS F as the fuzzy truth functions and the extension principle, respectively, which are induced by the ordinary DFS F_R:

Definition 12.9. Let F be an L-QFM and f : 2^n −→ I a given mapping (i.e. a 'semi-fuzzy truth function') of arity n ∈ N. The induced fuzzy truth function F̃(f) : I^n −→ I is defined by

    F̃(f) = F̃_R(f) ,

where F_R is the ordinary QFM defined by Def. 12.8.

Notes 12.10.
• Whenever F is clear from the context, we will again use the abbreviations ¬̃ = F̃(¬), ∨̃ = F̃(∨) etc.
• The definition of induced truth functions is extended to the induced fuzzy set operations of complementation ¬̃, intersection ∩̃ and union ∪̃ in the obvious ways, i.e. again by elementwise application of the induced negation, conjunction and disjunction on membership grades.

Based on the induced fuzzy truth functions and fuzzy set operations of an L-QFM, we can now develop constructions on semi-fuzzy and fuzzy L-quantifiers which parallel my definitions of antonyms, negation and duals for ordinary quantifiers. Here, we will confine ourselves to exemplifying the definition of dualization, which will be needed for generalizing (Z-3) to L-QFMs; the other concepts can be developed in a similar way but will not be significant in the following.
Definition 12.11.
Given a semi-fuzzy L-quantifier Q of type t = ⟨t1, . . . , tn⟩, n > 0 on some base set E ≠ ∅, the semi-fuzzy L-quantifier Q□̃ of type t on E is defined by

    Q□̃(Y1, . . . , Yn) = ¬̃ Q(Y1, . . . , Y_{n−1}, ¬Yn)

for all Y1 ∈ P(E^t1), . . . , Yn ∈ P(E^tn). Q□̃ is called the dual of Q. The dual of a fuzzy L-quantifier Q̃ of type t on E is defined analogously.

In order to express a condition constraining the behaviour of F for unions of argument sets which parallels (Z-4), we must also generalize the construction of quantifiers from such unions. The definition of semi-fuzzy and fuzzy L-quantifiers resulting from unions should be apparent:

Definition 12.12.
Given a semi-fuzzy L-quantifier Q of type t = ⟨t1, . . . , tn⟩, n > 0 on some base set E ≠ ∅, the semi-fuzzy L-quantifier Q∪ of arity n + 1 and type ⟨t1, . . . , tn, tn⟩ on E is defined by

    Q∪(Y1, . . . , Y_{n+1}) = Q(Y1, . . . , Y_{n−1}, Yn ∪ Y_{n+1})

for all Y1 ∈ P(E^t1), . . . , Yn ∈ P(E^tn) and Y_{n+1} ∈ P(E^tn). For fuzzy L-quantifiers Q̃ of type t on E, the (n + 1)-ary fuzzy L-quantifier Q̃∪̃ of type ⟨t1, . . . , tn, tn⟩ on E is defined analogously.

Monotonicity of semi-fuzzy and fuzzy L-quantifiers is defined as in the ordinary case. We only need to make sure that all arguments involved are chosen from their appropriate ranges, Yi ∈ P(E^ti) for semi-fuzzy L-quantifiers, and Xi ∈ P̃(E^ti) in the fuzzy case:

Definition 12.13.
A semi-fuzzy L-quantifier Q of type t = ⟨t1, . . . , tn⟩, n > 0 on some base set E is nondecreasing in its i-th argument, i ∈ {1, . . . , n}, if

    Q(Y1, . . . , Yn) ≤ Q(Y1, . . . , Y_{i−1}, Yi', Y_{i+1}, . . . , Yn)

for all Y1 ∈ P(E^t1), . . . , Yn ∈ P(E^tn) and Yi' ∈ P(E^ti) with Yi ⊆ Yi'. Q is said to be nonincreasing in its i-th argument if under the same conditions, it always holds that

    Q(Y1, . . . , Yn) ≥ Q(Y1, . . . , Y_{i−1}, Yi', Y_{i+1}, . . . , Yn) .

The corresponding definitions for fuzzy quantifiers Q̃ of type t on E are analogous. In this case, the arguments range over P̃(E^ti), and '⊆' is the fuzzy inclusion relation.
Based on semi-fuzzy L-quantifiers which are nonincreasing in their last argument, we will be able to express a condition on L-QFMs which parallels (Z-5). Finally, let us extend the condition that F be compatible with functional composition (Z-6) towards the new type of fuzzification mechanisms. We will resort to the ordinary QFM F_R associated with F in order to define the induced extension principle of an L-QFM.

Definition 12.14.
The induced extension principle F̂ of an L-QFM F is defined by F̂ = F̂_R, i.e. F̂ is the induced extension principle of the QFM F_R defined by Def. 12.8.
12.5 The Class of Plausible L-Models

Based on the above constructions, we can now define my axiom system for a class of plausible L-QFMs, called 'L-DFSes' or 'L-models' of fuzzy quantification. It should be obvious from the earlier characterization of DFSes and the generalization of the relevant constructions how the criteria for L-DFSes which support Lindström-like quantifiers can be expressed:

Definition 12.15.
An L-QFM F is called an L-DFS if the following conditions are satisfied for all semi-fuzzy L-quantifiers Q : ×_{i=1}^n P(E^ti) −→ I of arbitrary types t = ⟨t1, . . . , tn⟩ and on arbitrary base sets E ≠ ∅:

Correct generalisation         U(F(Q)) = Q  if t ∈ {⟨⟩, ⟨1⟩}                              (L-1)
Projection quantifiers         F(Q) = π̃(e)  if Q = π(e) for some e ∈ E                   (L-2)
Dualisation                    F(Q□̃) = F(Q)□̃,  n > 0                                     (L-3)
Internal joins                 F(Q∪) = F(Q)∪̃,  n > 0                                     (L-4)
Preservation of monotonicity   If Q is nonincreasing in the n-th arg, then                (L-5)
                               F(Q) is nonincreasing in the n-th arg,  n > 0
Functional application         F(Q ∘ ×_{i=1}^n f̂_i) = F(Q) ∘ ×_{i=1}^n F̂(f_i)            (L-6)

where f_i : E'^{t'_i} −→ E^{t_i}, t' ∈ N^n (same n), E' ≠ ∅.

The axiom system is constructed in total analogy to my basic system for ordinary QFMs. Thus it is not surprising that the generalized L-models are also suitable for carrying out 'ordinary' quantification, i.e. the model F_R associated with F is indeed a DFS.
Theorem 12.16.
Let F be an L-QFM and F_R the corresponding QFM defined by Def. 12.8. If F is an L-DFS, then F_R is a DFS.

There will be no further comments on the axioms here because they are so closely modelled after the conditions (Z-1)–(Z-6) imposed on plausible choices of QFMs. Ample motivation for these was given in Chap. 3, and there is no point repeating these arguments here. However, another issue needs clarification. As witnessed by the previous chapters, the comprehensive discussion of ordinary DFSes, the development of constructive principles for useful models, and finally the implementation of such models, required considerable effort. Obviously, there would be a lot to gain if we could reuse these results on ordinary models. Ideally, it would be possible to express all L-DFSes in terms of suitable ordinary DFSes. In this case, my results on properties of the models, constructive classes of plausible models and finally the techniques developed for implementation would directly translate into corresponding results on the new L-models. In order to effect the desired reduction, we need a generic method which lets us construct a 'suitable' L-QFM F_L from a given ordinary QFM F. Here 'suitable' means that: (a) F_L properly extends F, i.e. F = (F_L)_R; and (b) F_L is an L-DFS whenever F is a DFS, i.e. the construction 'works'. In addition, the new construction should be compatible with the earlier construction of F_R from a given L-QFM. Hence we should also have (c) F = (F_R)_L for all L-DFSes F, i.e. F can be recovered from F_R. A generic construction of L-models which answers these requirements will eliminate the problem of establishing classes of L-DFSes altogether, because every L-DFS F can then be expressed as F = F_L for an ordinary DFS F. We propose the following construction to accomplish this. Let F be an ordinary QFM and let Q be a semi-fuzzy L-quantifier of type t = ⟨t1, . . . , tn⟩ on some base set E. We will abbreviate

    m = max{t1, . . . , tn} ,    (12.6)

in particular ti ≤ m for all i ∈ {1, . . . , n}. We can therefore introduce injections ζi : E^ti −→ E^m which embed E^ti into a common extended domain E^m. These are defined by

    ζi(e1, . . . , e_{ti}) = (e1, . . . , e_{ti}, e_{ti}, . . . , e_{ti})    (with e_{ti} repeated (m − ti) times)    (12.7)

for all (e1, . . . , e_{ti}) ∈ E^ti, i ∈ {1, . . . , n}. Hence ζi fills in the (m − ti) missing components by repeating e_{ti}. It should be apparent that ζi is indeed an injection. We also need mappings in the reverse direction. These will be written κi : E^m −→ E^ti, i ∈ {1, . . . , n}. The definition is as follows,

    κi(e1, . . . , em) = (e1, . . . , e_{ti})    (12.8)
for all (e1, . . . , em) ∈ E^m, i ∈ {1, . . . , n}. Hence κi is the projection on the first ti components of (e1, . . . , em), i.e. it simply drops the final (m − ti) components. In particular, the κi's are onto (surjective). It should be apparent from (12.7) and (12.8) that subsequent application of ζi (which adds m − ti components) and κi (which drops these components) will take us back to the original ti-tuple. Hence for all i ∈ {1, . . . , n},

    κi ∘ ζi = id_{E^ti} .    (12.9)
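The embeddings and projections are easily pictured on plain tuples; the following minimal sketch (illustrative helper names only, not from the book) shows ζi and κi together with a check of (12.9).

```python
# Minimal illustration of the injections zeta_i and projections kappa_i of
# (12.7)/(12.8) on tuples, together with a check of (12.9).

def zeta(t, m):
    """Pad a t_i-tuple to length m by repeating its last component."""
    return t + (t[-1],) * (m - len(t))

def kappa(t, ti):
    """Project an m-tuple onto its first t_i components."""
    return t[:ti]

e = ("a", "b")                       # a tuple from E^2
m = 4
padded = zeta(e, m)                  # ("a", "b", "b", "b"), an element of E^4
assert kappa(padded, len(e)) == e    # kappa_i o zeta_i = id on E^{t_i}, cf. (12.9)
print(padded)
```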
In dependence on the given semi-fuzzy L-quantifier Q, we now define a mapping Q' : P(E^m)^n −→ I by

    Q'(Y1, . . . , Yn) = Q(κ̂1(Y1 ∩ Im ζ1), . . . , κ̂n(Yn ∩ Im ζn)) ,    (12.10)

for all Y1, . . . , Yn ∈ P(E^m). It is instructive to observe that Q' is, at the same time, an n-ary semi-fuzzy L-quantifier of type ⟨m, . . . , m⟩ on E, and an ordinary semi-fuzzy quantifier Q' : P(E^m)^n −→ I on a different base set, E^m. In fact, this is the basic idea underlying the following construction of L-QFMs from ordinary QFMs.

Definition 12.17.
The L-QFM F_L associated with a given QFM F is defined as follows. For all semi-fuzzy L-quantifiers Q of some type t = ⟨t1, . . . , tn⟩, on a base set E ≠ ∅,

    F_L(Q) = F(Q') ∘ ×_{i=1}^n ζ̂̂_i
where Q' is the semi-fuzzy quantifier defined by (12.10), and the ζi's given by (12.7) are extended to mappings ζ̂̂_i : P̃(E^ti) −→ P̃(E^m) by applying the standard extension principle.
Now let us consider the above criteria (a)–(c) for a successful reduction of L-DFSes to ordinary DFSes in turn.

Theorem 12.18.
If F is a DFS, then (F_L)_R = F. (F_L properly generalizes the original model F.)

Theorem 12.19.
If F is a DFS, then the corresponding L-QFM F_L is an L-DFS. (The proposed construction results in meaningful interpretations.)

Theorem 12.20.
If F is an L-DFS, then (F_R)_L = F.
The latter theorem is of particular relevance because it eliminates the problem of establishing classes of L-DFSes altogether. To sum up, my criteria (a)–(c) are all valid and by the above reasoning, we need not be concerned with developing classes of L-DFSes any longer. The ordinary models F that have already been studied in some depth are sufficient to represent every L-DFS F as F = FL , given a suitable choice of F. This is of invaluable help because the investigation of constructive principles for plausible models, like those presented in Chap. 3, turned out to be a rather complex matter. The canonical construction of FL , then, permits the reuse of the computational models M, MCX and Fowa to handle the new cases of fuzzy L-quantification. In the following, we will identify these models with their extensions for simplicity, thus writing MCX rather than (MCX )L etc.
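As a small illustration of the crisp core of this reduction, the following sketch (illustrative only, with hypothetical helper names; it covers just the construction of Q' in (12.10) for finite base sets, not the subsequent application of F or the fuzzy image mappings ζ̂̂_i) shows how arguments from P(E^m) are cut down to the ranges expected by Q.

```python
# Illustrative sketch of (12.10) for finite E: from a semi-fuzzy L-quantifier Q
# with argument types (t_1, ..., t_n), build Q' operating on subsets of E^m.

def make_Q_prime(Q, types):
    m = max(types)

    def in_image_of_zeta(t, ti):
        # t lies in Im(zeta_i) iff its last (m - ti) components repeat component ti.
        return t[ti:] == (t[ti - 1],) * (m - ti)

    def Q_prime(*Ys):
        args = []
        for Y, ti in zip(Ys, types):
            # kappa_i-hat applied to (Y ∩ Im zeta_i): keep only the first ti components.
            args.append(frozenset(t[:ti] for t in Y if in_image_of_zeta(t, ti)))
        return Q(*args)

    return Q_prime

# Example: an illustrative Q of type <1, 2> ("every element of Y1 is self-related in Y2").
Q = lambda Y1, Y2: 1.0 if all((e[0], e[0]) in Y2 for e in Y1) else 0.0
Qp = make_Q_prime(Q, (1, 2))
Y1 = frozenset({("a", "a"), ("b", "b")})          # subsets of E^2 (here m = 2)
Y2 = frozenset({("a", "a"), ("b", "c")})
print(Qp(Y1, Y2))                                  # 0.0: ("b", "b") is required but absent
```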
12.6 Nesting of Quantifiers

The axioms for L-DFSes are directly modelled after the corresponding axioms for ordinary DFSes; they do not refer to details of multiple variable binding (on the syntactic level), or to the handling of the ti-ary (fuzzy) relations which occur as arguments of L-quantifiers (i.e. on the level of modelling devices). It is an interesting question whether additional axioms specifically concerned with L-quantifiers might prove useful for identifying those L-DFSes best suited for modelling branching quantification.
A construction that comes to mind is quantifier nesting. Consider the formula Q'x Q''y φ(x, y), for example. It should be apparent how to develop a semantical interpretation of such formulas in the proposed framework. Based on semi-fuzzy L-quantifiers, we can either analyze this in terms of two quantifiers of type ⟨1⟩ applied in succession, or we can look at the whole block Q_xy = Q'x Q''y, which corresponds to a single semi-fuzzy L-quantifier of type t = ⟨2⟩. Now let R ∈ P̃(E^2) be the interpretation of φ(x, y) as a fuzzy relation. The analysis in terms of two separate quantifiers will result in an interpretation F(Q')(Z), where µ_Z(e) = F(Q'')(eR) and µ_{eR}(e') = µ_R(e, e') for all e, e' ∈ E. This analysis thus corresponds to a fuzzy L-quantifier F(Q') @ F(Q'') of type ⟨2⟩ defined by

    F(Q') @ F(Q'')(R) = F(Q')(Z)

for all fuzzy relations R ∈ P̃(E^2). The quantifier block Q_xy, on the other hand, corresponds to a semi-fuzzy L-quantifier Q' @̃ Q'' of type ⟨2⟩ defined by

    Q' @̃ Q''(S) = F(Q')(Z) ,    µ_Z(e) = Q''(eS) ,    eS = {e' : (e, e') ∈ S}

for all crisp relations S ∈ P(E^2), and it results in a second interpretation, F(Q' @̃ Q'')(R). (The 'tilde'-notation @̃ is used here to signify that Q' @̃ Q'' depends on the chosen L-QFM.)
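For an operational view of the block reading, consider the following minimal sketch. It is illustrative only (hypothetical helper names) and covers just the special case of two-valued quantifiers applied to a crisp relation S, where the set Z is crisp and the fuzzification step F(Q')(Z) reduces to Q'(Z); for genuinely fuzzy inner quantifiers, the chosen L-QFM would have to be applied instead.

```python
# Illustrative sketch of the block reading Q'xQ''y phi(x, y) for two-valued
# quantifiers and a crisp relation S, so that no fuzzification is needed.

def row(S, e):
    """eS = {e' : (e, e') in S}, the row of e in the crisp relation S."""
    return frozenset(e2 for (e1, e2) in S if e1 == e)

def block(Q_outer, Q_inner, E, S):
    """Interpret the block Q'xQ''y over S: first Q'' on every row, then Q'."""
    Z = frozenset(e for e in E if Q_inner(row(S, e)) == 1.0)
    return Q_outer(Z)

E = {1, 2, 3}
S = {(1, 1), (1, 2), (2, 3)}
exists = lambda Y: 1.0 if Y else 0.0
forall = lambda Y: 1.0 if Y == frozenset(E) else 0.0
print(block(forall, exists, E, S))   # 0.0: element 3 is not related to anything
```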
It is natural to require that the two interpretations coincide, i.e.

    F(Q' @̃ Q'') = F(Q') @ F(Q'') .

More generally, we can define a nesting operation for semi-fuzzy L-quantifiers and fuzzy L-quantifiers of arbitrary types (the quantifier Q' must offer an argument slot to nest in, though, and thus needs positive arity n > 0).

Definition 12.21.
Let E ≠ ∅ be a given base set and Q' a semi-fuzzy L-quantifier on E of type t = ⟨t1, . . . , tn⟩, n > 0. Further let Q'' be a semi-fuzzy L-quantifier on E of arbitrary type t' = ⟨t'_1, . . . , t'_{n'}⟩, n' ∈ N. The semi-fuzzy L-quantifier Q' @̃ Q'' on E of type t* = ⟨t1, . . . , t_{n−1}, tn + t'_1, . . . , tn + t'_{n'}⟩ which results from the nesting of Q'' into the last argument of Q' is defined by

    Q' @̃ Q''(Y1, . . . , Y_{n−1}, S1, . . . , S_{n'}) = F(Q')(Y1, . . . , Y_{n−1}, Z)

for all Yi ∈ P(E^ti), i ∈ {1, . . . , n − 1} and Sj ∈ P(E^{tn + t'_j}), j ∈ {1, . . . , n'}, where Z ∈ P̃(E^tn) is defined by

    µ_Z(e1, . . . , e_{tn}) = F(Q'')((e1, . . . , e_{tn})S1, . . . , (e1, . . . , e_{tn})S_{n'})    (12.11)

for all e1, . . . , e_{tn} ∈ E, and where the t'_j-ary relation (e1, . . . , e_{tn})Sj ∈ P(E^{t'_j}) is given by

    (e1, . . . , e_{tn})Sj = {(e'_1, . . . , e'_{t'_j}) : (e1, . . . , e_{tn}, e'_1, . . . , e'_{t'_j}) ∈ Sj}    (12.12)

for j ∈ {1, . . . , n'}.
A similar construction is possible for fuzzy L-quantifiers:

Definition 12.22.
Let E ≠ ∅ be a given base set and Q̃' a fuzzy L-quantifier on E of type t = ⟨t1, . . . , tn⟩, n > 0. Further let Q̃'' be a fuzzy L-quantifier on E of arbitrary type t' = ⟨t'_1, . . . , t'_{n'}⟩, n' ∈ N. The fuzzy L-quantifier Q̃' @ Q̃'' of type t* = ⟨t1, . . . , t_{n−1}, tn + t'_1, . . . , tn + t'_{n'}⟩ on E is defined by

    Q̃' @ Q̃''(X1, . . . , X_{n−1}, R1, . . . , R_{n'}) = Q̃'(X1, . . . , X_{n−1}, Z)

for all Xi ∈ P̃(E^ti), i ∈ {1, . . . , n − 1} and Rj ∈ P̃(E^{tn + t'_j}), j ∈ {1, . . . , n'}, where Z ∈ P̃(E^tn) is defined by

    µ_Z(e1, . . . , e_{tn}) = Q̃''((e1, . . . , e_{tn})R1, . . . , (e1, . . . , e_{tn})R_{n'})    (12.13)

for all e1, . . . , e_{tn} ∈ E, and where (e1, . . . , e_{tn})Rj ∈ P̃(E^{t'_j}), j ∈ {1, . . . , n'}, is the fuzzy relation defined by

    µ_{(e1,...,e_{tn})Rj}(e'_1, . . . , e'_{t'_j}) = µ_{Rj}(e1, . . . , e_{tn}, e'_1, . . . , e'_{t'_j})    (12.14)

for all e'_1, . . . , e'_{t'_j} ∈ E.
In this general case, too, we would expect that plausible choices of F comply with the nesting operation.

Definition 12.23.
An L-QFM F is said to be compatible with quantifier nesting (in the last argument) if the equality

    F(Q' @̃ Q'') = F(Q') @ F(Q'')    (QN)

is valid for all semi-fuzzy L-quantifiers Q' of type t = ⟨t1, . . . , tn⟩, n > 0 and Q'' of arbitrary type t' = ⟨t'_1, . . . , t'_{n'}⟩, n' ∈ N defined on the same base set E ≠ ∅.

Judging from some first tests, it appears that quantifier nesting expresses a very restrictive condition and excludes many useful models. In fact, we have the following result on general quantifier nestings:

Theorem 12.24.
Suppose that an L-DFS F is compatible with quantifier nesting. Then F_R is compatible with fuzzy argument insertion.

In other words, the only standard model which potentially admits the nesting of quantifiers is MCX, as shown by Th. 7.80 and Th. 7.81. It has not been established yet whether MCX validates the criterion, but I would rather guess that this is not the case. It is perfectly possible – as in the case of conservativity (Def. 6.20) or convexity (Def. 6.8) – that the unrestricted or 'strong' requirement of compatibility with quantifier nestings conflicts with other reasonable criteria and must thus be weakened to a compatible 'core' requirement. Obviously, further research should be directed at these issues in order to determine the status of MCX and to develop suitable weakenings of the nesting criterion if necessary.
Let us briefly discuss the limitation of (QN) to nestings in the last argument of the quantifier Q'. Nestings in another argument position k can be reduced to this case by flipping argument positions k and n, performing the nesting in the last argument, and then reordering the arguments again into their intended target positions. Every DFS is known to be compatible with such permutations of argument positions, a property which apparently generalizes to the L-models. Thus the compatibility of a DFS with nestings in the last argument position is sufficient to ensure that it also complies with nestings in other argument positions. By repeating this process, (QN) ensures the compatibility of conforming L-DFSes with multiple nestings of arbitrary depths. Thus, the condition is sufficient to cover general nestings involving quantifiers of arbitrary types with an arbitrary number of embedded quantifiers.
It would be useful, however, to relate the proposed general nestings in the last argument to the simple nesting of unary quantifiers considered earlier, or to other simplified nesting criteria. This might make it easier to test that a model of interest (not necessarily a standard model like MCX) complies with
the nesting condition, and also eliminate some redundancy from the axiom system. The general nestings described by (QN) can probably be reduced to a simpler criterion, but further research is necessary to clarify this matter. Should a substantial weakening of the original requirement be necessary, the investigation of simplified criteria might also guide us to a useful weakening which still covers the nestings of linguistic relevance.
12.7 Application to the Modelling of Branching Quantification

Let us now elucidate how the motivating example "Many young and most old people respect each other" can be interpreted in the proposed framework. In this case, we have semi-fuzzy quantifiers Q1 = many, defined by many(Y1, Y2) = |Y1 ∩ Y2| / |Y1|, say, and Q2 = most, defined as above. Both quantifiers are nondecreasing in their second argument, i.e. we can adopt the analysis proposed by Barwise. The modification of equation (12.1) towards gradual truth values will be accomplished in the usual way, i.e. by replacing existential quantifiers with sup and conjunctions with min (non-standard connectives are not possible here). The semi-fuzzy L-quantifier Q of type ⟨1, 1, 2⟩ constructed from Q1, Q2 then becomes

    Q(A, B, R) = sup{min(Q1(A, U), Q2(B, V)) : U × V ⊆ R}

for all A, B ∈ P(E) and R ∈ P(E^2). Hence "Many men and many women are relatives of each other", for example, which rests on crisp arguments, is now taken to denote the maximum degree to which many men and many women belong to a group U × V ⊆ R of mutual relatives. This analysis appears to be conclusive – but we still need to extend it to fuzzy arguments. To this end, it is sufficient to apply the chosen L-DFS F. We then obtain the fuzzy L-quantifier F(Q) of type ⟨1, 1, 2⟩ suited to handle this case. Returning to the original example involving young and old people, we have fuzzy subsets young, old ∈ P̃(E) of young and old persons, respectively, and a fuzzy relation rsp ∈ P̃(E^2) of persons who respect each other. Thus, a meaningful interpretation of "Many young and most old people respect each other" is now given by F(Q)(young, old, rsp).
In the event that either Q1 or Q2 fails to be nondecreasing in the second argument, we must adopt Westerståhl's generic method for interpreting branching quantifiers. Let us now discuss how the method can be applied in the fuzzy case. Suppose that Q1, Q2 are arbitrary semi-fuzzy quantifiers of arity n = 2 on some base set E. Following Westerståhl, we introduce nondecreasing and nonincreasing approximations of the Qi's, defined by

    Qi^+(Y1, Y2) = sup{Qi(Y1, L) : L ⊆ Y2}    and    Qi^−(Y1, Y2) = sup{Qi(Y1, U) : U ⊇ Y2} ,

respectively. With the usual replacement of existential quantification with sup and conjunction with min, Westerståhl's interpretation formula [167, p. 281, Def. 3.1] becomes:
    Q(A, B, R) = sup{ min{Q1^+(A, U1), Q2^+(B, V1), Q1^−(A, U2), Q2^−(B, V2)} :
                      (U1 ∩ A) × (V1 ∩ B) ⊆ R ∩ (A × B) ⊆ (U2 ∩ A) × (V2 ∩ B) }

for all A, B ∈ P(E) and R ∈ P(E^2). Application of an L-DFS then determines the corresponding fuzzy L-quantifier F(Q) of type ⟨1, 1, 2⟩ suitable for interpretation. As shown by Westerståhl [167, p. 284], his method results in meaningful interpretations provided that (a) the Qi's are 'logical', i.e. Q1(Y1, Y2) and Q2(Y1, Y2) can be expressed as a function of |Y1| and |Y1 ∩ Y2|, see van Benthem [9, p. 446] and [10, p. 458]; and (b) the Qi's are convex in their second argument, or 'CONT' in Westerståhl's terminology, i.e. Qi(Y1, Y2) ≥ min(Qi(Y1, L), Qi(Y1, U)) for all L ⊆ Y2 ⊆ U according to Def. 6.8. The latter condition ensures that Q1 and Q2 can be recovered from their nondecreasing approximations Qi^+ and their nonincreasing approximations Qi^−, i.e. Qi = min(Qi^+, Qi^−). This is generally the case when Q1 and Q2 are either nondecreasing in their second argument ("many"), nonincreasing ("few"), or of unimodal shape ("about ten", "about one third"). An example of branching quantification with unimodal quantifiers, which demand the generic method, is "About fifty young and about sixty old persons respect each other".
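For finite base sets and crisp arguments, the monotone (Barwise-style) variant of this construction can be evaluated by brute force. The following sketch is illustrative only – exponential in |E|, with hypothetical quantifier definitions, and unrelated to the optimized algorithms of Chap. 11 – and spells out Q(A, B, R) = sup{min(Q1(A, U), Q2(B, V)) : U × V ⊆ R}.

```python
# Illustrative brute-force evaluation of the branching construction
# Q(A, B, R) = sup{ min(Q1(A, U), Q2(B, V)) : U x V subset of R }
# for crisp arguments over a small finite base set E.
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return (frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

def branching(Q1, Q2, E, A, B, R):
    best = 0.0
    for U in subsets(E):
        for V in subsets(E):
            if all((u, v) in R for u in U for v in V):      # U x V is a subset of R
                best = max(best, min(Q1(A, U), Q2(B, V)))
    return best

# Toy example with illustrative proportional quantifiers "many" and "most":
many = lambda Y1, Y2: len(Y1 & Y2) / len(Y1) if Y1 else 0.0
most = lambda Y1, Y2: 1.0 if Y1 and len(Y1 & Y2) / len(Y1) >= 0.8 else 0.0
E = {"m1", "m2", "w1", "w2"}
A, B = {"m1", "m2"}, {"w1", "w2"}
R = {("m1", "w1"), ("m1", "w2"), ("m2", "w1"), ("w1", "m1"), ("w2", "m1"), ("w1", "m2")}
print(branching(many, most, E, A, B, R))   # 0.5: half of the men pair up with most of the women
```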
12.8 Chapter Summary

Recognizing the potential of branching quantifiers for linguistic modelling, this chapter was concerned with a generalization of the quantification framework which incorporates these cases. The extension comprises fuzzy L-quantifiers (generalizations of Lindström quantifiers to approximate quantifiers and fuzzy arguments), semi-fuzzy L-quantifiers (uniform specifications of such quantifiers) and plausible models of fuzzy quantification involving these quantifiers, called L-DFSes. The criteria for plausible L-models of fuzzy quantification parallel the requirements on QFMs, and the rationale for these conditions is the same as in the case of ordinary QFMs. It has further been shown that the new L-DFSes are in one-to-one correspondence to the ordinary models. Thus (a) every plausible model of 'ordinary' fuzzy quantification (DFS) can be extended to a unique L-DFS for quantifiers with multiple variable binding; (b) no other L-DFSes exist beyond those obtained from (a). Westerståhl's analysis of branching NL quantification in terms of Lindström quantifiers is easily generalized to semi-fuzzy L-quantifiers and thus to branching Type III quantifications. By applying the chosen model of fuzzy quantification, one then obtains a meaningful interpretation for branching Type IV quantifications, i.e. both approximate quantifiers and fuzzy arguments can be handled. At this point, it should be remarked that Westerståhl also extends his method to other types of branching quantification in NL, which in general can span more than two quantifiers. However, the formulas he proposes to
cope with these cases can be adapted to our analysis in terms of (semi-)fuzzy L-quantifiers in total analogy to the example chosen for demonstration. Thus, the methods must be extended to semi-fuzzy L-quantifiers in the apparent ways; applying the L-QFM F then lets us fetch the final interpretation.
The identification of those L-DFSes specifically suited for modelling branching quantification in NL is an advanced topic that should be tackled by future research. A possible strengthening of the system based on quantifier nesting has already been sketched. The nesting construction is linked to branching quantification in the following way: In the motivating example of this chapter, the introduction of the complex quantifier Q of type ⟨1, 1, 2⟩ rests on the grouping of several (independent) quantifiers into a block of quantifiers, which is then treated as an integral, singular quantifier. Applying the same idea to a linear sequence of quantifiers takes us to the construction of nested quantifiers discussed in Sect. 12.6. Naturally, a model suited for branching quantification, which rests on the grouping of quantifiers into blocks, should also be compatible with such nestings of linear quantifiers. However, the resulting criterion appears to be extremely restrictive and it cannot be adopted without sacrificing useful models like M and Fowa. It is not even clear if the full compatibility with quantifier nestings is consistent with the basic axioms at all, and it might therefore be necessary to weaken the criterion to typical classes of linguistic quantifiers (e.g. quantitative). Obviously a more thorough discussion of quantifier nestings is necessary to clarify these issues.
The proposed analysis of reciprocal constructions in terms of fuzzy branching quantifiers is of particular relevance to linguistic data summarization [83, 182]. Many summarizers of interest express mutual (or symmetric) relationships and can therefore be verbalized by a reciprocal construction. (Asymmetrical relations R can also be used in reciprocal constructions after symmetrization, i.e. they must be replaced with R' = R ∩ R^op.) An ordinary summary like "Q1 X1's are strongly correlated with Q2 X2's" does not capture the symmetrical nature of correlations, and it neglects the resulting groups of mutually correlated objects. The proposed modelling in terms of branching quantifiers, by contrast, supports a novel type of linguistic summaries specifically suited for describing groups of interrelated objects. Branching quantification, in this sense, is a natural language technique for detecting such groups in the data. A possible summary involving a reciprocal predicate is "The intake of most vegetables and many health-related indicators are strongly associated with each other".
13 Discussion
13.1 A Novel Theory of Fuzzy Quantification

Natural language is pervaded by fuzziness, but the fuzziness of language does not normally cause problems in communication. There must be regularities underlying linguistic fuzziness which explain the apparent fact of systematical interpretation. This is the methodological assumption on which the research in this book was based. Fuzzy quantifiers, too, are frequently used in language and we normally understand them without difficulty. This suggests that the regularities of fuzzy quantification might be uncovered and made available for the automatic interpretation of these expressions in practical applications. The book has described the fundamentals of such a theory of fuzzy quantification. The novel analysis that it proposes should appeal both to the linguist and the fuzzy set theorist. The theory comprises the following components:
• a conceptual framework for analysing fuzzy quantification with the desired comprehensiveness and precision;
• a system of formal postulates which effectively characterize those models which are semantically plausible;
• a thorough evaluation of the class of these models under various semantical criteria;
• the description of concrete models derived from a given constructive principle, and the identification of models of special significance within the total class of such models;
• the development of efficient algorithms for implementing the main types of quantifiers in the dedicated models.
There are precursors to this work both in linguistics and fuzzy set theory. Linguists have developed the notion of a generalized quantifier [7,9,10,69,92,93], which covers a broad range of linguistic quantifiers. As opposed to the logical abstractions ∀ and ∃, the 'real' quantifiers found in NL are typically restricted by a qualifying argument (i.e. they can be multi-place); they are often not logical symbols; they are often not quantitative (i.e. expressible in terms of
cardinalities), and they are often not definable in first-order predicate logic. The generalized quantifiers of TGQ are sufficiently powerful to express such quantifiers. However, they are essentially limited to precise quantifications based on crisp arguments. Research in fuzzy set theory, on the other hand, has achieved a modelling of Type IV quantifications [108] where both the quantifier and its arguments can be fuzzy. Zadeh’s framework for analyzing fuzzy quantification [197, 199] is restricted to absolute and proportional quantifiers, though. Moreover there is negative evidence against the traditional approaches with regard to linguistic adequacy (see also discussion in Sects. 1.9.1 to 1.10, and the evaluation of existing approaches detailed in Appendix A). The new framework for fuzzy quantification described in this monograph was expected to establish a unifying perspective on both traditions of analyzing NL quantification which accounts for the linguistic considerations but also incorporates a fuzzy-sets based modelling of vagueness. In order to achieve the intended coverage of linguistic phenomena, we started from the linguistic analysis. The concept of a two-valued generalized quantifier known from TGQ was considered a suitable point of departure for developing the framework. The generalization to fuzzy quantifiers had to support gradual interpretations and fuzzy arguments in order to cope with arbitrary Type IV quantifications. We have adopted Zadeh’s view of fuzzy quantifiers as fuzzy second-order predicates to achieve this. However, the expressiveness of fuzzy quantifiers made it difficult to establish a suitable choice of fuzzy quantifier for a linguistic quantifier of interest. This is because fuzzy quantifiers must deal with fuzzy arguments, which forbids a simple definition of their input-output behaviour in terms of crisp cardinalities. In order to cope with this modelling problem of explaining the relationship between linguistic quantifiers and the available modelling devices, the specification of a quantifier was separated from its operational form used for interpretation. We therefore introduced simplified descriptions which only capture the essential aspects of a quantifier. This specification format makes it easy to define the quantifier of interest. It is the responsibility of fuzzy quantification theory to establish the matching operational quantifier for a given specification. Following this general strategy, we introduced the concept of a semi-fuzzy quantifier which offers a compact description of the linguistic quantifiers of interest. It thus plays a similar role as membership functions µQ in the earlier approaches. Semi-fuzzy quantifiers isolate the modelling of approximate quantifiers like “almost all” from the problem of dealing with fuzzy arguments, i.e. they support gradual interpretations but only accept crisp arguments. This simplification corresponds to the Type III quantifications of Liu and Kerre [108]. Still, semi-fuzzy quantifiers are sufficiently powerful to embed two-valued crisp generalized quantifiers known to the linguistic theory. Moreover, the phenomena of linguistic interest that were previously studied in TGQ can all be discussed on the level of semi-fuzzy quantifiers as well. Due
to the restriction to crisp arguments, it is usually easy to grasp the meaning of a given semi-fuzzy quantifier. In particular, semi-fuzzy quantifiers can be conveniently defined in terms of the usual cardinality because they only accept crisp arguments. For the same reason, however, semi-fuzzy quantifiers are essentially limited to their function as specifications because fuzzy arguments cannot be handled. By connecting the specification of a linguistic quantifier and its associated operational interpretation, the notion of a quantifier fuzzification mechanism (QFM) completes the new framework for fuzzy quantification. The structuring into specifications and interpretations, which is fundamental to the proposed framework, splits the modelling problem into application-dependent and independent parts. The domain-specific task consists in selecting the semi-fuzzy quantifier which best fits the meaning of the quantifier of interest in a given application context. The central problem of fuzzy quantification, which is application-independent, then consists in explaining how this description in terms of a semi-fuzzy quantifier should be extended to fuzzy arguments. The proposed QFMs account for this problem. By isolating the core competence of handling fuzzy arguments, part of the complexity of the modelling problem is effectively delegated to the QFM. This division of responsibilities makes it possible to analyse the essential aspects of fuzzy quantification without getting stuck in discussions of which particular interpretation should be assigned to a given linguistic quantifier. The particular choice of a semi-fuzzy quantifier suited for modelling a given linguistic quantifier can apparently depend on the context. To be sure, this is not really surprising because the precise meaning of quantifiers like "many" or "almost all" is obviously context-dependent as well and determined by contextual factors like an assumed standard of comparison. Thus, the problem of context dependence is not an artefact of the proposed modelling devices, but simply reflects a characteristic of language itself. In particular, the problem will not go away if we switch to the familiar µQ-based representations. The notion of semi-fuzzy quantifiers has been designed in such a way that all contextual factors fall into the realm of specification. Hence when describing the quantifiers of interest, the users must resolve these factors and commit to a single unambiguous interpretation. In this way, the context-dependent factors are effectively isolated from the core problems of fuzzy quantification, i.e. from those factors that can be subjected to a general solution. This organization of fuzzy quantification made it possible to develop the theory of fuzzy quantification independently of the intricacies of context dependence. From a practical perspective, though, it makes sense to assist the user or application designer who must decide on the concrete interpretation of these context-dependent quantifiers in terms of a membership function. This problem of determining a suitable choice of quantifiers can be considered as a special case of knowledge acquisition. There are several techniques for determining membership grades of fuzzy sets which might also be used to construct membership functions of fuzzy quantifiers. For a survey of these techniques,
see Klir and Yuan [99, Chap. 10], who also give pointers into the specialized literature. In the future, it might be possible to integrate some experimental findings on the effects of external factors like size or shape on the denotation of quantifiers which depend on the context [121, 122]. However, these ideas have not yet reached practical significance. The proposed QFMs merely serve as a placeholder which lets us talk about candidate models of fuzzy quantification on a formal level. They do not account for any considerations regarding the linguistic quality of interpretations. It is the responsibility of a theory of fuzzy quantification to identify those models of fuzzy quantification which best answer the linguistic expectations. To achieve this, it was necessary to develop formal criteria which ensure that the QFM preserve the main characteristics of quantifiers as well as relationships between quantifiers connected by linguistic constructions like dual or antonym. This can be likened to the familiar mathematical concept of a homomorphism, i.e. of a structure-preserving mapping compatible with a number of relevant constructions. Some of the concepts needed to express these regularities were already known from TGQ, like the above constructions of antonym and dual which only had to be generalized to the fuzzy case. However, some new concepts were also needed which account for cross-domain and cross-arity coherence. These requirements then comprise a catalogue of formal criteria against which every model of interest can be checked. In order to avoid unnecessary effort in proofs, such a catalogue should describe the target class of models in terms of a minimal (i.e. irreducible) axiom system. This will ensure that the axioms capture independent aspects of linguistic adequacy. In Chap. 3, we presented a complete system of such criteria which also exhibits the desired minimality. The semantical postulates (Z-1) to (Z-6) account for a number of very elementary requirements on plausible and coherent interpretations. The first criterion, (Z-1), requires the models to correctly generalize the original specification for a special case of quantifiers. The second criterion (Z-2) asserts that the models be compatible with membership assessments. Based on a canonical construction of induced fuzzy connectives, (Z-3) requires the compatibility of the interpretations with dualisation. The criterion (Z-4), in turn, makes sure that the models comply with unions of arguments. The requirement (Z-5) accounts for the desired monotonicity of interpretations, which is enforced for a special case. Finally, (Z-6) links the interpretations obtained for different universes of quantification by requiring the compatibility of the model with the canonical choice of extension principle. Taken together, the axiom system establishes a type of plausible model for which we coined the term determiner fuzzification scheme, or DFS for short. The proposed class of models was then evaluated against a catalog of additional semantic desiderata that can legitimately be expected of interpretations of fuzzy quantifiers. First of all, we were concerned with the induced choice of fuzzy connectives, which was shown to be plausible in every DFS. We then considered various aspects of argument structure. It was shown that
every DFS is compatible with permutations of arguments, cylindrical extensions (augmentation with vacuous arguments), complementation (formation of antonyms), but also external negation and the construction of duals. In addition, it admits intersections of arguments, their symmetrical difference, the insertion of crisp arguments and hence, crisp adjectival restriction. The models further preserve monotonicity properties of quantifiers, including 'local' monotonicity for restricted ranges of arguments. The induced extension principle as well as the induced fuzzy inverse images are also reasonable. The models will interpret both quantitative and non-quantitative cases of quantifiers in a plausible way, and they preserve the property of 'having extension' which is shared by most linguistic quantifiers. In addition, the models are 'contextual', i.e. insensitive to the behaviour of a quantifier outside the context given by the fuzzy arguments. Finally, the standard quantifiers are interpreted in a plausible way, judging from Thiele's characterization of fuzzy universal and existential quantifiers in terms of T- and S-quantifiers [158]. This explicit evaluation shows that the proposed models are justified from a linguistic perspective and internally consistent. It should be pointed out that the catalog of semantic desiderata can also be applied to other kinds of models. For example, the criteria made it possible to evaluate the existing approaches to fuzzy quantification – in this case with negative results (see Appendix A). This demonstrates that the formalization of desiderata for plausible interpretations is an important research topic in its own right. It is also instructive to elucidate the structure of natural subclasses. We therefore considered collections of models grouped by some common property. The relative homogeneity of members in such a subclass made it possible to define interesting constructions. First of all the models were grouped by their induced negation. It was then shown that without loss of generality, one can focus on those models which induce the standard negation. This result was established by means of a model translation scheme which permits a bidirectional transformation between models. Now grouping these ¬̃-DFSes by their induced disjunction, we obtained the class of ∨̃-DFSes. These classes are sufficiently homogeneous to permit the aggregation of models. In particular, the classes are closed under symmetric sums, convex combinations and similar averaging operators. In order to gather more knowledge on the structure of the classes, we investigated the natural specificity order on these models. Due to the symmetry of a DFS with respect to negation, it is not possible to use the linear order ≤, i.e. there is nothing like a 'least' or 'greatest' ∨̃-DFS. However, the models are partially ordered by specificity, i.e. under the usual fuzziness order introduced by Mukaidono [118]. It was shown that for non-empty collections of ∨̃-DFSes, there is always a greatest lower bound in terms of specificity which can be expressed in terms of the generalized fuzzy median. We further discussed the criterion of specificity consistency. It was shown that a non-empty collection of ∨̃-DFSes has a least upper bound in terms of specificity exactly if it is specificity consistent. The resulting model can again be given an explicit description.
Finally we considered the class of standard DFSes which induce the usual set of fuzzy connectives. The book has presented an axiomatic description of these standard models of fuzzy quantification. Some results on the characteristics of these standard models were also reported. For example, it was shown that for two-valued quantifiers, all standard models coincide with the fuzzification mechanism proposed by Gaines [49]. Little is known about non-standard models which depend on a different choice of fuzzy connectives. In order to maintain the possibility of such non-standard models, it was necessary to express the criteria (Z-1)–(Z-5) for plausible models in terms of the canonical construction of induced connectives (rather than using the standard choices min, max and 1 − x). For similar reasons, the postulate (Z-6) avoids the use of the standard extension principle, in favour of the natural extension principle induced by the given QFM. However, the research into these general (non-standard) models is still in its infant stage. It is an open problem whether models exist which induce a conjunction or disjunction different from the standard choices, and future research should be directed at this issue. We also considered a few special characteristics of linguistic or practical interest which should not be expected of arbitrary models. First of all, we formalized two distinct aspects of continuity which ensure a certain stability of quantifications against changes in the quantifier and in its arguments. It further seems natural to assume that plausible models will 'propagate fuzziness', i.e. less specific input (quantifier or arguments) should not result in more specific interpretations. A suitable notion of specificity for quantifiers and their arguments was again given by Mukaidono's fuzziness order. Moreover we investigated the theoretical limits on the total catalogue of semantical properties that can be verified by a QFM. Conjunctions and disjunctions of quantifiers, as well as the identification of several variables, are examples of constructions with which a QFM cannot be fully compatible even under conditions much weaker than the DFS axioms. The existence of such critical cases is not surprising, though, due to the known impossibility of satisfying all axioms of Boolean algebra in the continuous-valued case. It would be interesting to examine whether the identification of variables will result in more specific results under the fuzziness order ⪯_c. For example, we would expect that F(Q)(X, X) ⪯_c F(Q')(X) if Q'(Y) = Q(Y, Y) for all Y ∈ P(E). This kind of behaviour appears natural if a model propagates fuzziness, but a general result has not been proven yet. Two other semantical criteria must also be formulated very carefully, notably the preservation of convexity (which applies e.g. to quantifiers of trapezoidal shape) and the preservation of conservativity. In both cases, the original 'naive' rendering of the criterion for fuzzy sets turned out too strong. However, it was possible to give weaker definitions consistent with the core axioms which still cover most cases of linguistic interest. The 'realistic' reformulation of weak conservativity is validated by all of the plausible models, while the weak
preservation of convexity is at least achieved by special models like MCX. Finally we discussed the construction of fuzzy argument insertion, which is necessary for a compositional treatment of quantifiers like "many young" in "Many young A's are B's". As shown in Chap. 7, there is only one standard model, MCX, which is compatible with this linguistic construction.
The investigation into the characteristics of DFSes revealed how the plausible models behave – but it does not yet show what actual examples of such models look like. In order to populate the abstract space of admissible interpretations with concrete models, the book also covered the issue of developing prototypical examples. The basic idea was that of investigating general constructions which can express an interesting range of models. Examples of such constructive principles that come to mind are α-cutting and the resolution principle. However, these constructions cannot express the intended models due to their lack of symmetry with respect to negation. The novel concept of a three-valued cut exhibits the desired symmetry. The three-valued subsets obtained from these cuts are in turn expanded into a range of crisp sets. By considering all choices of arguments in these cut ranges, we obtained the sets S_{Q,X1,...,Xn}(γ) of supervaluation alternatives (i.e. possible quantifications) at the cut-level γ ∈ I. Such a set of supervaluation alternatives must then be aggregated into the final quantification result. In the monograph, this basic approach was used to define three classes of constructive models. The proposal of MB-DFSes rests on a simple aggregation in terms of the generalized fuzzy median m_{1/2}. The median-based aggregation is motivated by a principle of 'cautiousness' or 'least specificity' and thus generalizes the basic strategy of supervaluationism [90, p. 24]. The median-based method determines a class of pretty regular models with appealing formal characteristics. For example, less specific input cannot result in more specific outputs in these models, i.e. every MB-DFS is known to propagate fuzziness. Moreover the class contains a model with exceptional semantic properties. The formal analysis of MCX revealed its compatibility with fuzzy argument insertion and fuzzy adjectival restriction, the weak preservation of convexity and other properties of linguistic relevance. The model was further shown to consistently generalize the Sugeno integral and thus the basic FG-count approach. The second class of Fξ-models replaces the median-based aggregation scheme with a more general construction based on upper and lower bounds ⊤ = sup S_{Q,X1,...,Xn} and ⊥ = inf S_{Q,X1,...,Xn} on the supervaluation alternatives in S_{Q,X1,...,Xn}. The resulting models, which include the MB-type, are generally less regular. For example, they need not propagate fuzziness, i.e. less specific input can result in more specific interpretations. The class contains a model Fowa of particular interest which consistently generalizes the Choquet integral and hence the core OWA approach to a general DFS. Finally we discussed an even broader class of FΩ-models which admits arbitrary constructions that operate on S_{Q,X1,...,Xn}. The class is of minor practical relevance because all of its 'robust' members already belong to the
Fξ-type. However, these FΩ-models coincide with the Fψ-DFSes, i.e. with the natural class of models defined in terms of the standard extension principle. In other words, the most general construction of models that was described in this book is mainly of theoretical interest. It is an issue of future research to relate the FΩ/Fψ-models to the full class of standard DFSes, and to develop even more general constructions if necessary to catch hold of arbitrary standard models. Moreover, it would be important to develop constructions which give us a grip on non-standard models as well. This might provide a positive answer to the question of whether non-standard models exist at all. In fact, Díaz-Hermida et al. [31] have described a probabilistic model of fuzzy quantification which satisfies the axiomatic requirements; the model is interesting in this context because it induces the algebraic product rather than the standard fuzzy conjunction. The method is only defined for finite base sets, though, and does not qualify as a non-standard DFS for that reason. It might be useful to introduce a corresponding notion of QFMs and DFSes restricted to finite universes, in particular to investigate the possible junctures of fuzzy quantification and probability theory.
For practical purposes it is also important to identify those examples of the known models which are likely most useful in applications. The models MCX and Fowa as well as a third example M, which is some kind of compromise between MCX and Fowa, were chosen as prototypes which serve as the candidates for practical implementation. The level of abstraction achieved by the theoretical skeleton and the clear division of responsibilities, i.e. specification versus interpretation and model versus implementation, fostered the rapid development of fuzzy quantification theory. The strict separation of theoretical analysis and computational aspects made it possible to resolve the semantical issues in a more elegant solution compared to previous approaches. Moreover, it became possible to identify broad classes of prototypical models without getting involved in the practical concern of how these models can eventually be implemented. But of course, the new 'nice' models will not be useful in practice unless we can apply them to compute real quantification results. In order to provide the theory with sufficient computational backing, we developed efficient algorithms for implementing the main types of quantifiers in the new models. To this end, it was first shown that for finite E, the computation of quantification results in Fξ-models can always be based on quantities ⊤_j, ⊥_j ∈ I obtained from a finite sample of cut levels γ_j, i.e. there is no need to consider all choices of γ in the continuous range [0, 1]. We then presented an analysis of 'quantitative' quantifiers in terms of cardinality information sampled from the arguments, and went on to explain how the computation of ⊤_j and ⊥_j can be optimized based on this analysis. Finally, it was shown how the required cardinality information can be computed efficiently from histograms of the involved fuzzy arguments and their Boolean combinations. We then presented concrete algorithms for implementing the main types of quantifiers in the prototypical models M, MCX and Fowa. These algorithms
are assembled from the above building blocks (like the histogram-based computation of the cardinality coefficients) and thus demonstrate the practical utility of this analysis for implementing quantifiers in Fξ-models. The issue of which of these computational models should be preferred in a given application should now be decided by practical experiments. In particular, it will be instructive to compare MCX, which generalizes the Sugeno integral, and Fowa, which abstracts from the Choquet integral, and to gather some evidence on which model is perceived as more natural by the users.
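The supervaluation step underlying these constructive models can be pictured operationally. The following sketch is illustrative only: it takes the cut ranges of the arguments as given (it does not reproduce the book's definition of three-valued cuts), uses hypothetical names, and is exponential rather than one of the optimized algorithms of Chap. 11; it merely shows how the alternatives in S_{Q,X1,...,Xn}(γ) yield the bounds ⊤ and ⊥ at a cut level.

```python
# Illustrative sketch: given, for each argument, a crisp 'lower' set and 'upper'
# set describing its cut range at some level gamma, enumerate the supervaluation
# alternatives and take their supremum and infimum.
from itertools import combinations

def range_sets(lower, upper):
    """All crisp sets Y with lower <= Y <= upper."""
    undecided = list(upper - lower)
    for r in range(len(undecided) + 1):
        for extra in combinations(undecided, r):
            yield frozenset(lower) | frozenset(extra)

def bounds(Q, cut_ranges):
    """Supremum and infimum of Q over all argument choices in the cut ranges."""
    def rec(i, chosen):
        if i == len(cut_ranges):
            yield Q(*chosen)
            return
        lower, upper = cut_ranges[i]
        for Y in range_sets(lower, upper):
            yield from rec(i + 1, chosen + [Y])
    values = list(rec(0, []))
    return max(values), min(values)   # (top, bottom) at this cut level

# Example: "at least two" with one argument whose cut range is {a} <= Y <= {a, b, c}.
at_least_two = lambda Y: 1.0 if len(Y) >= 2 else 0.0
top, bottom = bounds(at_least_two, [({"a"}, {"a", "b", "c"})])
print(top, bottom)   # 1.0 0.0: the outcome is still undecided at this cut level
```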
13.2 Comparison to Earlier Work on Fuzzy Quantification

Let us now compare the novel theory of fuzzy quantification to existing work in order to highlight the main contributions of this monograph. One of the main reasons for introducing the new framework was the limited coverage of linguistic phenomena inherent to Zadeh's analysis. The comprehensive classification of natural language quantifiers presented by Keenan and Stavi [92, pp. 253–256] distinguishes sixteen main classes of quantifiers (including quantifiers of exception like "all except one", definites like "the ten", bounding quantifiers like "only" etc). All of these heterogeneous quantifiers are covered by TGQ (assuming two-valued quantifications), while the traditional framework is essentially limited to the absolute and proportional kinds. The proposed theory of fuzzy quantification embeds all of these cases and demonstrates how the basic approach of TGQ can be generalized to arbitrary Type IV quantifications. It must be pointed out, though, that the absolute and proportional quantifiers are indeed cases of special relevance. The list of main types further includes quantifiers of exception and cardinal comparatives, which are also significant from the linguistic perspective and of obvious utility to applications. These are the quantifiers covered by the algorithms presented in Chap. 11. Further types of quantifiers, like proportional comparatives, are described in the literature on TGQ, cf. Keenan and Moss [91], Keenan and Stavi [92], or Keenan and Westerståhl [93]. An example is "a greater proportion of French students than workers" [91, p. 78]. The classification of semi-fuzzy quantifiers into categories of linguistic and practical value can be developed in total analogy to the existing classification of TGQ for precise quantifications; see Díaz-Hermida et al. [30] for a description of several interesting classes. An implementation of these new kinds of quantifiers in the prototypical models is straightforward from the general analysis presented in Chap. 11.
As opposed to the novel framework, existing approaches are closely intertwined with the specifics of absolute and proportional quantifiers. Zadeh's framework does not introduce uniform specifications suited for all quantifiers, but rather stipulates different kinds of membership functions µQ : R+ −→ I
vs. µQ : I −→ I for representing the absolute and proportional types. These first-order representations express the shape of the quantifier, but they do not specify the precise way in which the arguments of the quantifier enter into the quantification results. The responsibility to account for argument structure is therefore delegated to the model of quantification: For each type of quantifier, the model must be equipped with a corresponding interpretation method which knows how to combine the arguments in order to compute the quantification result. Consequently, there will be one rule for two-place proportional quantifiers, another rule for unary absolute quantifiers etc. This impairs extensibility because one needs to introduce one specification format and corresponding interpretation rule per quantifier type. The splitting of interpretation rules also makes it difficult to ensure internal coherence. To sum up, the lack of a uniform representation obstructs the formalization of universal models which are capable of interpreting arbitrary quantifiers. As witnessed by the proposed representation in terms of semi-fuzzy quantifiers, these deficiencies of the traditional framework can easily be avoided if (a) we use a uniform specification format for describing quantifiers; and (b) if irregular details like the specifics of argument structure are encoded in these specifications. Based on the uniform representation, it then becomes possible to formulate universal models which assign interpretations to such specifications. Without any need to introduce new specifications or extend the interpretation rules, we can then support unanticipated cases of quantifiers even if these do not match the known patterns of argument structure. Obviously the uniform representation also facilitates the development of models which show the desired coherence across quantifier types.
The limited coverage of the traditional analysis becomes visible, in particular, in its poor support for multi-place quantification. Apart from unary and binary quantifiers, which correspond to unrestricted vs. restricted quantification, one also finds ternary quantifiers in natural languages. An example is "more than" in the quantifying proposition "More men than women are smokers". Such quantifications cannot be interpreted in terms of the absolute and proportional quantifiers supported by the traditional models, simply because "more than" and similar cases are irreducible to two-place quantifiers, as shown by Hamm [69, pp. 23+]. We already mentioned proportional comparatives which can depend on four arguments. And compound quantifiers like "Most Y1's or Y2's are Y3's and Y4's" can involve a potentially unbounded number of arguments. Even if one does not view such constructions as quantifiers proper, a framework for fuzzy quantification artificially restricted to quantifiers of arities n ≤ 4 would likely not be any simpler than a solution for arbitrary n ∈ N. Consequently, the novel framework was designed for general multi-place quantifiers. A model of quantification in this general sense cannot be tailored to special quantifiers like the absolute or proportional kinds. This means that we had to give up Zadeh's idea of using some measure of absolute or relative cardinality to evaluate quantifiers, and we could not use specialized concepts like the
13.2 Comparison to Earlier Work on Fuzzy Quantification
385
‘degree of orness’ either [165, 179, 183]. By contrast, such models may only be concerned with the effect of fuzzy arguments on the interpretation of quantifying expressions. In this way, it becomes possible to support quantifiers with an arbitrary number of arguments. Because multi-place quantification includes restricted proportional quantification as a special case, the notorious problem of proportional quantification with importance qualification will has been solved as a by-product of treating all n-place quantifiers. To existing approaches, however, the complexity of restricted quantification was prohibitive (see counterexamples in Chap. 1). An extension of these approaches to ternary quantifiers was suggested by Zadeh, who mentions the “third kind” of quantifiers in [197, p. 149] and [199, p. 757]. However, it appears that nobody pursued this direction further, probably due to the difficulties of existing approaches with importance qualification which involves only two (rather than three or more) arguments. Existing approaches to fuzzy quantification usually assume that the base set be finite.1 It is true that the universe of quantification will typically be finite in practical applications. However, infinite base sets will be essential to the modelling of fuzzy mass quantification. The examples of quantifiers in this book usually involved count nouns (“men”, “patients”, “Swedes”,. . . ) and were thus concerned with concrete objects. Mass nouns like “water” or “wine”, however, call for a different kind of quantification which ranges over continuous, possibly atomless masses like “some water” or “much wine”. An adequate modelling of quantification over such masses will necessarily involve infinite sets (of which masses are a special case). The proposed notion of QFMs and the plausible models (DFSes) developed from it support infinite universes of quantification and thus provide a good starting point for approaching fuzzy mass quantification. However, it is likely necessary to add some more structure in order to deal with mass quantification. For example, it might be convenient to assume that E be a measurable space with a measure P defined on it. We can then define a proportional quantifier Q, intended to model “more than 30 percent” on E, by 1 : P (Y1 ∩ Y2 ) ≥ 0.3 P (Y1 ) Q(Y1 , Y2 ) = 0 : else assuming that Y1 , Y2 ⊆ E be measurable subsets. In the case that Y1 or Y2 is not measurable, it is most natural to declare Q(Y1 , Y2 ) undefined. This suggests that quantification over masses will involve an extension to partial (rather than total) semi-fuzzy quantifiers, and a suitable notion of QFM for mapping partial semi-fuzzy quantifiers to fuzzy quantifiers. Hence the support for quantifiers on infinite base sets, which is already available in the current 1
The Σ-Count approach can be generalised to measurable fuzzy subsets of sets of infinite cardinality by replacing summation with integration [193, p. 167]. However, as we have seen in Chap. 1, the Σ-count approach does not offer a suitable departure for developing plausible models of fuzzy NL quantification.
386
13 Discussion
proposal, is an important prerequisite of treating mass quantification but it needs to be enhanced with a model of partial quantifiers and basic concepts of measure theory. The novel theory also achieves an adequate treatment of quantifiers which lack automorphism invariance (or ‘quantitativity’). By Th. 4.43, every model of the DFS axioms maps quantitative semi-fuzzy quantifiers like “almost all” to quantitative fuzzy quantifiers. But the theorem also states that F maps non-quantitative semi-fuzzy quantifiers to corresponding nonquantitative fuzzy quantifiers. Some examples are proper names like john = πJohn , and non-quantitative compound quantifiers like “All X’s except the men are Y ’s” or “All male X’s are Y ’s”. Existing approaches to fuzzy quantification, by contrast, are defined in terms of absolute or relative cardinality measures for fuzzy sets, which must be invariant under automorphisms to capture the notion of ‘cardinality’. Consequently, the traditional models always result in quantitative quantifiers, i.e. they cannot model any non-quantitative cases. This failure of existing approaches gains some weight as soon as infinite base sets are admitted (e.g. in mass quantification). In this case most of the important quantifiers (including “most”, for example) become non-quantitative in the sense of automorphism invariance, see van Benthem [9, p.473+]. Taken together, these observations demonstrate that from the perspective of coverage, only the new proposal accounts for the linguistic data because it embeds the established Theory of Generalized Quantifiers. The traditional approaches to fuzzy quantification, however, are too narrow from a linguistic perspective and for structural reasons, they bear the risk of producing incoherent results. The coverage of the new theory with respect to quantificational phenomena compared to that of TGQ and that of existing approaches to fuzzy quantification is summarized in Fig. 13.1. With the notable exception of H. Thiele’s analysis of specialized classes of quantifiers [158,159], existing approaches to fuzzy quantification are generally based on interpretation rules the linguistic plausibility of which was ‘proven’ by presenting a few positive examples. The novel theory of fuzzy quantification, by contrast, rests on an axiomatic foundation. Considerable effort has been spent on the discovery and formalization of the intuitive requirements on plausible interpretations of quantifiers, and the resulting criteria will ensure meaningful quantification results in all conforming models. The coherence of interpretations can also be guaranteed across quantifier types. This is possible due to the use of uniform specifications and generic models which treat all quantifiers in basically the same way. Concerning specific constructions of linguistic relevance, the proposed models are fully compatible with Aristotelian squares, i.e. they admit the formation of external negation, antonyms, and duals. This property is offered by none of the earlier approaches. Furthermore, every DFS is compatible with the Piaget group of transformations, the significance of which stems from empirical findings in developmental psychology (see Dubois and Prade [41, p. 158+]). The models further preserve extension and thus reproduce the insensitivity of NL against the precise choice of do-
13.3 Future Perspectives
387
Fig. 13.1. Coverage of linguistic quantifiers: DFS (outer rectangle) vs. TGQ (left half) and existing approaches (circles)
main (see Th. 4.45). It should also be pointed out that by committing to a model of fuzzy quantification, a unique definition of all fuzzy connectives will be induced. For example, every DFS is compatible with exactly one choice of conjunction, disjunction, or implication – there is no degree of freedom left in the definition of these connectives. Every DFS further induces a unique choice of extension principle and of fuzzy inverse images. Thus, the proposed models are not only capable of interpreting arbitrary quantifiers in the sense of TGQ and their generalizations to Type IV quantifications. By fixing a coherent choice of fuzzy connectives and other basic constructs of fuzzy set theory, they also establish a consistent view of fuzzy set theory. The approaches described in the literature, by contrast, are essentially limited compared to TGQ and struggling with internal difficulties that stem from their lack of an axiomatic foundation.
13.3 Future Perspectives The proposed theoretical framework is much more complete and linguistically faithful compared to earlier work. However, there are also a few cases not yet treated into which future research efforts should be directed. We already mentioned the problem of quantification over masses (e.g. “much wine”), which suggests an extension of the basic framework by measure-theoretic notions.
388
13 Discussion
More attention should also be payed to fuzzy branching quantification. Some first steps towards the modelling of branching quantification have been made in Chap. 12 which extends the fuzzy sets framework by Lindstr¨ om quantifiers capable of binding several variables. However it is not yet clear how branching quantifications can be implemented efficiently in the proposed models. In addition a better understanding of these powerful quantifications must be gained and additional semantical criteria should be developed to control them. The postulate of compatibility with nestings of quantifiers is a natural choice, but it appears to be extremely restrictive (possibly even conflicting with the core axioms). This means that the criterion should be restricted to cases of special linguistic relevance if necessary, for example to quantifiers which are quantitative, conservative or monotonic. In order to achieve unrestricted compatibility with the nesting of quantifiers, it would also be possible to explore alternatives to the core axioms. Specifically, the following criterion of compatibility with inverse images might be useful in this context, viz n
n
F(Q ◦ × fi−1 ) = F(Q) ◦ × F−1 (fi ) i=1
i=1
n
for all semi-fuzzy quantifiers Q : P(E) −→ I and all mappings fi : E −→ E , i ∈ {1, . . . , n}, see Def. 4.63. Like ‘functional application’ (Z-6), this criterion will ensure the systematicity of F across domains. Thus, it would be instructive to investigate an alternative axiom system where (Z-6)is replaced by the novel postulate (and additional criteria if necessary). Another topic worth investigating is concerned with the epistemic difficulty of ascertaining the precise values of attributes like height, age, temperature etc. Rather than assuming exact knowledge of these attributes and thus infinite precision, a more realistic approach would admit imperfect knowledge of attribute values. The research into fuzzy quantification under these weaker assumptions has been initiated by H. Prade [126,127], who approaches the problem in the setting of possibility theory. It would likely be useful to incorporate similar ideas into the new framework and consider its possible extensions to interval-valued quantifications. To sum up, this book has presented the fundamentals of a theory of fuzzy quantification. Although a few special cases like intensional quantifiers had to be ignored to keep things manageable, the proposed analysis achieves a greater coverage than earlier work. In addition, it demonstrates a clear and rigorous treatment of its subject. The axiomatic procedure, i.e. the formalization of semantical postulates which constrain the admissible models, made it possible to investigate fuzzy quantification with unparalleled formal rigor. The resulting axioms (Z-1) to (Z-6) give us a grip on several dimensions of fuzzy quantification which are relevant with regard to linguistic plausibility. Future research into such criteria will elucidate more and more dimensions of fuzzy quantification in natural language. If the basic method proposed in this work will be pursued further, we might eventually be able to fully reproduce the semantics of natural language quantifiers. And, we will also gain some in-
13.3 Future Perspectives
389
sights into linguistic regularities which clarify the role of vagueness in human language and eventually, cognition. The basic analysis proposed in this book has already been developed into a computational theory of fuzzy quantification which is ready for practical application. The new algorithms avail us with a computational model of absolute and proportional quantifiers which better conforms to the linguistic expectations. They are easily substituted for the existing methods which proved to be unreliable. Furthermore, it now becomes possible to support new types of quantifiers like cardinal comparatives, which are also of significance to applications. Due to their axiomatic foundation, the new models of fuzzy quantification avoid the problems of earlier approches. The proposed catalogue of semantical postulates accounts for the essential dimensions of linguistic plausibility and coherence of interpretation. For the first time, it now becomes possible to build reliable applications based on fuzzy quantifiers. The applications that directly profit from the improved models include fuzzy querying interfaces for databases, fuzzy information retrieval and linguistic data summarization.
A An Evaluation Framework for Approaches to Fuzzy Quantification
A.1 Motivation and Chapter Overview Fuzzy quantifiers will not realize their full potential for applications unless the models used for interpretation are linguistically adequate and catch the intended meaning of these quantifiers. For example, if an operator is labelled ‘most’, it is essential for avoiding misunderstanding that it behave like the NL quantifier “most”. However, an evaluation of the existing approaches to fuzzy quantification from this linguistic viewpoint is somewhat difficult due to the different representations chosen by fuzzy set theory and linguistics, i.e. membership functions µQ vs. generalized quantifiers in the sense of TGQ. Accordingly, Zadeh [197] mentions TGQ in his seminal paper on fuzzy quantification, but he does not attempt any cross-comparison. In the remaining literature on fuzzy quantifiers, there are sporadic reports concerning linguistic plausibility [3, 42, 132, 134, 184]. However, there is no systematic evaluation of these approaches from the perspective of TGQ. Based on the framework for fuzzy quantification presented in the main part of this work, we will now conduct such an evaluation of the traditional approaches with respect to their linguistic plausibility. For that purpose, these approaches must first be fitted into the QFM framework. This will be accomplished by a canonical construction which associates with each ‘traditional’ approach a corresponding partial QFM. This translation will make it possible to apply the formal adequacy criteria developed for evaluating QFMs. The resulting evaluation framework will then be used to discuss the main approaches to fuzzy quantification. The central results of this evaluation have already been anticipated in Chap. 1. Here we will present these examples in full detail. In particular, the images shown in the introduction will now be analyzed in some more depth. Usually we will also describe paper-and-pencil examples which reproduce the critical behaviour for a handful of participating objects. All examples will be analyzed carefully in order to spot the causes of failure, which can usually be traced to a violation of the proposed adequacy criteria for QFMs. Because of its explanatory value, it is hoped that this I. Gl¨ ockner: Fuzzy Quantifiers, StudFuzz 193, 391–415 (2006) c Springer-Verlag Berlin Heidelberg 2006 www.springerlink.com
392
A An Evaluation Framework for Approaches to Fuzzy Quantification
analysis will also be of interest to those who take sides with the traditional view of fuzzy quantification. Note A.1. For the sake of readability, there will be an occasional repetition of concepts already mentioned in the introduction. In particular, the counterexamples will be developed from scratch and presented without any reference to their summaries in the first chapter.
A.2 The Evaluation Framework Let us now introduce the formal framework for evaluating the traditional approaches to fuzzy quantification. The framework is taylored to those models which rest on Zadeh’s representation of fuzzy linguistic quantifiers as fuzzy subsets of the non-negative reals or of the unit interval. In order to make a uniform framework possible, it is convenient to fit these approaches in the following general structure. The model of interest, which will be symbolized by Z, must be defined both for absolute quantifiers µQ : R+ −→ I (Zadeh’s quantifiers of the first kind) and relative/proportional quantifiers µQ : I −→ I (Zadeh’s quantifiers of the second kind).1 We will consider both the unrestricted use of the quantifier, where the quantification involves only one explicit and possibly fuzzy argument (“Most things are tall”), and the two-place use, where the quantification can be fuzzily restricted, as in “Most young are poor”. Labelling the absolute and proportional cases by abs and prp, respectively, and also marking the unrestriced and restricted uses by superscripts (1) and (2), a ‘full’ model Z of fuzzy quantification in Zadeh’s setting can then be specified by defining the following interpretation mechanisms for all choices of finite domains E: (1) (1) −→ I, for • Zabs , which maps µQ : R+ −→ I to Zabs (µQ ) : P(E) unrestricted absolute quantification; (2) (2) 2 −→ I, for • Zabs , which maps µQ : R+ −→ I to Zabs (µQ ) : P(E) restricted absolute quantification; (1) (1) −→ I, for • Zprp , which maps µQ : I −→ I to Zprp (µQ ) : P(E) unrestricted proportional quantification; (2) (2) 2 −→ I, for • Zprp , which maps µQ : I −→ I to Zprp (µQ ) : P(E) restricted proportional quantification.
modelling modelling modelling modelling
The approaches described in the literature do not always instantiate the complete scheme, and we will therefore allow that certain cases be omitted. In addition, the definition of the absolute restricted case, (2)
(1)
Zabs (µQ )(X1 , X2 ) = Zabs (µQ )(X1 ∩ X2 ) , 1
(A.1)
See Zadeh [197, p. 149] for the distinction of quantifiers of the first kind (based on absolute counts) and those of the second kind (based on relative counts).
A.2 The Evaluation Framework
393
is usually assumed without mention, and the two-place use of absolute quantifiers only shows up in examples which demonstrate the application of these approaches. Among other things, the above equation will reduce “About fifteen children cry” to “There are about fifteen crying children”, because (2)
(1)
Zabs (µabout 15 )(children, cry) = Zabs (µabout 15 )(children ∩ cry) . (1)
(2)
Finally, Zprp can be expressed in terms of Zprp whenever the latter mechanism is available, and then becomes (1) (2) (µQ )(X) = Zprp (µQ )(E, X) . Zprp
For example, “Most things are expensive” can be interpreted by (1) (2) Zprp (µmost )(expensive) = Zprp (µmost )(E, expensive) ,
where µmost (x) = 1 for x > 0.5 and 0 otherwise, and X ∈ expensive ∈ P(E). Now that we have a uniform notation for the approaches of interest, we can set up the evaluation framework. The problem is that the concepts developed in the main part of the book cannot directly be applied to existing approaches. This is because existing approaches are not formulated as quantifier fuzzification mechanisms, i.e. they apply to µQ ’s and not to semi-fuzzy quantifiers. We have to bridge the gap between semi-fuzzy quantifiers and fuzzy linguistic quantifiers in a systematic way. Hence let us recall the notion : P(E)n −→ I, which is nothing of an underlying semi-fuzzy quantifier U(Q) : P(E) n −→ I to crisp arbut the restriction of the given fuzzy quantifier Q guments (see Def. 2.11). Now let us consider one of the approaches Z based on fuzzy linguistic quantifiers. We cannot expect that Z give rise to a ‘full’ (totally defined) quantifier fuzzification mechanism. Compared to the scope of a QFM, which aims at NL quantification in general, the quantificational phenomena addressed by existing approaches are simply too limited, and cover only a fragment of the possible cases. In other words, Z is not sufficient to determine a total QFM defined for arbitrary quantifiers. However, it is often possible to reconstruct a partially defined quantifier fuzzification mechanism F based on Z as follows. Given the membership function µQ of a fuzzy linguistic quantifier, we first obtain the corresponding semi-fuzzy quantifier relative to Z as Q = U(Z(µQ )), and use this to define F(Q) = Z(µQ ). Obviously, the construction of F succeeds only if U(Z(µQ )) → Z(µQ ) is functional, but this is a plausible requirement anyway. It will be called the evaluation framework assumption (EFA). • The EFA is very closely related to the QFA, which underlies the quantification framework described in the main part of the book (see Sect. 2.6). of interest is To be specific, the QFA states that every base quantifier Q uniquely determined by its behaviour on crisp arguments. By contrast, = Z(µQ ) which result from the the EFA requires that those quantifiers Q
394
A An Evaluation Framework for Approaches to Fuzzy Quantification
considered interpretation mechanism Z are fully specified by their quantification results for crisp arguments. If we assume that the mechanism = Z(µQ ) are potential denotations Z is semantically plausible, i.e. all Q of NL quantifiers, then the EFA demands that the QFA be valid for all quantifiers in the responsibility of Z, and hence comes out as the QFA in disguise. • In case the EFA holds unconditionally for Z, we can use the constructed partial fuzzification mechanism F to check the preservation and homomorphism properties of interest. We only need to take care of the fact that we now have a partial QFM, rather than a totally defined QFM. In order to reject a criterion of interest, it is sufficient to spot a violation of the criterion by quantifiers for which F is defined. In order to establish the validity of a plausibility criterion, we must first extend its definition to partial QFMs. For example, a partial QFM F is said to preserve negation if F(Q) is defined exactly if F(¬Q) is defined; and in case both are defined, we have F(¬Q) = ¬F(Q). The required modifications to the other criteria are also apparent. When evaluating the existing approaches, we will not insist on the use of the induced connectives, because the literature on these approaches generally assumes the standard set of connectives. The use of the standard connectives further avoids the problem that the construction of induced connectives might no longer work when the EFA is violated. For this reason, we will generally assume the standard truth functions (negation 1−x, conjunction min, disjunction max etc.) in this chapter, and it is understood that all constructions on (semi-) fuzzy quantifiers like dualisation etc. be defined in terms of these connectives. • If the EFA is violated by Z, then the concepts developed for QFMs can still be useful to assess certain properties of Z. In this case, the EFA can always be enforced by restricting the set of considered membership functions µQ , i.e. by eliminating the conflicting cases. To see this, suppose E = ∅ is a n base set, Q : P(E) −→ I is a semi-fuzzy quantifier, and let us introduce the set of possible choices for the interpretation of Q, = Q ∧ exists µQ s.th. Q = Z(µQ )} . : P(E) n −→ I : U(Q) Ch(Q) = {Q If the EFA holds, then Ch(Q) is either empty, in which case F(Q) is in which case F(Q) is undefined, or it is a singleton set Ch(Q) = {Q}, defined and F(Q) = Q. If the EFA does not hold, then Ch(Q) has more than one element in certain cases. By reducing the set of considered µQ , we can always obtain a reduced system in which all Ch(Q) – now determined from the restricted set of µQ – are singleton or empty. We can hence view Ch(Q) as providing a number of choices for F(Q) ∈ Ch(Q). n We shall say that Z can represent a semi-fuzzy quantifier Q : P(E) −→ I if there exists some µQ such that Q = U(Z(µQ )). In order to reject the validity of a plausibility criterion, it is sufficient to prove that Z cannot represent Q without violating the property.
A.3 The Sigma-Count Approach
395
Now that the basic framework has been introduced, it will be applied in order to assess the plausibility of the main approaches to fuzzy quantification, viz the Σ-count, FG-count, OWA- and FE-count models. When presenting counter-examples in the evaluation to follow, we will generally prefer precise quantifiers like “all” or [rate ≥ r], because there are stronger intuitions about the intended behaviour of these quantifiers, and the claim that certain results are counter-intuitive should be uncontroversial. Of course, this does not mean that we are only interested in two-valued quantifiers.
A.3 The Sigma-Count Approach Let us first consider the Σ-count approach already mentioned in the introduction. The model rests on the notion of a Σ-count 2 [36], which is is defined as the sum of the membership values of a given fuzzy set X ∈ P(E) (E finite), thus
Σ-Count(X) = µX (e) . e∈E
The Σ-count is claimed to provide a (coarse) summary of the cardinality of the fuzzy set X, expressed as a non-negative real number. A corresponding scalar definition of fuzzy proportion, the relative Σ-count or relative cardinality [189], is defined by Σ-Count(X2 /X1 ) = Σ-Count(X1 ∩ X2 )/ Σ-Count(X1 ) . The Σ-count approach to fuzzy quantification, introduced by Zadeh [193, 197], uses Σ-Count(X) and Σ-Count(X2 /X1 ) to model the absolute and proportional kinds of fuzzy linguistic quantifiers. The two types of quantifiers are hence treated differently. As explained above, the unrestricted and restricted interpretations of both types of quantifiers must be discerned, i.e. quantification relative to the domain as a whole, or relative to an explicitly given, and possibly fuzzy, restriction. Referring to the notation for the mechanism Z = SC (‘Sigma Count’), the definition of the Σ-count approach can now be stated as: (1)
SCabs (µQ )(X) = µQ (Σ-Count(X)) (2)
(1)
SCabs (µQ )(X1 , X2 ) = SCabs (µQ )(X1 ∩ X2 ) (2) SC(1) prp (µQ )(X) = SCprp (µQ )(E, X)
SC(2) prp (µQ )(X1 , X2 ) = µQ (Σ-Count(X2 /X1 ))
2
also known as the ‘power’ of a fuzzy set
396
A An Evaluation Framework for Approaches to Fuzzy Quantification
Note A.2. The split definition in terms of four separate formulas simply instantiates the generic scheme for a given mechanism Z that was presented above. In particular, the ‘abs’-versions apply to the absolute kind where µQ : R+ −→ I, and the ‘prp’-versions to the proportional kind µQ : I −→ I. Furthermore, the superscripts (1) and (2) denote unrestricted and restricted quantification, respectively. Now that the definition of the Σ-count approach has been presented in the assumed format, we can set out to investigate the linguistic plausibility of the approach. To begin with, the Σ-count approach is somewhat peculiar because (unlike the other approaches), it does not comply with the EFA. This is most apparent with absolute fuzzy linguistic quantifiers µQ : R+ −→ I: The assumption is then violated by any pair of membership functions µQ , µQ : R+ −→ I with µQ |N = µQ |N but µQ = µQ . The trouble is that with absolute quantifiers, the Σ-count approach requires the specification of quantification result µQ (x) in the case that the computed Σ-count x is not a cardinal number. The decision on which quantification results to assign for such x affects the results obtained, but intuitions are scarce in this unfamiliar case. Next we will consider an example which recasts Yager’s observations [184, p.257] on counter-intuitive behaviour of the Σ-count approach. Hence suppose that E = {Hans, Maria, Anton} is a set of persons, and using Zadeh’s notation, that the fuzzy subset blond ∈ P(E) is defined by blond = 1 1 1 /Hans + /Maria + /Anton, i.e. all three are blond to a degree of 13 . Now 3 3 3 consider the quantifying statement “There is exactly one blond person”. Its (1) interpretation in the Σ-count approach is SCabs (µQ )(blond), where Q is a quantifier suited to model (the unrestricted use of) “exactly one”. In order for the condition of correct generalisation to be respected, we are forced to have µQ (1) = 1. It follows that the above statement evaluates to 1 (fully true), although there is clearly not exactly one blond person in the base set (which one should that be?) but rather a total amount of blondness of one, as one might say, i.e. nothing of linguistic significance.3 A similar example from the domain of meteorological images already mentioned in the introduction, is presented in Fig. A.1. In this case, the Σ-count approach is used to evaluate the condition that “About ten percent of Southern Germany are cloudy”; and it is assumed that µQ : I −→ I is chosen such that, say, µQ (x) = 1 in the range x ∈ [0.08, 0.12], and that µQ decays to zero outside this range (the precise shape of µQ in latter case is inessential to the example). As shown in Fig. A.1, both cloudiness situations (a) and (b) are considered cloudy to a degree of one (fully true) if the Σ-count approach is used to evaluate “About 10 percent of Southern Germany are cloudy”. While a result of 1 is plausible in case (a), this is not the case in situation (b), in 3
Zadeh [197] proposes to apply some threshold on X in these cases, in order to prevent accumulative effects of small membership grades, but this is clearly an ad-hoc device because there is no obvious way in which any particular choice of threshold can be justified.
A.3 The Sigma-Count Approach
Southern Germany (a) SC: 1 (OK)
397
(b) SC: 1 (implausible)
Fig. A.1. “About 10 percent of Southern Germany are cloudy” (Sigma-Count)
which all of Southern Germany is cloudy to a very low degree (viz. one tenth), which certainly does not mean that one tenth of Southern Germany is cloudy. ‘Trivial’ or ‘degenerate’ cases often require particular attention. One such case is that of a quantifier supplied with an argument tuple of empty sets. Let us now demonstrate that the Σ-count approach does not treat this case consistently. To see this, let us choose 0 < r2 < r1 < 1 and consider the two-valued 2 proportional quantifiers Q1 = [rate ≥ r1 ], Q2 = [rate > r2 ] : P(E) −→ 2 (see Def. 2.5), which have Q1 (∅, ∅) = 1, but Q2 (∅, ∅) = 0. The problem is that Zadeh does not specify the denotation of Σ-Count(∅/∅). So let us assume that Σ-Count(∅/∅) = c ∈ I; further suppose that µQ1 , µQ2 : I −→ I are used to model Q1 and Q2 , respectively. By not satisfying the EFA, the Σ-count approach leaves us with some choices which µQ1 , µQ2 to use as a model of Q1 and Q2 . However, every reasonable choice of µQ1 should have µQ1 (x) = 1 only if x ≥ r1 , and µQ2 (x) = 0 only if x ≤ r2 .4 Correct generalization then demands that µQ1 (c) = 1, i.e. c ≥ r1 , but also that µQ2 (c) = 0, i.e. c ≤ r2 , which contradicts the assumption that r2 < r1 . Let us further observe that the Σ-count approach yields potentially satisfying results only if µQ is genuinely fuzzy, because a two-valued quantifier (with corresponding two-valued membership function µQ ) is mapped to a : P(E) n −→ 2, the results of which are always crisp.5 One fuzzy quantifier Q 2 might object that, although a two-valued quantifier Q : P(E) −→ 2 is to be modelled, an adequate choice of µQ should be continuous-valued. For example, if E = {a, b}, can’t we model the universal quantifier ∀ by 0 : x ≤ 12 µQ (x) = 2x − 1 : x > 12 rather than using the two-valued membership function µ∀ : I −→ I, 4
5
it will be shown below that in order for the Σ-count approach to preserve extension, these conditions on µQ1 (x) and µQ2 (x) must hold for all x ∈ I ∩ Q. If we also require that monotonicity properties be preserved, this result extends to the full range x ∈ I. This problem has been obscured by Zadeh’s use of the quantifier most, which he views as being genuinely fuzzy.
398
A An Evaluation Framework for Approaches to Fuzzy Quantification
µ∀ (x) =
0 1
: x<1 : x=1
which results in a crisp operator when applying SC(1) prp ? To see that this objection is invalid, let us assume that the two-valued quantifier Q to be ‘fuzzified’ is of the proportional type; we will utilize the fact that such quantifiers have extension.6 Now suppose that µQ : I −→ I is a proper choice for interpreting Q, and q ∈ Q ∩ I is some rational number in I. Firstly, we can choose such that Σ-Count(X2 /X1 ) = q. Because q is rational and X1 , X2 ∈ P(E) nonnegative, there exist z, m ∈ N s.th. q = z/m; we may also require that m ≥ |E|. We extend E by arbitrary elements to some superset E ⊇ E with |E | = m, and choose an arbitrary crisp subset Z ∈ P(E ) with |Z| = z. Correct generalisation then yields µQE (q) = µQE (Σ-Count(Z/E)) = QE (E, Z) ∈ 2 . In order to avoid a violation of the semantic requirement of extensionality (see Def. 4.44), which would disqualify the Σ-count approach anyway, we are then justified to assume that µQE (Σ-Count(X2 /X1 )) = µQE (Σ-Count(X2 /X1 )) = µQE (q) ∈ 2 . The membership functions suited for modelling two-valued proportional quantifiers are therefore restricted to {0, 1} on I ∩ Q. The option of selecting intermediate membership grades in the open interval (0, 1) for the remaining ‘definition gaps’ on I \ Q has few practical relevance. In particular, the Σcount approach produces two-valued results, and thus cannot determine useful gradual evaluations, in the case of frequently used quantifiers like “all” and [rate ≥ r]. This peculiarity of the Σ-count approach is undesirable in at least two ways. Firstly, there is a well-known relationship between conjunction and universal quantification, which we already mentioned in Sect. 4.16: a conjunction c1 ∧ · · · ∧ cm of m two-valued criteria c1 , . . . , cm ∈ 2 corresponds to the quantified statement “All criteria c1 , . . . , cm are true”, i.e. c1 ∧ · · · ∧ cm = ∀E (C) where E = {1, . . . , m} and C = {j : cj = 1}. We should expect this relationship between conjunction and universal quantification to be preserved in the fuzzy case. (As it is in all models of DFS theory – see theorem Th. 4.55). The point is that µ∀ : I −→ I, defined by µ∀ (1) = 1, µ∀ (x) = 0 otherwise, results in 1 : X=E SC(1) (µ )(X) = prp ∀ 0 : else 6
i.e. f , v0 in Def. 11.51 can be chosen independently of E. This is apparent from equation (4.2).
A.3 The Sigma-Count Approach
399
In particular, if E = {1, 2}, we obtain the induced conjunction operator c : I × I −→ I defined by c(x1 , x2 ) = SC(1) prp (µ∀ )(X), where X = x1 /1 + x2 /2, i.e. c(x1 , x2 ) =
1 : 0 :
c1 = c2 = 1 else
This is certainly not a reasonable fuzzy conjunction operator (in particular, it is not a t-norm). Apart from this observation that the Σ-count approach assigns an implausible interpretation to the universal quantifier, there is another undesirable consequence of the fact that two-valued µQ ’s are mapped to fuzzy quantifiers SC(µQ ) which always return crisp results: such quantifiers are discontinuous, i.e. very slight changes in the membership grades of their arguments can drastically change the quantification result. More generally, we observe that whenever µQ is discontinuous, then the corresponding fuzzy quantifier under the Σ-count approach is also a discontinuous function of the membership grades of its argument sets. In practical applications, there is almost always some amount of noise (e.g. due to the finite precision of floating point operations), which can have drastic effects when using the Σ-count approach for modelling this kind of quantifiers. An example which demonstrates this sensitivity to noise is presented in Fig. A.2. In this case, image (b) is a slightly modified version of image (a) – all pixels with a cloudiness grade of 0 have been set to a slightly higher grade. Although the difference is small and hardly perceptible, the Σ-count approach ‘jumps’ from 0 to 1 when we move from the cloudiness situation (a) to the slightly modified situation depicted in (b).
Southern Germany (a) Result: 0
(b) Result: 1
Fig. A.2. “At least 60 percent of Southern Germany are cloudy” (Sigma-Count)
Finally, the Σ-count is not compatible with antonyms, as pointed out by Zadeh [197, p. 167]. Noticing that the Σ-count method admits external negation, we conclude that the approach is also not compatible with dualisation (which can be decomposed into the antonym of the negation).
400
A An Evaluation Framework for Approaches to Fuzzy Quantification
A.4 The OWA Approach In this section, we shall review Yager’s [179, 184] approach to fuzzy quantification, which is named after its use of ordered weighted averaging (OWA) operators. Only the proportional type µQ : I −→ I of fuzzy linguistic quantifiers is considered. In addition, µQ is assumed to be regular nondecreasing, i.e. µQ (0) = 0, µQ (1) = 1, and µQ (x) ≤ µQ (y) for all x, y ∈ I such that x ≤ y. In order to define the OWA approach, some more notation must be introduced. Given a finite base set E = ∅, m = |E| and µQ : I −→ I, let us define µQ,E : {0, . . . , m} −→ I by j µQ,E (j) = µQ ( m )
for j = 0, . . . , m. Let us also recall the coefficient µ[j] (X) ∈ I introduced in Def. 7.91, which denotes the j-th largest membership value of X (including duplicates). We can then define the OWA approach for unrestricted proportional quantification by OWA(1) prp (µQ )(X) =
m
(µQ,E (j) − µQ,E (j − 1)) · µ[j] (X) ,
j=1
where µQ : I −→ I is regular nondecreasing, and X ∈ P(E). In order to model two-place quantifiers, Yager introduces a weighting formula parametrised by the so-called degree of orness of the considered operator, which is defined by 1
(m − j)(µQ,E (j) − µQ,E (j − 1)) m − 1 j=1 m
orness(µQ,E ) =
(A.2)
(adapted to our notation). In other words, m−1 1
µQ,E (j) . orness(µQ,E ) = m − 1 j=1
In addition, a weight function gα : I × I −→ I is needed, which depends on the degree of orness α = orness(µQ,E ) of the given µQ . This weight function is defined by gα (x1 , x2 ) = (x1 ∨ (1 − α)) · x2 x1 ∨α , for all x1 , x2 ∈ I.7 The weighting function must now be extended to fuzzy we define gα (X1 , X2 ) ∈ P(E) pointarguments: for fuzzy sets X1 , X2 ∈ P(E), wise by µgα (X1 ,X2 ) (e) = gα (µX1 (e), µX2 (e)) , 7
We shall assume that 00 = 1, i.e. g0 (0, 0) = 1.
A.4 The OWA Approach
401
for all e ∈ E. Based on these concepts, we can now succinctly state the formula for two-place quantification in the OWA framework proposed in [179]. The 2 fuzzy quantifier OWA(2) prp (µQ ) : P(E) −→ I determined by the OWA approach can then be expressed as (1) OWA(2) prp (µQ ) = OWAprp (µQ )(gα (X1 , X2 )) ,
where µQ is regular nondecreasing and α = orness(µQ,E ). for all X1 , X2 ∈ P(E), Let us also observe that the degree of orness is undefined when the cardinality of the base set is |E| = 1. Consequently the OWA approach is unable to provide an interpretation of two-place quantifiers in the case of singleton base sets. On the positive side, it is easily observed that the OWA approach fulfills the EFA. But there is negative evidence as regards its adequacy. Firstly, orness(µQ,E ) typically depends on the cardinality of E. It follows that most = OWA(2) (µQ ) do not have extension.8 To protwo-place fuzzy quantifiers Q prp vide an example, let us consider µ[rate>0.5] : I −→ I, defined by µ[rate>0.5] (x) = 1 if x > 0.5, and 0 otherwise. Further suppose E = {Joan, Clarissa} is a set of persons, men = ∅ is the set of those persons in E which are men (this happens to be empty), and rich = E is the fuzzy set of persons in E which are rich (this happens to be crisp and coincide with E). Then, because µ[rate>0.5],E = µ∀,E , (2) OWA(2) prp (µ[rate>0.5] )(men, rich) = OWAprp (µ∀ )(∅, E) = 1 .
Let us now extend E to the larger set E = {Joan, Clarissa, Mary}, and let us suppose that the extensions of ‘men’ and ‘rich’ remain unchanged in E , i.e. Mary ∈ / men and Mary ∈ / rich. We should then expect, because the quantifier “more than half” has extension, that the quantification result does not change when we replace E with the larger base set E . However, application of OWA in the extended domain E yields OWA(2) prp (µ[rate>0.5] )(men, rich) = OWA(1) prp (µ[rate>0.5] )(0.5/Joan + 0.5/Clarissa + 0/Mary) = 0.5 . The observed change in OWA(2) prp (µ[rate>0.5] ) is clearly implausible. It is caused by the change in the degree of orness of µ[rate>0.5] from α = 0 (in E) to α = 0.5 (in E ). Similar effects are observed with most other µQ : I −→ I, which also result in a failure of OWA with respect to the property of having extension. This is not the only flaw of this approach, though. Another defect of OWA(2) prp , which is probably even more serious, can be demonstrated for all quantifiers with ‘intermediate’ degrees of orness, i.e. α = orness(µQ,E ) ∈ (0, 1).9 In this case 8 9
The effect is of practical relevance only if |E| is small. In other words, the quantifier must be distinct from ∀ and ∃.
402
A An Evaluation Framework for Approaches to Fuzzy Quantification (2) OWA(2) prp (µQ )(∅, ∅) = 0 = (1 − α) = OWAprp (µQ )(∅, E) ,
(A.3)
which shows that, except “all” and “some”, no semi-fuzzy quantifier Q : 2 P(E) −→ I which can be represented by OWA is conservative. In particular, the OWA approach cannot represent any proportional semi-fuzzy quantifiers except for “all” and “some”.10,11 But this is exactly the type of quantifiers OWA is intended to model.
Southern Germany (a) desired: 1, OWA: 0.1 (b) desired: 0, OWA: 0.6
Fig. A.3. At least 60 percent of Southern Germany are cloudy (OWA approach)
If we still try to use OWA for interpreting proportional quantifiers, implausible results must be expected. The canonical construction of F, which has been presented in Sect. A.2 above, is then replaced with direct assertions that a given semi-fuzzy quantifier Q should be mapped to a certain fuzzy quantifier F(Q) under the OWA approach. The property of correct generalisation, which was enforced by the canonical construction, is not necessarily guaranteed by such ad hoc correspondences. Nevertheless, part of the evaluation framework remains applicable, because we can still use the semantical postulates for plausible approaches in order to judge the adequacy of the resulting model. For example, let us consider the fuzzy linguistic quantifier µ[rate≥0.6] , defined by 1 : x ≥ 0.6 µ[rate≥0.6] (x) = 0 : else and make the explicit assertion that [rate ≥ r1 ] be interpreted by the fuzzy quantifier F([rate ≥ r1 ]) = OWA(2) prp (µ[rate≥0.6] ). In fact, this is the common 10 11
all proportional quantifiers are conservative by Def. 11.51. It is rather questionable anyway that “all” and “some” are considered proportional in Yager’s setting. For example, “all” and “some” are easily defined for infinite domains, which is in clear opposition to the situation with proportional quantifiers. In the author’s view, “some” should rather be considered an absolute quantifier, and “all” should be considered a special case of quantifier of exception (i.e., “all except 0”, which permits no exceptions at all).
A.4 The OWA Approach
403
choice for “at least 60 percent” when using the OWA approach. Now let us consider the suitability of the resulting fuzzy quantifier for interpreting “At least 60 percent of Southern Germany are cloudy”, given the cloudiness situation displayed in Fig. A.3. In case (a), we expect the result of 1, because so many pixels which fully belong to Southern Germany (I) are classified as fully cloudy that, regardless of whether we view the intermediate cases (II) as belonging to Southern Germany or not, its cloud coverage is always larger than 60 percent. Likewise in (b), we expect a result of 0 because regardless of whether the pixels in (II) are viewed as belonging to Southern Germany, its cloud coverage is always smaller than 60 percent. OWA, however, ranks image (b) much higher than image (a). This counter-intuitive result is explained by OWA’s lack of conservativity: the cloudiness grades of pixels in areas (III) and (IV), which do not belong to Southern Germany at all, still have a strong (and undesirable) impact on the computed results. To see that this failure of OWA is caused by the lack of conservativity (in the weak sense of Def. 6.21) of the quantifier OWA(2) prp (µ[rate≥0.6] ) let us consider the synthetic examples depicted in Fig. A.4. In images (a) and (b),
(a) desired: 1, OWA: 0.6 (b) desired: 1, OWA: 0
(c) desired: 0, OWA: 0.6 (d) desired: 0, OWA: 0
Fig. A.4. “More than 60 percent of Southern Germany are cloudy”. Synthetic (2) examples: desired results vs. results computed by OWAprp (µ[rate≥0.6] )
404
A An Evaluation Framework for Approaches to Fuzzy Quantification
Southern Germany is fully covered by clouds in the sense that each pixel in the support of Southern Germany12 has a cloudiness grade of 1. We hence expect that in both (a) and (b), the condition “More than 60 percent of Southern Germany are cloudy” is fully satisfied. In particular, image (b) is the intersection of image (a) and the support of Southern Germany. Because “more than 60 percent” is conservative, we should expect that the results for image (a) and (b) coincide. In images (c) and (d), there is no cloudiness at all in Southern Germany in the sense that each pixel in the support of Southern Germany has a cloudiness grade of 0. We should therefore expect that in both cases, the condition “More than 60 percent of Southern Germany are cloudy” fails completely (i.e. the result should be 0). In particular, the results of (c) and (d) should coincide because (d) is the intersection of (c) with the support of Southern Germany, and “more than 60 percent” is conservative. As shown in Fig. A.4, the OWA approach neither computes the same results for (a) and (b), nor for (c) and (d). Even worse, its lack of weak conservativity causes OWA not to produce the intended order of results, which is (a) and (b) best, (c) and (d) worst. By contrast, image (c), which shows no cloudiness at all in Southern Germany, and image (a), in which Southern Germany is totally covered by clouds, are both assigned the same (rather high) score of 0.6; while image (b), in which Southern Germany is totally covered by clouds, and image (d), showing now clouds at all, both receive the lowest possible score of 0. Apart from these negative findings concerning OWA’s ability to represent conservative proportional quantifiers in the case of restricted (two-place) quantification, there are some further pecularities of the OWA approach that also deserve attention. It has already been mentioned that the ‘core’ OWA approach is limited to the regular nondecreasing type of µQ : I −→ I. Yager [174] proposes a quantifier synthesis method to handle the remaining quantifiers of the proportional kind, i.e. those choices of µQ that are not regular nondecreasing. This construction can be rephrased as follows. For each considered µQ , we define µant Q : I −→ I by µant Q (x) = µQ (1 − x). µant Q is intended to model the antonym of µQ [197]. Let us call µQ : I −→ I regular nonincreasing if µant Q is regular nondecreasing. Yager [184] suggests to interpret such quantifiers in terms of their antonyms, i.e. to define (1) OWA(1) prp (µQ )(X) = OWAprp (µant Q )(¬X) ,
for all X ∈ P(E), provided that µQ is regular nonincreasing. Of course, we could also have used the negation, defined by µ¬Q (x) = 1 − µQ (x), to state an alternative definition
(1) OWA(1) prp (µQ )(X) = 1 − OWAprp (µ¬Q )(X) . 12
i.e. each pixel which belongs to Southern Germany to a degree larger than 0
A.4 The OWA Approach
405
However, both alternatives coincide because OWA(1) prp preserves the duality between the quantifiers which correspond to µant Q and µ¬Q . In the general case where µQ : I −→ I is neither regular nondecreasing nor nonincreasing, Yager suggests to decompose µQ into a conjunction (or possibly other Boolean combination) of regular nondecreasing and nonincreasing quantifiers. For example, if µQ = µQ1 ∧ µQ2 , where µQ1 is regular nondecreasing and µQ2 is regular nonincreasing, the quantifier synthesis method yields (1) (1) OWA(1) prp (µQ )(X) = OWAprp (µQ1 )(X) ∧ OWAprp (µant Q2 )(¬X) ,
for all X ∈ P(E). Unfortunately, the obtained results depend on the decomposition of µQ chosen. For example, consider ⎧ : x ∈ [0, 0.25) ⎨0 µQ (x) = 0.3 : x ∈ [0.25, 0.75] (A.4) ⎩ 0 : x ∈ (0.75, 1] which can be decomposed into a conjunction of ⎧ ⎧ : x ∈ [0, 0.25) ⎨0 ⎨1 µQ1 (x) = 0.3 : x ∈ [0.25, 0.75] µQ2 (x) = 0.3 ⎩ ⎩ 1 : x ∈ (0.75, 1] 0
: : :
but also into a conjunction of ⎧ : x ∈ [0, 0.25) ⎨0 µQ1 (x) = 0.3 : x ∈ [0.25, 0.5) ⎩ 1 : x ∈ [0.5, 1]
: x ∈ [0, 0.5) : x ∈ [0.5, 0.75] : x ∈ (0.75, 1]
⎧ ⎨1 µQ2 (x) = 0.3 ⎩ 0
x ∈ [0, 0.25) x ∈ [0.25, 0.75] x ∈ (0.75, 1]
If we interpret the regular nondecreasing µQ2 and µQ2 by means of µant Q2 and µant Q2 , as suggested by Yager, and choose E = {a, b, c, d}, and X ∈ P(E) with X = 0/a + 0.5/b + 0.5/c + 1/d, then (1) OWA(1) prp (µQ1 )(X) ∧ OWAprp (µant Q2 )(¬X) = 0.3 (1) OWA(1) prp (µQ1 )(X) ∧ OWAprp (µant Q2 )(¬X) = 0.65
i.e. the quantifier synthesis method depends on the decomposition chosen, and is hence ill-defined. These problems carry over to two-place quantification based on OWA(2) prp . We then have (1) OWA(2) prp (µQ )(E, X) = OWAprp (µQ )(X) .
This demonstrates that OWA(2) prp embeds all counter-examples for one-place quantification. In the two-place case, however, there are some additional problems because (1) OWA(2) prp , unlike OWAprp , does not preserve duals. Suppose we wish to evaluate the criterion “Less than 60 percent of Southern Germany are cloudy”. Noticing that the quantifier is not of the regular nondecreasing type, we must resort to one of the following equivalent statements.
406
A An Evaluation Framework for Approaches to Fuzzy Quantification
i. “It is not the case that at least 60 percent of Southern Germany are cloudy”, i.e. use negation and compute ¬OWA(2) prp (µ[rate≥0.6] )(X1 , X2 ); ii. “More than 40 percent of Southern Germany are not cloudy”, i.e. use the antonym and compute OWA(2) prp (µ[rate>0.4] )(X1 , ¬X2 ). Unlike in NL, however, these statements are not equivalent under the OWA approach. When applied to the images in Fig. A.3, we obtain different results as shown in Table A.1. In this case, both i. and ii. compute the same (wrong) ranking13 , which demonstrates that neither alternative is correct; in other cases, their rankings can differ. The problem is that OWA, which cannot model “less than 60 percent” directly, forces us to choose one of i., ii.; but due to their expected equivalence, there is no preference for either choice. In an appropriate model, the computed results coincide in both cases and the chosen alternative for computing the result becomes inessential. The OWA approach, however, fails to preserve the duality of “at least 60 percent” vs. “more than 40 percent”. Table A.1. Less than 60 percent of Southern Germany are cloudy Quantifier (2) ¬OWAprp (µ[rate≥0.6] ) (2) OWAprp (µ[rate>0.4] )¬ (desired result)
Fig. A.3(a) 0.9 0.4 0
Fig. A.3(b) 0.4 0 1
A.5 The FG-Count Approach Apart from the Σ-count approach, Zadeh [197] also introduces a second approach to fuzzy quantification. The FG-count approach rests on the idea that the cardinality of a fuzzy set cannot be fully described by a single scalar number (as in the Σ-count approach). By contrast, the cardinality of a fuzzy set should rather be be viewed as a fuzzy subset of the non-negative integers. If E is a finite base set, and X ∈ P(E) a fuzzy subset of E, then the FG-count of X [197, p.156,p.157], denoted FG-count(X) ∈ P(N), is given by µFG-count(X) (j) = sup{α ∈ I : |X≥α | ≥ j} = µ[j] (X) , for all j ∈ N, recalling in particular that µ[0] (X) = 1 and µ[j] (X) = 0 for all j > |E|, see Def. 7.91. Intuitively, the FG-count expresses the degree to which X ∈ P(E) has at least j elements. The FG-count can then be used to interpret absolute quantifiers µQ : R+ −→ I as follows [174]: 13
i.e. result order with respect to ≥
A.5 The FG-Count Approach
407
(1)
FGabs (µQ ) = max µQ (j) ∧ µ[j] (X) j=0,...,m
Following the canonical strategy (A.1) for defining the restricted use of ab(2) solute quantifiers, FGabs can be modelled by (2)
(1)
FGabs (µQ )(X1 , X2 ) = FGabs (µQ )(X1 ∩ X2 ) . Yager [174] proposes a method for interpreting proportional quantifiers µQ : I −→ I, which is obviously related to Zadeh’s FG-count approach. FG(1) prp then becomes (1)
FG(1) prp (µQ )(X) = FGabs (µQ,E )(X) , (adapted to our notation), where µQ,E (j) = µQ (j/m), m = |E|, as in the case of the OWA approach. As concerns the restricted use of proportional quantifiers, Zadeh proposes the relative FG-count for pairs of fuzzy sets defined by X1 , X2 ∈ P(E),
|X ∩X2≥α | α/ 1≥α FG-count(X2 /X1 ) = |X1≥α | α∈[0,1]
and emphasises “that the right-hand member” [of the defining equation] “should be treated as a fuzzy multiset, which implies that terms of the form α1 /u and α2 /u should not be combined into a single term (α1 ∨ α2 )/u, as they would be in the case of a fuzzy set” [197, p.157]. However, if the right-hand member of the equation is a fuzzy multiset, then obviously FG-count(X2 /X1 ) is a fuzzy multiset as well. It is therefore not possible to formulate FG(2) prp (µQ ) (1)
in analogy to the definition of FGabs (µQ ), i.e. the following proposal is not licensed by Zadeh’s definition: FG(2) prp (µQ )(X1 , X2 ) = sup{µQ (r) ∧ µFG-count(X2 /X1 ) (r) : r ∈ I} (This definition would be ill-formed because it identifies the cases which Zadeh explicitly wants to be kept separate). To the best of my knowledge, no attempt has been made to provide a definition of FG(2) prp (µQ ) in terms of the above FG-count(X2 /X1 ). In particular, Zadeh’s examples on the application of the FG-count approach in [197] only demonstrate its application in the unrestricted case, and can thus be handled by FG(1) prp . As in the OWA case, it is easily shown that the FG-count approach complies with the EFA. However, the range of fuzzy quantifiers which can be handled by the FG-count approach is essentially limited, because the twoplace use of proportional quantifiers cannot be modelled, which is of obvious relevance to most applications. Another limitation of the model becomes visible once we investigate the = FG(µQ ) obtained from monotonicity properties of the fuzzy quantifiers Q
408
A An Evaluation Framework for Approaches to Fuzzy Quantification
the FG-count approach. It can be observed that, regardless of µQ , the resulting fuzzy quantifiers are always monotonically nondecreasing in their arguments. Consequently the FG-count approach is unable to represent any quantifiers except for those which are nondecreasing.14 In analogy to the solution proposed for OWA, we might reduce monotonically nonincreasing quantifiers to their antonym or their negation, and consider more complex quantifiers as Boolean combinations of such monotonic quantifiers. However, the very same example already used to discuss the OWA approach reveals that the quantifier synthesis method would fail in the FG-count case, too. Using the same abbreviations as in (A.4), we have (1)
(1)
(1)
(1)
FGabs (µQ1 )(X) ∧ FGabs (µant Q2 )(¬X) = 0.3 FGabs (µQ1 )(X) ∧ FGabs (µant Q2 )(¬X) = 0.5 , i.e. we obtain different results for different decompositions of µQ . Yager [180, p.72] proposes a weighting formula, which can be conceived of providing a definition of FG(2) prp (µQ ), however not based on Zadeh’s definition of relative FG-count. It is defined by e∈S µX1 (e) (2) FGprp (µQ )(X1 , X2 ) = max min µQ , HS : S ∈ P(E) e∈E µX1 (e) where HS = min{max(1 − µX1 (e), µX2 (e)) : e ∈ S}. To see that this formula is related to the FG-count approach, suffice it to observe that (1) FG(2) prp (µQ )(E, X) = FGprp (µQ )(X)
(A.5)
for all µQ : I −→ I and X ∈ P(E), provided that FG(2) prp (µQ )(E, X) is defined in terms of the above formula. Before turning to the issue of plausibility, let us observe that FG(2) prp (µQ )(∅, X2 ) Strictly speaking, is undefined, regardless of µQ : I −→ I, E, and X2 ∈ P(E). (2) FGprp (µQ ) is not a fuzzy quantifier according to our definition, it is only 2 to I which is defined whenever X1 = ∅. As a partial mapping from P(E) we have seen in the case of the Σ-Count approach, extending the approach in such a way as to provide reasonable results in this case can be rather difficult (perhaps impossible). However, for the sake of making the framework (2) applicable to FG(2) prp , we shall assume that each FGprp (µQ ) is completed to a 14
Yager [174] therefore explicitly restricts application of the FG-count approach to monotonically nondecreasing µQ . His approach is criticized by Ralescu [132], who demonstrates that the approach is indeed unable to model non-monotonic quantifiers.
A.5 The FG-Count Approach
409
2 total mapping FG(2) prp (µQ ) : P(E) −→ I in some (arbitrary) way, and we shall not consider the results obtained in the case X1 = ∅ further. As regards the plausibility of FG(2) prp , we should first notice that the EFA is violated. This is apparent when we consider a pair of nondecreasing µQ , µQ : I −→ I and a base set E, m = |E| ≥ 2 such that µQ ( pq ) = µQ ( pq )
(A.6)
for all q ∈ {1, . . . , m}, p ∈ {0, . . . , q}, but µQ (z) = µQ (z) for some other choice of z ∈ I. In particular, we then know from (A.6) that z = 1 and hence 1 − z = 0, which will be needed below. For purpose of demonstration, we shall further assume that z ≤ 12 . As concerns the results of FG(2) prp in this case, (2) (2) we first observe that U(FGprp (µQ )) = U(FGprp (µQ )), because for all crisp Y1 , Y2 ∈ P(E), p p (2) FG(2) prp (µQ )(Y1 , Y2 ) = µQ ( q ) = µQ ( q ) = FGprp (µQ )(Y1 , Y2 )
by (A.6), where p = |Y_1 ∩ Y_2| and q = |Y_1|. Now pick an arbitrary but fixed pair of elements e_0, e_1 ∈ E, e_0 ≠ e_1, and abbreviate λ = 1 − min(µ_Q(z), µ_Q'(z)). Defining X_1 ∈ P̃(E) by

µ_{X_1}(e) = { λz/(1−z) : e = e_0 ; λ : e = e_1 ; 0 : else }

and letting X_2 = {e_0} then results in

FG^{(2)}_{prp}(µ_Q)(X_1, X_2) = µ_Q(z) ≠ µ_Q'(z) = FG^{(2)}_{prp}(µ_Q')(X_1, X_2) .

Hence FG^{(2)}_{prp}(µ_Q) and FG^{(2)}_{prp}(µ_Q') agree for all crisp arguments, but they can disagree for certain choices of fuzzy arguments. This demonstrates that the EFA is indeed violated. Beyond this failure of the EFA, there are some further undesirable properties. Let us consider the quantifier [rate > p/q], where q ∈ N \ {0},
p ∈ {1, . . . , q−1}. Due to the fact that FG^{(2)}_{prp} does not fulfill the EFA, we have several choices of which µ_Q : I −→ I to use as a model of [rate > p/q]. µ_{[rate>p/q]} is a proper choice for interpreting [rate > p/q] using FG^{(2)}_{prp} because

U(FG^{(2)}_{prp}(µ_{[rate>p/q]})) = [rate > p/q]
(assuming a proper completion in the case X1 = ∅), but other µQ can also fulfill this property. As in the case of the Σ-count approach, one can show that in order to prevent FG(2) prp from violating the extensionality criterion (in the sense of Def. 4.44), every reasonable choice of µQ must satisfy µQ (x) =
µ_{[rate>p/q]}(x) for all x ∈ I ∩ Q. In the following, we will use µ_{[rate>p/q]}, but the effects demonstrated will also occur for other possible choices of µ_Q. Now suppose that E is a base set such that m = |E| > q. Then E can be written as E = {e_1, . . . , e_m}, where the e_i are pairwise distinct elements of E. Let us define X_1^{(λ)} ∈ P̃(E), X_2 ∈ P(E) by

X_1^{(λ)} = 1/e_1 + · · · + 1/e_q + λ/e_{q+1}
X_2 = {e_1, . . . , e_p}
for all λ ∈ I. We should expect that FG^{(2)}_{prp}(µ_{[rate>p/q]})(X_1^{(λ)}, X_2) = 0, because [rate > p/q] is constantly zero in the range (U, V), where

U = ({e_1, . . . , e_q}, X_2) = (X_1^{(0)}, X_2) ,
V = ({e_1, . . . , e_{q+1}}, X_2) = (X_1^{(1)}, X_2) .

However, instead of the desired result stated above, what we really get is the following:

FG^{(2)}_{prp}(µ_{[rate>p/q]})(X_1^{(λ)}, X_2) = { 0 : λ = 0 ; 1 − λ : λ > 0 }          (A.7)

In fact, the example reveals that FG^{(2)}_{prp} does not preserve local monotonicity properties, because the resulting fuzzy quantifier fails to be locally nondecreasing in (U, V). Moreover, it is not contextual (see Def. 4.47 for the definition of contextuality): its result 1 − λ in the considered range is nonzero, although the original quantifier, i.e. the specification for crisp sets, is constantly zero in (U, V). Apart from the failure of FG^{(2)}_{prp} to preserve local monotonicity, equation (A.7) also points to another weakness of FG^{(2)}_{prp}: if µ_Q is discontinuous (in particular if it is two-valued), then FG^{(2)}_{prp}(µ_Q)(X_1, X_2) can be discontinuous in the membership grades of X_1 as well. In the example, a very slight change of X_1^{(λ)} (from λ = 0 to λ = ε > 0) results in a drastic change of FG^{(2)}_{prp}(µ_{[rate>p/q]})(X_1^{(λ)}, X_2). This is not acceptable in practical applications due to the inevitable presence of noise: a very slight change in X_1, X_2 might cause totally different results of FG^{(2)}_{prp}(µ_Q)(X_1, X_2). In order to demonstrate these effects in the image ranking task, let us consider the quantifier [rate ≥ 0.05], i.e. at least five percent. Some results of FG^{(2)}_{prp} for this quantifier are shown in Fig. A.5. Let us first consider the case that X_1 is the standard choice for interpreting Southern Germany depicted in Fig. A.5(a), and X_2 is the cloudiness situation depicted in (c). We expect a result of zero in this case because there are no clouds in the support of X_1. The FG-count approach, however, returns a score of 0.55. One might suspect that, as in the case of OWA, this implausible result could be
(a) Southern Germany#1   (b) Southern Germany#2   (c) cloudy
Result for Southern Germany#1: 0.55. Result for Southern Germany#2: 0.95. Desired result: 0.
Fig. A.5. “At least 5 percent of Southern Germany are cloudy” (FG-Count)
caused by a lack of conservativity – after all, there are some clouds in the upper part of image (c). However, it is easily verified that all fuzzy quantifiers obtained from FG^{(2)}_{prp} are conservative. It is the failure of FG^{(2)}_{prp} to preserve local monotonicity properties which causes the implausible behaviour. The sensitivity of FG^{(2)}_{prp} to slight changes in X_1 becomes apparent when the standard definition of the fuzzy region ‘Southern Germany’ in (a) is replaced with the slightly modified Southern Germany#2 shown in Fig. A.5(b). The image has been generated from (a) by replacing black pixels (µ_X(e) = 0) in the lower two thirds of (a) with a slightly lighter black, µ_X(e) = 0.05. Because the upper third of the image is unchanged, i.e. the clouds in the northern part of (c) are still outside the support of Southern Germany#2, we expect a result of 0 in this case, too. The FG-count approach, however, results in a score of 0.95, again because local monotonicity is not preserved. The large change in the result, although the representation of Southern Germany in terms of X_1 has been modified only very slightly, demonstrates that the operators obtained from the FG-count approach can indeed be very brittle. We have already seen that FG^{(1)}_{prp} is essentially limited to nondecreasing quantifiers. Owing to (A.5), the proposed extension to restricted quantification is subject to the same limitations, i.e. FG^{(2)}_{prp} will behave reasonably only if µ_Q is nondecreasing. In particular, the failure of the ‘quantifier synthesis’ method to extend the coverage of FG^{(1)}_{abs} and FG^{(1)}_{prp} directly carries over to FG^{(2)}_{prp} (in this case, E must be used for the first argument, while X becomes the second argument). Let us consider an additional example which shows that in the case of two-place quantification, the decomposition can also fail because FG^{(2)}_{prp} is not compatible with dualisation. Hence let us consider
µ_Q(x) = { 1 : x ≥ 1/6 ; 0 : x < 1/6 }        µ_Q'(x) = { 1 : x > 5/6 ; 0 : x ≤ 5/6 }
representing [rate ≥ 1/6] and its dual [rate > 5/6], respectively, and choose E = {e_1, e_2}, X_1 = X_2 = 1/3 /e_1 + 1/3 /e_2. We then have

FG^{(2)}_{prp}(µ_Q)(X_1, X_2) = 2/3 ≠ 1/3 = 1 − 2/3 = 1 − FG^{(2)}_{prp}(µ_Q')(X_1, ¬X_2) .
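This duality gap can also be checked mechanically. The following sketch evaluates Yager's weighting formula for FG^{(2)}_{prp} by brute force over all crisp subsets S ∈ P(E). It is only an illustration of the formula stated above – the function and variable names are not part of the original text – but it reproduces the values 2/3 and 1/3 of the example.

from itertools import chain, combinations

def fg2_prp(mu_Q, X1, X2):
    """Brute-force evaluation of Yager's weighting formula for FG(2)_prp.
    X1, X2: dicts mapping elements of the base set E to membership grades."""
    E = list(X1.keys())
    total = sum(X1[e] for e in E)          # sigma-count of X1 (assumed > 0)
    subsets = chain.from_iterable(combinations(E, r) for r in range(len(E) + 1))
    best = 0.0
    for S in subsets:
        ratio = sum(X1[e] for e in S) / total
        # H_S = min over e in S of max(1 - X1(e), X2(e)); the empty minimum is 1
        H_S = min((max(1 - X1[e], X2[e]) for e in S), default=1.0)
        best = max(best, min(mu_Q(ratio), H_S))
    return best

# [rate >= 1/6] and its dual [rate > 5/6]
mu_Q      = lambda x: 1.0 if x >= 1/6 else 0.0
mu_Q_dual = lambda x: 1.0 if x > 5/6 else 0.0

X1 = {'e1': 1/3, 'e2': 1/3}
X2 = dict(X1)                               # X2 = X1
not_X2 = {e: 1 - m for e, m in X2.items()}  # standard complement

print(fg2_prp(mu_Q, X1, X2))                # 2/3
print(1 - fg2_prp(mu_Q_dual, X1, not_X2))   # 1/3, i.e. duality is not preserved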
As in the case of OWA^{(2)}_{prp}, the failure of FG^{(2)}_{prp} to preserve duals is of particular importance because FG^{(2)}_{prp} cannot provide a reasonable interpretation of quantifiers like “less than 30 percent” directly. In order to evaluate this quantifier, we either have to use the negation, “at least 30 percent”, or the antonym, “more than 70 percent”, which are duals of each other. However, because duality is not preserved by FG^{(2)}_{prp}, the two choices produce different results, as shown in Fig. A.6.
Southern Germany: (a), (b)
Quantifier                   Image (a)   Image (b)
not at least 30 percent      0.72        0
more than 70 percent not     1           0.29
Fig. A.6. “Less than thirty percent of Southern Germany are cloudy” (FG-count)
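Before leaving the FG-count approach, the discontinuity stated in (A.7) can be reproduced with the same kind of brute-force evaluation. The sketch below is again only illustrative; the concrete parameters p = 1, q = 3, m = 5 and the helper names are chosen here for demonstration and do not appear in the original text. It yields 0 for λ = 0 and 1 − λ (here 0.95) for the slightly perturbed argument with λ = 0.05.

from itertools import chain, combinations

def fg2_prp(mu_Q, X1, X2):
    # Yager's weighting formula, evaluated by enumerating all crisp S ⊆ E
    E = list(X1.keys())
    total = sum(X1.values())
    subsets = chain.from_iterable(combinations(E, r) for r in range(len(E) + 1))
    return max(min(mu_Q(sum(X1[e] for e in S) / total),
                   min((max(1 - X1[e], X2[e]) for e in S), default=1.0))
               for S in subsets)

p, q, m = 1, 3, 5                       # [rate > p/q] on a base set of m > q elements
mu_rate = lambda x: 1.0 if x > p / q else 0.0

def X1_lambda(lam):
    # X1 = 1/e1 + ... + 1/eq + lam/e_{q+1}; all other elements get membership 0
    return {i: (1.0 if i <= q else (lam if i == q + 1 else 0.0)) for i in range(1, m + 1)}

X2 = {i: (1.0 if i <= p else 0.0) for i in range(1, m + 1)}

print(fg2_prp(mu_rate, X1_lambda(0.0), X2))    # 0.0, as desired
print(fg2_prp(mu_rate, X1_lambda(0.05), X2))   # 0.95, i.e. 1 - lambda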
A.6 The FE-Count Approach

Apart from introducing the FG-count and suggesting its use for the modelling of fuzzy quantification, Zadeh [197] proposes yet another definition of fuzzy cardinality. The FE-count of X ∈ P̃(E) is given by

µ_{FE-count(X)}(j) = µ_{[j]}(X) ∧ ¬µ_{[j+1]}(X) .

Unlike FG-count(X), which expresses the degree to which X ∈ P̃(E) has at least j elements, the FE-count is intended to express the degree to which X has exactly j elements. The adequacy of the FE-count as a measure of fuzzy cardinality has been questioned by Dubois and Prade [42]. Ralescu [132] has proposed a method for interpreting quantifiers which is related to the concept of the FE-count. The method is only defined for the unrestricted use of absolute quantifiers µ_Q : R+ −→ I. In this case
FE^{(1)}_{abs}(µ_Q)(X) = max_{j=0,...,m} ( µ_Q(j) ∧ µ_{[j]}(X) ∧ ¬µ_{[j+1]}(X) ) ,
for all X ∈ P̃(E). Ralescu's model will be called the ‘FE-count approach’ here because obviously
FE^{(1)}_{abs}(µ_Q)(X) = max_{j=0,...,m} ( µ_Q(j) ∧ µ_{FE-count(X)}(j) ) .
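To make the evaluation procedure concrete, here is a small sketch of the FE-count based interpretation of unary absolute quantifiers, with ∧ and ¬ read as min and 1 − x and with the usual convention that µ_{[j]}(X) is the j-th largest membership grade of X (µ_{[0]}(X) = 1 and µ_{[j]}(X) = 0 for j > |E|). The helper names are not Ralescu's; the two calls at the end anticipate the existential-quantifier example discussed below (X = 1/a + 0/b versus Y = 1/a + 0.5/b).

def fe_count(memberships, j):
    # degree to which X has exactly j elements: mu_[j](X) AND NOT mu_[j+1](X)
    desc = sorted(memberships, reverse=True)
    def mu_at_least(k):          # mu_[k](X): degree that X has at least k elements
        if k == 0:
            return 1.0
        return desc[k - 1] if k <= len(desc) else 0.0
    return min(mu_at_least(j), 1.0 - mu_at_least(j + 1))

def fe_abs(mu_Q, memberships):
    # Ralescu's evaluation scheme: max_j ( mu_Q(j) AND FE-count(X)(j) )
    m = len(memberships)
    return max(min(mu_Q(j), fe_count(memberships, j)) for j in range(m + 1))

mu_exists = lambda j: 0.0 if j == 0 else 1.0
print(fe_abs(mu_exists, [1.0, 0.0]))   # X = 1/a + 0/b   -> 1.0
print(fe_abs(mu_exists, [1.0, 0.5]))   # Y = 1/a + 0.5/b -> 0.5, although X ⊆ Y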
It should be pointed out, however, that Ralescu develops his proposal from a possibilistic argument and without reference to the FE-count. The model complies with the EFA, but the phenomenon treated (only unary absolute quantifiers) is of course very limited. Nevertheless, Ralescu's approach explicitly targets an improvement upon the FG-count model, so we should be curious about its score on the plausibility scale. Given a domain E with |E| = m and µ_Q : R+ −→ I, we can conveniently restrict attention to µ_{Q,E} : {0, . . . , m} −→ I (as we have already done in the case of OWA), which for absolute quantifiers becomes µ_{Q,E}(j) = µ_Q(j) for all j = 0, . . . , m. We can then view FE^{(1)}_{abs}(µ_Q) as a function of µ_{Q,E}. In the case of a two-element domain, say E = {a, b}, the existential and universal quantifiers are modelled by

µ_{∃,E}(j) = { 0 : j = 0 ; 1 : else }        µ_{∀,E}(j) = { 0 : j < 2 ; 1 : j = 2 }

for j ∈ {0, 1, 2}. Letting X = 1/a + 0/b and Y = 1/a + 0.5/b, we observe that FE^{(1)}_{abs}(µ_{∃,E})(X) = 1, but FE^{(1)}_{abs}(µ_{∃,E})(Y) = 0.5, although X ⊆ Y (in the sense of inclusion of fuzzy sets). It follows that the FE-count approach does not preserve monotonicity properties of quantifiers. In particular, the example shows that the FE-count does not generate a reasonable interpretation of the existential quantifier, which should be expressible in terms of an s-norm and thus be monotonically nondecreasing.15 The example further demonstrates that the FE-count approach does not preserve duality, because µ_{∀,E} is indeed mapped to the intended
FE^{(1)}_{abs}(µ_{∀,E})(Z) = min(µ_Z(a), µ_Z(b)) ,
which is different from the dual of FE^{(1)}_{abs}(µ_{∃,E}). Finally, the FE-count approach also does not preserve negation. The failure of the FE-count method to preserve monotonicity can also be demonstrated in the example scenario of fuzzy image regions. Hence let us consider the statement that a fuzzy image region X is nonempty.16 The condition corresponds to existential quantification, and the FE-count formula is hence applied to µ_∃ : R+ −→ I defined by µ_∃(0) = 0 and µ_∃(j) = 1 otherwise. Now consider the results shown in Fig. A.7. The result in case (a) is satisfactory
15 See Sect. 4.16 on the intended interpretation of fuzzy existential and universal quantification, which reviews Thiele's analysis of T- and S-quantifiers.
16 Note that we cannot use examples of two-place proportional quantifiers here because the FE-count approach is unable to model these.
(a) desired: 1, FE: 1    (b) desired: 1, FE: 0.5
Fig. A.7. “The image region X is nonempty” (FE-count)
because X is a crisp nonempty subset of the set of pixel coordinates E. The fuzzy image region X′ depicted in (b) is much larger and contains X. Due to the monotonicity of the existential quantifier, we should expect X′ to be judged nonempty, too. However, due to the presence of fuzziness, the FE-count approach results in a score of 0.5, which is clearly implausible. To sum up, the FE-count is limited in applicability, and it assigns unacceptable interpretations even in the simplest case of unrestricted monotonic absolute quantifiers.
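The failure to preserve duality mentioned above can likewise be witnessed numerically. The two-element argument Z = 0.4/a + 0.3/b used in the following sketch is not taken from the text; it is an assumed example chosen so that the direct FE-count interpretation of the universal quantifier and its reading as the dual of the existential quantifier come apart.

def fe_abs(mu_Q, memberships):
    desc = sorted(memberships, reverse=True)
    at_least = lambda k: 1.0 if k == 0 else (desc[k - 1] if k <= len(desc) else 0.0)
    fe = lambda j: min(at_least(j), 1.0 - at_least(j + 1))     # FE-count(X)(j)
    return max(min(mu_Q(j), fe(j)) for j in range(len(memberships) + 1))

mu_forall = lambda j: 1.0 if j == 2 else 0.0    # universal quantifier on |E| = 2
mu_exists = lambda j: 0.0 if j == 0 else 1.0    # existential quantifier

Z     = [0.4, 0.3]                  # fuzzy argument chosen here for illustration
not_Z = [1 - m for m in Z]

print(fe_abs(mu_forall, Z))          # 0.3  (direct interpretation of "all")
print(1 - fe_abs(mu_exists, not_Z))  # 0.4  ("all" read as dual of "some"): duality fails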
A.7 Chapter Summary

The traditional approaches to fuzzy quantification rely on the very simple representation offered by membership functions µ_Q : I −→ I and µ_Q : R+ −→ I. Although formally less elaborate than the theory presented here, these approaches are particularly easy to implement, which makes them appealing for applications at first sight. We therefore checked whether these approaches perform well in the cases they intend to model, knowing that they are too narrow for a general treatment of fuzzy NL quantification (see Sect. 1.10 above). In order to assess the plausibility of the approaches described in the literature, a canonical construction was introduced which fits these approaches into the framework pursued in this work. In this way, an (incompletely determined) QFM F is assigned to each of these approaches. The framework is directly applicable if its underlying assumption is satisfied, the evaluation framework assumption (EFA). Basically, the EFA demands that all fuzzy quantifiers which can be expressed by the considered approach can be discerned on the level of crisp arguments. This requirement is justified by the fundamental assumption of the quantification framework, i.e. the QFA described in Chap. 2, which poses the related requirement that all base quantifiers of interest can also be discerned on the level of crisp arguments. This suggests that beyond its part in the construction of the canonical embedding, the EFA is also a plausibility criterion which is interesting in its own right.
Those approaches that violate the EFA can still be discussed in the analysis framework. Instead of a unique fuzzy quantifier assigned by the construction, we then have a number of available choices. In this case, we can still check whether a given adequacy criterion can be fulfilled by making an appropriate selection from the available interpretations. If no verifying choice exists, the considered approach to fuzzy quantification must be considered incompatible with the criterion of interest. The proposed method permits a rigorous evaluation of existing approaches as to their compatibility with the semantic criteria which were originally introduced for evaluating the plausibility of QFMs. The criteria formalized in the main part of this book can guide a systematic search for critical examples which result in implausible interpretations. However, the framework can also help to better understand the defects of existing approaches because the counter-examples can now be explained as violations of basic plausibility criteria. In the chapter, the framework was instantiated for the main approaches to fuzzy quantification, i.e. the Σ-count, FG-count, OWA and FE-count methods. Based on the known plausibility criteria, these approaches were then checked against the intended semantics of fuzzy NL quantification. This evaluation has raised severe doubts about the utility of the traditional approaches as models of natural language quantification. For each of the approaches in this tradition, we have described several counter-examples in which its results are unacceptable. It should be emphasized that the OWA and FG-count approaches have a reasonable ‘core’ for nondecreasing unary quantifiers. However, the proposals to extend these models to non-monotonic quantifiers and importance qualification are not conclusive. Due to the algorithmic simplicity of existing approaches, they are certainly useful as aggregation operators. However, the findings of this chapter discourage the use of these models in applications of fuzzy quantifiers which depend on their natural language meaning. It was this massive evidence against the known models which motivated the author's departure from the traditional view of fuzzy quantification. The novel theory was built on an axiomatic foundation in order to ensure reliable interpretations and avoid the problems of existing approaches on formal grounds.
B Theorem Reference Chart
Theorems cited in Chap. 3:
Th. 3.30: see [55], Th-69/72/73, p. 52
Theorems cited in Chap. 4:
Th. 4.1: see [53], Th-1, p. 27
Th. 4.2: see [53], Th-2, p. 27
Th. 4.5: see [53], Th-3, p. 28
Th. 4.7: see [53], Th-6, p. 30
Th. 4.9: see [53], Th-7a, p. 31
Th. 4.10: see [53], Th-7b, p. 31
Th. 4.12: see [55], Th-36, p. 36
Th. 4.16: see [55], Th-38, p. 38
Th. 4.18: see [55], Th-82, p. 58
Th. 4.19: see [55], Th-38, p. 38
Th. 4.20: see [55], Th-38, p. 38
Th. 4.22: see [53], Th-9, p. 35
Th. 4.24: see [55], Th-38, p. 38
Th. 4.27: see [55], Th-38, p. 38
Th. 4.28: see [53], Th-4, p. 28
Th. 4.30: see [53], Th-12, p. 36
Th. 4.32: see [53], Th-13, p. 36
Th. 4.34: see [53], Th-14, p. 37
Th. 4.35: see [53], Th-16, p. 38
Th. 4.37: see [53], Th-17, p. 38
Th. 4.38: see [53], Th-19, p. 39
Th. 4.39: see [53], Th-21, p. 40
Th. 4.40: see [53], Th-22, p. 40
Th. 4.43: see [53], Th-18, p. 39
Th. 4.45: see [56], Th-16, p. 18
Th. 4.48: see [55], Th-75, p. 55
Th. 4.49: see [56], Th-9, p. 16
Th. 4.54: see [156], Th-8.1, p. 42
Th. 4.55: see [55], Th-24, p. 24
Th. 4.59: see [156], Th-8.2, p. 48
Th. 4.61: see [55], Th-25, p. 25
Th. 4.62: see [55], Th-26, p. 25
Th. 4.64: see [53], Th-23, p. 41
Th. 4.70: see [55], Th-33, p. 34
Th. 4.72: see [55], Th-34, p. 36
Th. 4.74: see [55], Th-35, p. 36
Theorems cited in Chap. 5:
Th. 5.3: see [53], Th-27, p. 43
Th. 5.4: see [53], Th-28, p. 44
Th. 5.6: see [53], Th-29, p. 44
Th. 5.11: see [55], Th-30, p. 27
Th. 5.13: see [56], Th-27, p. 22
Th. 5.17: see [53], Th-32, p. 48
Th. 5.19: see [53], Th-33, p. 48
Th. 5.21: see [53], Th-34, p. 49
Th. 5.22: see [55], Th-125, p. 74
Th. 5.25: see [59], Th-47, p. 168
Theorems cited in Chap. 6:
Th. 6.5: see [53], note, p. 50
Th. 6.7: see [53], note, p. 52
Th. 6.10: see [55], Th-76, p. 56
Th. 6.12: see [55], Th-77, p. 56
Th. 6.13: see [55], Th-78, p. 56
Th. 6.14: see [55], Th-79, p. 57
Th. 6.16: see [55], Th-81, p. 57
Th. 6.17: see [55], Th-84, p. 59
Th. 6.19: see [55], Th-85, p. 59
Th. 6.22: see [56], Th-34, p. 27
Th. 6.24: see [56], Th-35, p. 27
Theorems cited in Chap. 7:
Th. 7.3: see [53], Th-40, p. 57
Th. 7.7: see [53], Th-41, p. 57
Th. 7.17: see [59], Th-61, p. 199
Th. 7.20: see [53], Th-42, p. 61
Th. 7.24: see [53], Th-44, p. 63
Th. 7.28: see [55], Th-40, p. 42
Th. 7.29: see [55], Th-41, p. 43
Th. 7.38: see [55], Th-62, p. 49
Th. 7.39: see [55], Th-62, p. 49
Th. 7.40: see [55], Th-58, p. 49
Th. 7.41: see [55], Th-65 to Th-70, p. 51/52
Th. 7.43: see [55], note, p. 47
Th. 7.45: see [55], Th-64, p. 50
Th. 7.46: see [55], Th-64, p. 50
Th. 7.47: see [55], Th-71, p. 52
Th. 7.49: see [55], note, p. 48
Th. 7.51: see [55], Th-87, p. 61
Th. 7.54: see [55], Th-91, p. 62
Th. 7.57: see [55], Th-93, p. 62
Th. 7.59: see [55], Th-86, p. 61
Th. 7.60: see [57], Th-18, p. 23
Th. 7.62: see [55], Th-88, p. 61
Th. 7.63: see [55], Th-89, p. 61
Th. 7.64: see [55], Th-90, p. 62
Th. 7.65: see [55], Th-92, p. 62
Th. 7.66: see [55], Th-103, p. 65
Th. 7.67: see [55], Th-104, p. 65
Th. 7.68: see [55], Th-105, p. 65
Th. 7.69: see [55], Th-106, p. 66
Th. 7.70: see [55], Th-107, p. 66
Th. 7.71: see [55], Th-108, p. 66
Th. 7.72: see [55], Th-109, p. 66
Th. 7.73: see [55], Th-110, p. 67
Th. 7.74: see [55], Th-111, p. 67
Th. 7.75: see [55], Th-112, p. 67
Th. 7.76: see [55], Th-113, p. 67
Th. 7.77: see [55], Th-44, p. 46
Th. 7.79: see [55], Th-46, p. 46
Th. 7.80: see [55], Th-102, p. 65
Th. 7.81: see [55], Th-124, p. 74
Th. 7.83: see [59], Th-99, p. 216
Th. 7.85: see [55], Th-100, p. 65
Th. 7.86: see [55], Th-101, p. 65
Th. 7.87: see [55], Th-123, p. 73
Th. 7.90: see [55], Th-126, p. 74
Th. 7.92: see [55], Th-132, p. 76
Th. 7.94: see [55], Th-127, p. 74
Th. 7.95: see [55], Th-128, p. 75
Theorems cited in Chap. 8:
Th. 8.2: see [57], Th-20, p. 25
Th. 8.4: see [57], Th-21, p. 25
Th. 8.7: see [57], Th-22, p. 26
Th. 8.8: see [57], Th-23, p. 26
Th. 8.9: see [57], Th-24, p. 27
Th. 8.10: see [57], Th-25, p. 27
Th. 8.12: see [57], Th-26, p. 27
Th. 8.15: see [57], Th-27, p. 28
Th. 8.17: see [57], Th-28, p. 28
Th. 8.18: see [57], Th-29, p. 28
Th. 8.20: see [57], Th-30, p. 29
Th. 8.22: see [57], Th-31, p. 29
Th. 8.24: see [57], Th-32, p. 29
Th. 8.25: see [57], Th-33, p. 30
Th. 8.26: see [57], Th-34, p. 30
Th. 8.27: see [57], Th-35, p. 30
Th. 8.28: see [57], Th-36, p. 30
Th. 8.29: see [57], Th-37, p. 30
Th. 8.31: see [57], Th-38, p. 31
Th. 8.32: see [57], Th-39, p. 31
Th. 8.33: see [57], Th-40, p. 31
Th. 8.34: see [57], Th-41, p. 31
Th. 8.35: see [57], Th-42, p. 31
Th. 8.37: see [57], Th-43, p. 31
Th. 8.38: see [57], Th-44, p. 32
Th. 8.39: see [57], Th-45, p. 32
Th. 8.41: see [57], Th-46, p. 32
Th. 8.42: see [57], Th-47, p. 32
Th. 8.43: see [57], Th-48, p. 33
Th. 8.44: see [57], Th-49, p. 33
Th. 8.45: see [57], Th-50, p. 33
Th. 8.46: see [57], Th-51, p. 34
Th. 8.47: see [57], Th-52, p. 34
Th. 8.48: see [57], Th-53, p. 34
Th. 8.49: see [57], Th-54, p. 34
Th. 8.50: see [57], Th-55, p. 34
Th. 8.51: see [57], Th-56, p. 34
Th. 8.52: see [57], Th-57, p. 35
Th. 8.53: see [57], Th-58, p. 35
Th. 8.54: see [57], Th-59, p. 35
Th. 8.55: see [57], Th-60, p. 35
Th. 8.56: see [57], Th-61, p. 35
Th. 8.57: see [57], Th-62, p. 35
Th. 8.58: see [57], Th-63, p. 36
Th. 8.59: see [57], Th-64, p. 36
Th. 8.60: see [57], Th-65, p. 36
Theorems cited in Chap. 9:
Th. 9.2: see [58], Th-32, p. 34
Th. 9.6: see [58], Th-33, p. 35
Th. 9.13: see [58], Th-34, p. 36
Th. 9.16: see [58], Th-35, p. 37
Th. 9.17: see [58], Th-36, p. 37
Th. 9.18: see [58], Th-37, p. 37
Th. 9.19: see [58], Th-38, p. 38
Th. 9.20: see [58], Th-39, p. 38
Th. 9.21: see [58], Th-40, p. 38
Th. 9.23: see [58], Th-41, p. 39
Th. 9.25: see [58], Th-42, p. 39
Th. 9.28: see [58], Th-43, p. 40
Th. 9.29: see [58], Th-44, p. 40
Th. 9.30: see [58], Th-45, p. 40
Th. 9.31: see [58], Th-46, p. 41
Th. 9.34: see [58], Th-47, p. 41
Th. 9.35: see [58], Th-49, p. 42
Th. 9.38: see [58], Th-50, p. 42
Th. 9.39: see [58], Th-51, p. 43
Th. 9.41: see [58], Th-52, p. 43
Th. 9.42: see [58], Th-53, p. 43
Th. 9.45: see [58], Th-54, p. 44
Th. 9.46: see [58], Th-55, p. 44
Th. 9.48: see [58], Th-56, p. 44
Th. 9.49: see [58], Th-57, p. 44
Th. 9.52: see [58], Th-58, p. 45
Th. 9.53: see [58], Th-59, p. 45
Th. 9.56: see [58], Th-60, p. 46
Th. 9.57: see [58], Th-61, p. 46
Th. 9.60: see [58], Th-62, p. 46
Th. 9.61: see [58], Th-63, p. 46
Th. 9.63: see [58], Th-64, p. 47
Th. 9.64: see [58], Th-65, p. 47
Th. 9.65: see [58], Th-66, p. 47
Th. 9.66: see [58], Th-67, p. 47
Th. 9.67: see [58], Th-68, p. 47
Th. 9.68: see [58], Th-69, p. 47
Th. 9.69: see [58], Th-70, p. 47
Th. 9.70: see [58], Th-71, p. 48
Th. 9.71: see [58], Th-72, p. 48
Th. 9.72: see [58], Th-73, p. 48
Th. 9.74: see [58], Th-74, p. 48
Th. 9.75: see [58], Th-75, p. 48
Th. 9.77: see [58], Th-76, p. 48
Th. 9.78: see [58], Th-77, p. 48
Th. 9.80: see [58], Th-78, p. 49
Th. 9.81: see [58], Th-79, p. 49
Th. 9.82: see [58], Th-80, p. 49
Th. 9.83: see [58], Th-81, p. 49
Th. 9.84: see [58], Th-82, p. 49
Th. 9.85: see [58], Th-83, p. 49
Th. 9.87: see [58], Th-84, p. 50
Th. 9.88: see [58], Th-85, p. 50
Th. 9.89: see [58], Th-86, p. 50
Th. 9.90: see [58], Th-87, p. 50
Th. 9.91: see [58], Th-88, p. 50
Th. 9.92: see [58], Th-89, p. 50
Th. 9.95: see [58], Th-90, p. 50
Th. 9.96: see [58], Th-91, p. 51
Theorems cited in Chap. 10:
Th. 10.5: see [58], Th-92, p. 56
Th. 10.9: see [58], Th-93, p. 57
Th. 10.10: see [58], Th-94, p. 57
Th. 10.11: see [58], Th-95, p. 57
Th. 10.17: see [58], Th-99, p. 61
Th. 10.18: see [58], Th-107, p. 62
Th. 10.19: see [58], Th-116, p. 63
Th. 10.20: see [58], Th-121, p. 64
Th. 10.21: see [58], Th-97, p. 61
Th. 10.22: see [58], Th-98, p. 61
Th. 10.23: see [58], Th-117, p. 63
Th. 10.24: see [58], Th-118, p. 64
Th. 10.25: see [58], Th-119, p. 64
Th. 10.27: see [58], Th-120, p. 64
Th. 10.33: see [58], Th-122, p. 66
Th. 10.34: see [58], Th-123, p. 66
Th. 10.36: see [58], Th-124, p. 67
Th. 10.37: see [58], Th-125, p. 67
Th. 10.40: see [58], Th-126, p. 68
Th. 10.41: see [58], Th-127, p. 68
Th. 10.42: see [58], Th-128, p. 68
Th. 10.43: see [58], Th-129, p. 68
Th. 10.48: see [58], Th-130, p. 69
Theorems cited in Chap. 11:
Th. 11.1: see [53], Th-35, p. 49
Th. 11.3: see [55], Th-119, p. 72
Th. 11.4: see [55], Th-120, p. 72
Th. 11.8: see [59], Th-238, p. 291
Th. 11.9: see [59], Th-239, p. 291
Th. 11.10: see [55], Th-95, p. 63
Th. 11.12: see [59], Th-241, p. 292
Th. 11.13: see [55], Th-96, p. 63
Th. 11.14: see [55], Th-130, p. 75
Th. 11.15: see [55], Th-98, p. 64
Th. 11.18: see [55], Th-129, p. 75
Th. 11.21: see [59], Th-246, p. 296
Th. 11.23: see [59], Th-247, p. 298
Th. 11.24: see [59], Th-248, p. 299
Th. 11.25: see [59], Th-249, p. 299
Th. 11.27: see [59], Th-250, p. 300
Th. 11.28: see [59], Th-251, p. 300
Th. 11.29: see [59], Th-252, p. 301
Th. 11.30: see [59], Th-253, p. 301
Th. 11.31: see [59], Th-254, p. 302
Th. 11.32: see [59], Th-255, p. 305
Th. 11.34: see [59], Th-256, p. 306
Th. 11.36: see [59], Th-257, p. 306
Th. 11.37: see [59], Th-258, p. 307
Th. 11.40: see [59], Th-259, p. 311
Th. 11.41: see [59], Th-260, p. 312
Th. 11.42: see [59], Th-261, p. 314
Th. 11.43: see [59], Th-262, p. 315
Th. 11.44: see [59], Th-263, p. 316
Th. 11.48: see [59], Th-264, p. 320
Th. 11.53: see [59], Th-265, p. 331
Th. 11.55: see [59], Th-266, p. 338
Th. 11.57: see [59], Th-267, p. 341
Th. 11.58: see [59], Th-268, p. 341
Th. 11.59: see [59], Th-269, p. 342
Th. 11.60: see [59], Th-270, p. 347
Theorems cited in Chap. 12:
Th. 12.16: see [59], Th-271, p. 370
Th. 12.18: see [59], Th-272, p. 372
Th. 12.19: see [59], Th-273, p. 372
Th. 12.20: see [59], Th-274, p. 372
Th. 12.24: see [59], Th-275, p. 375
Note B.1. Some of the cited theorems are based on a different formalization of the DFS axioms that was later replaced with the current system (Z-1)–(Z-6). The equivalence of these formalizations has been proven in [55, Th-38, p. 38]. It is therefore guaranteed that the quoted theorems which stem from the old axioms remain valid in the new system. The original formalization
of the models also referred to a different construction of induced fuzzy truth functions. However, the fuzzy truth functions induced by the old construction have been shown to coincide with those obtained from the new construction assumed here, see [55, Th-36, p. 36]. This ensures that all ‘old’ results concerning induced connectives remain valid for the new construction.
References
1. P.B. Andrews. An introduction to mathematical logic and type theory: to truth through proof. Academic Press, 1986. 2. T. Arnould and A. Ralescu. From “and” to “or”. In B. Bouchon-Meunier, L. Valverde, and R.R. Yager, editors, IPMU ’92 - Advanced Methods in Artificial Intelligence, volume 682 of Lecture Notes in Computer Science, pages 116–128. Springer, 1993. 3. S. Barro, A. Bugar´ın, P. Cari˜ nena, and F. D´ıaz-Hermida. A framework for fuzzy quantification models. GSI-99-01, Departamento de Electr´ onica e Computaci´ on, Universidade de Santiago di Compostela, E15782 Santiago de Compostela, Spain, 1999. 4. S. Barro, A. Bugar´ın, P. Cari˜ nena, and F. D´ıaz-Hermida. A framework for fuzzy quantification models analysis. IEEE Transactions on Fuzzy Systems, 11:89–99, 2003. 5. J. Barwise. Monotone quantifiers and admissible sets. In Generalized Recursion Theory II, pages 1–38. North-Holland, Amsterdam, 1978. 6. J. Barwise. On branching quantifiers in English. Journal of Philosophical Logic, 8:47–80, 1979. 7. J. Barwise and R. Cooper. Generalized quantifiers and natural language. Linguistics and Philosophy, 4:159–219, 1981. 8. J. Barwise and S. Feferman, editors. Model-Theoretic Logics. Springer, Berlin, 1985. 9. J. van Benthem. Determiners and logic. Linguistics and Philosophy, 6:447–478, 1983. 10. J. van Benthem. Questions about quantifiers. Journal of Symbolic Logic, 49, 1984. 11. J.C. Bezdek. Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York, 1981. 12. J.C. Bezdek, D. Dubois, and H. Prade, editors. Fuzzy Sets in Approximate Reasoning and Information Systems. Kluwer Academic Publishers, Dordrecht, NL, 1999. 13. Z. Bien and K.C. Min, editors. Fuzzy Logic and its Applications to Engineering, Information Sciences, and Intelligent Systems. Information Science and Intelligent Systems. Kluwer Academic Publishers, Dordrecht, 1995.
426
References
14. M. Black. Vagueness: an exercise in logical analysis. Philosophy of Science, 4:427–455, 1937. Reprinted with omissions in [90, p. 69-81]. 15. N. Blanchard. Cardinal and ordinal theories about fuzzy sets. In M.M. Gupta and E. Sanchez, editors, Fuzzy Information and Decision Processes, pages 149– 157. North-Holland, Amsterdam, 1982. 16. I. Bloch. Information combination operators for data fusion: a comparative review with classification. IEEE Transactions on Systems, Man, and Cybernetics, 26(1):52–67, 1996. 17. I.M. Boche´ nski. A history of formal logic. Chelsea Publishing Company, New York, 2nd edition, 1970. 18. G. Bordogna, P. Carrara, and G. Pasi. Query term weights as constraints in fuzzy information retrieval. Information Processing & Management, 27(1):15– 26, 1990. 19. G. Bordogna and G. Pasi. A fuzzy information retrieval system handling users’ preferences on document sections. In Dubois et al. [45]. 20. P. Bosc, B.B. Buckles, F.E. Petry, and O. Pivert. Fuzzy databases. In D. Dubois, H. Prade, and J. Bezdek, editors, Fuzzy Sets in Approximate Reasoning and Expert Systems, pages 403–468. Kluwer, Dordrecht, NL, 1999. 21. P. Bosc and L. Lietard. Monotonic quantified statements and fuzzy integrals. In Proc. of the NAFIPS/IFI/NASA ’94 Joint Conference, pages 8–12, San Antonio, Texas, 1994. 22. P. Bosc and L. Lietard. On the comparison of the Sugeno and Choquet fuzzy integrals for the evaluation of quantified statements. In Proceedings of EUFIT ‘95, pages 709–716, 1995. 23. P. Bosc and O. Pivert. Extending SQL retrieval features for the handling of flexible queries. In Dubois et al. [45], pages 233–251. 24. N.R. Campbell. The sorites paradox. Philosophical Studies, 26:175–191, 1974. 25. J. Cargile. The sorites paradox. British Journal for the Philosophy of Science, 1969. 26. S.J. Chen and C.L. Hwang. Fuzzy Multiple Attribute Decision Making: Methods and Applications. Springer, Heidelberg, 1992. 27. A. Church. A formulation of the simple theory of types. Journal of Symbolic Logic, 5:56–68, 1940. 28. F. D´ıaz-Hermida, A. Bugar´ın, P. Cari˜ nena, M. Mucientes, and D.E. Losoda. Fuzzy quantification in two real scenarios: Information retrieval and mobile robotics. International Journal of Intelligent Systems, 2005. to appear. 29. F. D´ıaz-Hermida, P. Cari˜ nena, A. Bugar´ın, and S. Barro. Probabilistic evaluation of fuzzy quantified sentences: independence profile. Mathware and Soft Computing, VIII(3):255–274, 2001. 30. F. D´ıaz-Hermida, P. Cari˜ nena, A. Bugar´ın, and S. Barro. Definition and classification of semi-fuzzy quantifiers for the evaluation of fuzzy quantified sentences. International Journal of Approximate Reasoning, 34(1):49–88, 2003. 31. F. D´ıaz-Hermida, P. Cari˜ nena, A. Bugar´ın, and S. Barro. Random set-based approaches for modelling fuzzy operators. In Modelling with Words - Learning, Fusion, and Reasoning within a Formal Linguistic Representation Framework, volume 2873 of Lecture Notes in Computer Science, pages 1–25. Springer, 2003. 32. M. Delgado, F. Herrera, E. Herrera-Viedma, and L. Martinez. Combining numerical and linguistic information in group decision making. Information Sciences, 107:177–194, 1998.
References
427
33. M. Delgado, D. S´ anchez, and M.A. Vila. Un enfoque l´ ogico para calcular el grado de cumplimiento de sentencias con cuantificadores lingu´ısticos. In Actas VII Congreso Espa˜ nol Sobre Tecnolog´ıas y L´ ogica Fuzzy (ESTYLF ‘97), pages 15–20, 1997. 34. M. Delgado, D. S´ anchez, and M.A. Vila. A survey of methods for evaluating quantified sentences. In Proceedings of the First European Society for Fuzzy Logic and Technologies Conference (EUSFLAT ‘99), pages 279–282, 1999. 35. M. Delgado, D. S´ anchez, and M.A. Vila. Fuzzy cardinality based evaluation of quantified statements. International Journal of Approximate Reasoning, 23(1):23–66, 2000. 36. A. De Luca and S. Termini. A definition of non-probabilistic entropy in the setting of fuzzy sets theory. Information and Control, 20(4):301–312, 1972. 37. P. Dienes. On an implication function in many-valued systems of logic. Journal of Symbolic Logic, 14:95–97, 1949. 38. J. van der Does. The dynamics of sophisticated laziness. In J. Groenendijk, editor, Plurals and Anaphora, DYANA-2 (Dynamic Interpretation of Natural Language) ESPRIT Basic Research Project 6852, Deliverable R.2.2.A, Part I, pages 1–52, 1993. 39. D. Driankov, H. Hellendoorn, and M. Reinfrank. An Introduction to Fuzzy Control. Springer, 1996. 40. D. Dubois, L. Godo, R. Lopez de Mantaras, and H. Prade. Qualitative reasoning with imprecise probabilities. Journal of Intelligent Information Systems, 2:319–363, 1993. 41. D. Dubois and H. Prade. Fuzzy Sets and Systems: Theory and Applications. Academic Press, New York, 1980. 42. D. Dubois and H. Prade. Fuzzy cardinality and the modelling of imprecise quantification. Fuzzy Sets and Systems, 16:199–230, 1985. 43. D. Dubois and H. Prade. Fuzzy sets in approximate reasoning I, II. Fuzzy Sets and Systems, 40(1):143–202, 202–244, 1991. 44. D. Dubois, H. Prade, and R. Yager. Merging fuzzy information. In D. Dubois, H. Prade, and J. Bezdek, editors, Fuzzy Sets in Approximate Reasoning and Expert Systems, pages 335–402. Kluwer, Dordrecht, NL, 1999. 45. D. Dubois, H. Prade, and R.R. Yager, editors. Fuzzy Information Engineering. Wiley, New York, 1997. 46. D. Edgington. Vagueness by degrees. In R. Keefe and P. Smith, editors, Vagueness: A Reader, pages 294–316. MIT Press/A Bradford Book, Cambridge, MA, 1997. 47. K. Fine. Vagueness, truth and logic. Synthese, 1975. ¨ 48. G. Frege. Uber Sinn und Bedeutung. Zeitschrift f¨ ur Philosophie und philosophische Kritik, 100:25–50, 1892. 49. B.R. Gaines. Foundations of fuzzy reasoning. Int. Journal of Man-Machine Studies, 8:623–668, 1978. 50. L.T.F. Gamut. Intensional Logic and Logical Grammar, volume II of Logic, Language, and Meaning. The University of Chicago Press, Chicago & London, 1984. 51. I. Gl¨ ockner. Evaluation of quantified propositions in generalized models of fuzzy quantification. International Journal of Approximate Reasoning, 37(2):93–126, 2004. 52. I. Gl¨ ockner. Fuzzy Quantifiers in Natural Language: Semantics and Computational Models. Der Andere Verlag, Osnabr¨ uck, Germany, 2004.
428
References
53. I. Gl¨ ockner. DFS – an axiomatic approach to fuzzy quantification. TR97-06, Technical Faculty, University Bielefeld, 33501 Bielefeld, Germany, 1997. 54. I. Gl¨ ockner. A framework for evaluating approaches to fuzzy quantification. TR99-03, Technical Faculty, University Bielefeld, 33501 Bielefeld, Germany, 1999. 55. I. Gl¨ ockner. Advances in DFS theory. TR2000-01, Technical Faculty, University of Bielefeld, 33501 Bielefeld, Germany, 2000. 56. I. Gl¨ ockner. An axiomatic theory of quantifiers in natural languages. TR200003, Technical Faculty, University of Bielefeld, 33501 Bielefeld, Germany, 2000. 57. I. Gl¨ ockner. A broad class of standard DFSes. TR2000-02, Technical Faculty, University of Bielefeld, 33501 Bielefeld, Germany, 2000. 58. I. Gl¨ ockner. Standard models of fuzzy quantification. TR2001-01, Technical Faculty, University of Bielefeld, 33501 Bielefeld, Germany, 2001. 59. I. Gl¨ ockner. Fundamentals of fuzzy quantification: Plausible models, constructive principles, and efficient implementation. TR2002-07, Technical Faculty, University of Bielefeld, 33501 Bielefeld, Germany, 2003. 60. I. Gl¨ ockner and A. Knoll. Fuzzy quantifiers for processing natural language queries in content-based multimedia retrieval systems. TR97-05, Technical Faculty, University of Bielefeld, 33501 Bielefeld, Germany, 1997. 61. I. Gl¨ ockner and A. Knoll. Application of fuzzy quantifiers in image processing: A case study. In Proceedings of KES’99, Adelaide, Australia, 1999. 62. I. Gl¨ ockner and A. Knoll. Natural-language navigation in multimedia archives: An integrated approach. In Proceedings of the Seventh ACM Multimedia Conference (MM ’99), pages 313–322, Orlando, Florida, 1999. 63. I. Gl¨ ockner and A. Knoll. Query evaluation and information fusion in a retrieval system for multimedia documents. In Proceedings of Fusion’99, pages 529–536, Sunnyvale, CA, 1999. 64. I. Gl¨ ockner and A. Knoll. Architecture and retrieval methods of a search assistant for scientific libraries. In W. Gaul and R. Decker, editors, Classification and Information Processing at the Turn of the Millennium. Springer, Heidelberg, 2000. 65. I. Gl¨ ockner and A. Knoll. Architecture and retrieval methods of a search assistant for scientific libraries. In W. Gaul and R. Decker, editors, Classification and Information Processing at the Turn of the Millennium, pages 460–468. Springer, Heidelberg, 2000. 66. I. Gl¨ ockner and A. Knoll. Fuzzy quantification in granular computing and its role for data summarisation. In W. Pedrycz, editor, Granular Computing: An Emerging Paradigm, Studies in Fuzziness and Soft Computing, pages 215–256. Physica-Verlag, Heidelberg, 2001. 67. I. Gl¨ ockner, A. Knoll, and A. Wolfram. Data fusion based on fuzzy quantifiers. In Proceedings of EuroFusion98, International Data Fusion Conference, pages 39–46, 1998. 68. J.A. Goguen. The logic of inexact concepts. Synthese, 19:325–373, 1969. 69. F. Hamm. Nat¨ urlich-sprachliche Quantoren. Modelltheoretische Untersuchungen zu universellen semantischen Beschr¨ ankungen. Number 236 in Linguistische Arbeiten. Niemeyer, T¨ ubingen, 1989. 70. C.J. Harris, C.G. Moore, and M. Brown. Intelligent Control: Aspects of Fuzzy Logic and Neural Nets. World Scientific, Singapore, 1993. 71. J. van Heijenoort. From Frege to G¨ odel – A Source Book in Mathematical Logic. Harvard University Press, Cambridge, MA, 1967.
References
429
72. I. Heim. The semantics of definite and indefinite noun phrases. PhD thesis, University of Massachusetts, Amherst, 1982. 73. H. Helbig. Die semantische Struktur nat¨ urlicher Sprache. Wissensrepr¨ asentation mit MultiNet. Springer, Berlin, Heidelberg, 2000. 74. L. Henkin. Some remarks on infinitely long formulas. In Infinitistic methods, pages 167–183. Pergamon Press, New York, and Polish Scientific Publishers, Warsaw, 1961. 75. F. Herrera and J.L. Verdegay. On group decision making under linguistic preferences and fuzzy linguistic quantifiers. In B. Bouchon, R.R. Yager, and L.A. Zadeh, editors, Fuzzy Logic and Soft Computing, pages 173–180. World Scientific, 1995. 76. J. Hintikka. Quantifiers vs. quantification theory. Dialectica, 27:329–358, 1973. 77. P. H´ ajek and L. Kohout. Fuzzy implications and generalized quantifiers. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 4(3):225–233, 1996. 78. I. Homann. Fuzzy-Suchmethoden im Information-Retrieval. PhD thesis, University of Bielefeld, Bielefeld, Germany, 2004. 79. F. de Jongh and H. Verkuyl. Generalized quantifiers: The properness of their strength. In J. van Benthem and A. ter Meulen, editors, Generalized Quantifiers in Natural Language. Foris, Dordrecht, 1984. ¨ Rhein80. J. Kacprzyk. Multistage Decision-Making under Fuzziness. Verlag TUV land, Cologne, 1983. 81. J. Kacprzyk. Multistage Fuzzy Control: A Model-Based Approach to Control and Decision-Making. Wiley, Chichester, 1997. 82. J. Kacprzyk. Intelligent data analysis via linguistic data summaries: A fuzzy logic approach. In R. Decker and W. Gaul, editors, Classification and Information Processing at the Turn of the Millenium, pages 153–161. Springer, 2000. 83. J. Kacprzyk and P. Strykowski. Linguistic summaries of sales data at a computer retailer: A case study. In Proc. of 8th International Fuzzy Systems Association World Congress (IFSA ‘99), pages 29–33, 1999. 84. J. Kacprzyk and S. Zadro˙zny. Fuzzy queries in Microsoft Access V.2. In Dubois et al. [45], pages 223–232. 85. J. Kacprzyk, S. Zadro˙zny, and M. Fedrizzi. An interactive GDSS for consensus reaching using fuzzy logic with linguistic quantifiers. In Dubois et al. [45], pages 567–574. 86. J. Kacprzyk and A. Zi´ olkowsi. Database queries with fuzzy linguistic quantifiers. IEEE Transactions on Systems, Man, and Cybernetics, 16(3):474–479, 1986. 87. H. Kamp. A theory of truth and semantic representation. In J. Groenendijk, T. Janssen, and M. Stokhof, editors, Formal Methods in the Study of Language, Amsterdam: Mathematical Centre, 1981. 88. J. Kamp. Two theories about adjectives. In Edward Keenan, editor, Formal Semantics of Natural Language. Cambridge University Press, 1975. 89. A. Kaufmann. Introduction to the Theory of Fuzzy Subsets, volume 1: Fundamental theoretical elements. Academic Press, New York, 1975. 90. R. Keefe and P. Smith, editors. Vagueness: A Reader. A Bradford Book/MIT Press, Cambridge, MA, 1997.
430
References
91. E. Keenan and L. Moss. Generalized quantifiers and expressive power of natural language. In J. van Benthem and A. ter Meulen, editors, Generalized Quantifiers in Natural Language, pages 73–124. Foris, Dordrecht, 1984. 92. E. Keenan and J. Stavi. A semantic characterization of natural language determiners. Linguistics and Philosophy, 9:253–326, 1986. 93. E. Keenan and D. Westerst˚ ahl. Generalized quantifiers in linguistics and logic. In J. van Benthem and A. Ter Meulen, editors, Handbook of Logic and Language, chapter 15, pages 837–893. Elsevier, Amsterdam, NL, 1997. 94. H.J. Keisler. Logic with the quantifier ‘There Exist Uncountably Many’. Annals of Mathematical Logic, 1:1–93, 1969. 95. S.C. Kleene. Introduction to Metamathematics. North-Holland, Amsterdam, 1952. 96. E.P. Klement, R. Mesiar, and E. Pap. Quasi- and pseudo-inverses of monotone functions, and the construction of t-norms. Fuzzy Sets and Systems, 104:3–13, 1999. 97. G.J. Klir and T.A. Folger. Fuzzy Sets, Uncertainty, and Information. Prentice Hall, Englewood Cliffs, NJ, 1988. 98. G.J. Klir, U.H. St. Clair, and B. Yuan. Fuzzy Set Theory: Foundations and Applications. Prentice Hall, Upper Saddle River, NJ, 1997. 99. G.J. Klir and B. Yuan. Fuzzy Sets and Fuzzy Logic: Theory and Applications. Prentice Hall, Upper Saddle River, NJ, 1995. 100. G.J. Klir and B. Yuan, editors. Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems: Selected Papers by Lotfi A. Zadeh. World Scientific, Singapore, 1996. 101. S. A. Kripke. Semantical analysis of modal logic II. In J.W. Addison and A. Tarski, editors, The theory of models, pages 206–220. North-Holland, Amsterdam, 1965. 102. S.A. Kripke. A completeness theorem in modal logic. Journal for Symbolic Logic, 24:1–14, 1959. 103. R. Kruse, E. Schwecke, and J. Heinsohn. Uncertainty and Vagueness in Knowledge Based Systems: Numerical Methods. Springer, Heidelberg, 1991. 104. Y. Lai and C. Hwang. Fuzzy Mathematical Programming. Springer, Heidelberg, 1992. 105. G. Lakoff. Hedges: a study in meaning criteria and the logic of fuzzy concepts. Journal of Philosophical Logic, 2:458–508, 1973. 106. P. Lindstr¨ om. First order predicate logic with generalized quantifiers. Theoria, 32:186–195, 1966. 107. P. Lindstr¨ om. On extensions of elementary logic. Theoria, 35:1–11, 1969. 108. Y. Liu and E.E. Kerre. An overview of fuzzy quantifiers. (I). Interpretations. Fuzzy Sets and Systems, 95:1–21, 1998. 109. Y. Liu and E.E. Kerre. An overview of fuzzy quantifiers. (II). Reasoning and applications. Fuzzy Sets and Systems, 96:135–146, 1998. 110. D. Losada, S. Barro, A. Bugar´ın-Diz, and F. D´ıaz-Hermida. Experiments on using fuzzy quantified sentences in adhoc retrieval. In ACM Symposium on Applied Computing, pages 1059–1066, 2004. 111. K.F. Machina. Truth, belief and vagueness. Journal of Philosophical Logic, 5:47–78, 1976. Reprinted in [90, p. 174-203]. 112. E.H. Mamdani and S. Assilian. An experiment in linguistic synthesis with a fuzzy logic controller. Int. Journal of Man-Machine Studies, 7:1–13, 1975. 113. R. Mesiar and M. Navara. Diagonals of continuous triangular norms. Fuzzy Sets and Systems, 104:35–41, 1999.
References
431
114. S. Miyamoto. Fuzzy Sets in Information Retrieval and Cluster Analysis. Kluwer Academic Publishers, Dordrecht, 1990. 115. R. Montague. The proper treatment of quantification in ordinary English. In J. Hintikka, J. Moravcsik, and P. Suppes, editors, Approaches to Natural Language, pages 221–242. Reidel, Dordrecht, 1973. 116. A. Mostowski. On a generalization of quantifiers. Fundamenta Mathematicae, 44:12–36, 1957. 117. M. Mucientes, R. Iglesias, C. Regueiro, A. Bugar´ın, P. Cari˜ nena, and S. Barro. Fuzzy temporal rules for mobile robot guidance in dynamic environments. IEEE Transactions on Systems, Man, and Cybernetics, Part C, 66(1), 2002. 118. M. Mukaidono. On some properties of fuzzy logic. Systems – Computers – Controls, 6(2):36–43, 1975. 119. H. Nakanishi, I.B. Turksen, and M. Sugeno. A review and comparison of six reasoning methods. Fuzzy Sets and Systems, 57(3):257–294, 1993. 120. C.V. Negoita. Expert Systems and Fuzzy Systems. Benjamin/Cummings, Menlo Park, CA, 1985. 121. S.E. Newstead and K. Coventry. The role of context and functionality in the interpretation of quantifiers. European Journal of Cognitive Psychology, 12:243–259, 2000. 122. S.E. Newstead, P. Pollard, and D. Riezebos. The effect of set size on the interpretation of quantifiers used in rating scales. Applied Ergonomics, 18(3):178– 182, 1987. 123. H.T. Nguyen and E.A. Walker. A First Course in Fuzzy Logic. Chapman & Hall/CRC Press, Boca Raton, FL, 2nd edition, 2000. 124. S.K. Pal and D.K. Majumder. Fuzzy Mathematical Approach to Pattern Recognition. John Wiley, New York, 1986. 125. W. Pedrycz. Fuzzy Control and Fuzzy Systems. John Wiley, New York, 1989. 126. H. Prade. Lipski’s approach to incomplete information data bases restated and generalized in the setting of Zadeh’s possibility theory. Information Sciences, 9(1):27–42, 1984. 127. H. Prade. A two-layer fuzzy pattern matching procedure for the evaluation of conditions involving vague quantifiers. Journal of Intelligent and Robotic Systems, 3:93–103, 1990. 128. H. Prade and C. Testemale. Generalizing database relational algebra for the treatment of incomplete or uncertain information and vague queries. Information Sciences, 34(2):115–143, 1984. 129. W.V. Quine. Word and Object. Cambridge University Press, Cambridge, Mass., 1960. 130. W.V. Quine. Ontological Relativity and other Essays. Columbia University Press, New York, 1969. 131. A. Ralescu, B. Bouchon, and D. Ralescu. Combining fuzzy quantifiers. In W. Chiang and J. Lee, editors, Fuzzy Logic for the Applications to Complex Systems, pages 246–252. World Scientific, Singapore, 1995. 132. A.L. Ralescu. A note on rule representation in expert systems. Information Sciences, 38:193–203, 1986. 133. Anca Ralescu, Kaoru Hirota, and Dan Ralescu. Evaluation of fuzzy quantified expressions. In Fuzzy Logic in Artificial Intelligence (IJCAI Workshop), volume 1566 of Lecture Notes in Computer Science, pages 234–244. Springer, 1999. 134. D. Ralescu. Cardinality, quantifiers, and the aggregation of fuzzy criteria. Fuzzy Sets and Systems, 69:355–365, 1995.
432
References
135. D. Rasmussen and R.R. Yager. A fuzzy SQL summary language for data discovery. In Dubois et al. [45], pages 253–264. 136. N. Rescher. Quantifiers in many-valued logic. Logique et Analyse, 7:181–184, 1964. 137. N. Rescher. Many-valued logic. Mc Graw-Hill, 1969. 138. J.B. Rosser. The introduction of quantification into a Three-valued logic. reprinted for the members of the Fifth International Congress for the Unity of Science (Cambridge, 1939) from The Journal of Unified Science (Erkenntnis), vol. 9 (never published). Abstracted in: Journal of Symbolic Logic, vol. 4 (1939), 170-171, 1939. 139. J.B. Rosser and A.R. Turquette. Axiom schemes for m-valued functional calculi of first order. Part I. Definition of axiom schemes and proof of plausibility. Journal of Symbolic Logic, 13:177–192, 1948. 140. J.B. Rosser and A.R. Turquette. Axiom schemes for m-valued functional calculi of first order. Part II. Deductive completeness. Journal of Symbolic Logic, 16:22–34, 1951. 141. J.B. Rosser and A.R. Turquette. Many-valued logics. North-Holland, Amsterdam, 1952. 142. R. Rovatti and C. Fantuzzi. s-norm aggregation of infinite collections. Fuzzy Sets and Systems, 84:255–269, 1996. 143. B. Russell. Mathematical logic as based on the theory of types. American Journal of Mathematics, 30:222–262, 1908. 144. B. Russell. History of Western Philosophy. Unwin University Books, 1946. 145. I.A. Sag and E.F. Prince. Bibliography of works dealing with presupposition. In Oh and Dinneen, editors, Syntax and Semantics, volume 11: Presupposition, pages 389–403. Academic Press, New York, 1979. 146. R. van der Sandt. Context and Presupposition. Croom Helm, New York, 1988. 147. K. Schlechta. Defaults as generalized quantifiers. Journal of Logic and Computation, 5(4):473–494, 1995. 148. B. Schweizer and A. Sklar. Probabilistic metric spaces. North-Holland, Amsterdam, 1983. 149. J. Sgro. Completeness theorems for topological models. Annals of Mathematical Logic, pages 173–193, 1977. 150. W. Silvert. Symmetric summation: A class of operations on fuzzy sets. IEEE Transactions on Systems, Man, and Cybernetics, 9:657–659, 1979. 151. R.A. Sorensen. Blindspots. Clarendon Press, Oxford, 1988. 152. M. Spies. Syllogistic inference under uncertainty. Psychologie-Verlags-Union, M¨ unchen, 1989. 153. M. Stokhof and J. Groenendijk, editors. Studies in Discourse Representation Theory and the Theory of Generalized Quantifiers. De Gruyter, 1986. 154. M. Sugeno. Fuzzy measures and fuzzy integrals: A survey. In M.M. Gupta, G.N. Saridis, and B.R. Gaines, editors, Fuzzy Automata and Decision Processes, pages 89–102. North-Holland, Amsterdam, 1977. 155. P. Suppes. Introduction to logic. Van Nostrand, Princeton, New Jersey, 1957. 156. H. Thiele. Zum Aufbau einer systematischen Theorie von Fuzzy-Quantoren. In 13. Workshop “Interdisziplin¨ are Methoden in der Informatik” – Arbeitstagung des Lehrstuhls Informatik I, Forschungsbericht Nr. 506, pages 35–55. Fachbereich Informatik der Universit¨ at Dortmund, 1993.
References
433
157. H. Thiele. On T-quantifiers and S-quantifiers. In The Twenty-Fourth International Symposium on Multiple-Valued Logic, pages 264–269, Boston, MA, 1994. 158. H. Thiele. On fuzzy quantifiers. In Bien and Min [13], pages 343–352. 159. H. Thiele. On median quantifiers. Technical report No. CI-27/98, University of Dortmund, Dept. of Computer Science/XI, 44221 Dortmund, Germany, 1998. 160. E. Thijsse. On some proposed universals of natural language. In A. ter Meulen, editor, Studies in Model-Theoretic Semantics, pages 19–36. Foris, Dordrecht, 1983. 161. H. Th¨ one. Precise conclusions under uncertainty and incompleteness in deductive databases. PhD thesis, University of T¨ ubingen, Germany, 1994. 162. H. Th¨ one, U. G¨ untzer, and W. Kiessling. Toward precision of probability bound propagation. In Proc. of the 8th Conf. on Uncertainty in AI, pages 315–322, San Francisco, 1992. Morgan and Kaufmann. 163. R. Thomason, editor. Formal Philosophy: Selected Papers of Richard Montague. Yale University Press, New Haven, CT, 1974. 164. M. Tye. Sorites paradoxes and the semantics of vagueness. In J.E. Tomberlin, editor, Philosophical Perspectives 8: Logic and Language, pages 189–206. Ridgeview, Atascadero, CA, 1994. Reprinted with omissions in [90, p. 281-293]. 165. M.A. Vila, J.C. Cubero, J.M. Medina, and O. Pons. Using OWA operator in flexible query processing. In R.R. Yager and J. Kacprzyk, editors, The Ordered Weighted Averaging Operators: Theory, Methodology and Applications, pages 258–274. Kluwer Academic Publishers, Boston, 1997. 166. W.J. Walkoe Jr. Finite partially-ordered quantification. Journal of Symbolic Logic, 35:535–555, 1970. 167. D. Westerst˚ ahl. Branching generalized quantifiers and natural language. In P. G¨ ardenfors, editor, Generalized Quantifiers, pages 269–298. Reidel, 1987. 168. T. Williamson. Vagueness and ignorance. Proceedings of the Aristotelian Society, Supplementary Volume, 66:145–162, 1992. Reprinted in [90, p. 265-280]. 169. C. Wright. Language-mastery and the sorites paradox. In G. Evans and J. McDowell, editors, Truth and Meaning: Essays in Semantics, pages 223– 247. Clarendon Press, Oxford, 1976. 170. M. Wygralak. An axiomatic approach to scalar cardinalities of fuzzy sets. Fuzzy Sets and Systems 110, pages 175–179, 2000. 171. R.R. Yager. A new approach to the summarization of data. Information Sciences, 28:69–86, 1982. 172. R.R. Yager. A new approach to the summarization of data. Information Sciences, 28:69–86, 1982. 173. R.R. Yager. Quantified propositions in a linguistic logic. Int. Journal of ManMachine Studies, 19:195–227, 1983. 174. R.R. Yager. Approximate reasoning as a basis for rule-based expert systems. IEEE Transactions on Systems, Man, and Cybernetics, 14(4):636–643, Jul./Aug. 1984. 175. R.R. Yager. General multiple-objective decision functions and linguistically quantified statements. Int. Journal of Man-Machine Studies, 21:389–400, 1984. 176. R.R. Yager. Reasoning with fuzzy quantified statements. Part I. Kybernetes, 14:233–240, 1985. 177. R.R. Yager. Reasoning with fuzzy quantified statements. Part II. Kybernetes, 15:111–120, 1985.
434
References
178. R.R. Yager. A characterization of the extension principle. Fuzzy Sets and Systems, 18(3):205–217, 1986. 179. R.R. Yager. On ordered weighted averaging aggregation operators in multicriteria decisionmaking. IEEE Transactions on Systems, Man, and Cybernetics, 18(1):183–190, 1988. 180. R.R. Yager. Connectives and quantifiers in fuzzy sets. Fuzzy Sets and Systems, 40:39–75, 1991. 181. R.R. Yager. Fuzzy quotient operators for fuzzy relational databases. In Proceedings of IFES ‘91, pages 289–296, 1991. 182. R.R. Yager. On linguistic summaries of data. In W. Frawley and G. Piatetsky-Shapiro, editors, Knowledge Discovery in Databases, pages 347– 363. AAAI/MIT Press, 1991. 183. R.R. Yager. A general approach to rule aggregation in fuzzy logic control. Journal of Applied Intelligence, 2:333–351, 1992. 184. R.R. Yager. Counting the number of classes in a fuzzy set. IEEE Transactions on Systems, Man, and Cybernetics, 23(1):257–264, 1993. 185. R.R. Yager. Families of OWA operators. Fuzzy Sets and Systems, 59:125–148, 1993. 186. R.R. Yager and J. Kacprzyk. Linguistic data summaries: A perspective. In Proc. of 8th International Fuzzy Systems Association World Congress (IFSA ‘99), pages 44–48, 1999. 187. R.R. Yager, S. Ovchinnikov, R.M. Tong, and H.T. Nguyen, editors. Fuzzy Sets and Applications—Selected papers by L.A. Zadeh. John Wiley, New York, 1987. 188. L.A. Zadeh. Fuzzy sets. Information and Control, 8(3):338–353, 1965. 189. L.A. Zadeh. The concept of a linguistic variable and its application to approximate reasoning. Information Sciences, 8,9:199–249,301–357, 1975. 190. L.A. Zadeh. Fuzzy logic and approximate reasoning. Synthese, 30:407–428, 1975. 191. L.A. Zadeh. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and Systems, 1:3–28, 1978. 192. L.A. Zadeh. PRUF—a meaning representation language for natural languages. Int. Journal of Man-Machine Studies, 10:395–460, 1978. 193. L.A. Zadeh. A theory of approximate reasoning. In J. Hayes, D. Michie, and L. Mikulich, editors, Machine Intelligence, volume 9, pages 149–194. Halstead, New York, 1979. 194. L.A. Zadeh. Possibility theory and soft data analysis. In L. Cobb and R.M. Thrall, editors, Mathematical Frontiers of the Social and Policy Sciences, pages 69–129. Westview Press, Boulder, 1981. 195. L.A. Zadeh. Test-score semantics for natural languages and meaning representation via pruf. In B.B. Rieger, editor, Empirical Semantics, Vol. 1, pages 281–349. Brockmeyer, Bochum, 1981. 196. L.A. Zadeh. Fuzzy probabilities and their role in decision analysis. In Proc. of the IFAC Symp. on Theory and Application of Digital Control, pages 15–23, New Delhi, 1982. 197. L.A. Zadeh. A computational approach to fuzzy quantifiers in natural languages. Computers and Mathematics with Applications, 9:149–184, 1983. 198. L.A. Zadeh. A theory of commonsense knowledge. In H.J. Skala et al., editor, Aspects of Vagueness, pages 257–295. Reidel, Dordrecht, 1984.
References
435
199. L.A. Zadeh. Syllogistic reasoning in fuzzy logic and its application to usuality and reasoning with dispositions. IEEE Transactions on Systems, Man, and Cybernetics, 15(6):754–763, 1985. 200. L.A. Zadeh. A computational theory of dispositions. International Journal of Intelligent Systems, 2(1):39–63, 1987. 201. L.A. Zadeh. The role of fuzzy logic in the management of uncertainty in expert systems. In R.R. Yager et al., editor, Fuzzy Sets and Applications: Selected Papers by L.A. Zadeh, pages 413–441. Wiley, New York, 1987. 202. S. Zadro˙zny and J. Kacprzyk. On database summarization using a fuzzy querying interface. In Proc. of 8th International Fuzzy Systems Association World Congress (IFSA ‘99), pages 39–43, 1999. 203. M. Zemankova and A. Kandel. Implementing imprecision in information systems. Information Sciences, 37(1-3):107–141, 1985. 204. M. Zemankova-Leech and A. Kandel. Fuzzy Relational Data Bases: A Key to ¨ Rheinland, Cologne, 1984. Expert Systems. Verlag TUV 205. H.J. Zimmermann. Fuzzy Sets, Decision Making, and Expert Systems. Kluwer, Boston, MA, 1987. 206. H.J. Zimmermann. Fuzzy Set Theory—and its Applications. Kluwer, Boston, MA, (2nd rev. ed.) edition, 1991.
Index
Σ-count 36, 42, 395 Σ-count approach 36, 42, 395 α-cut 180, 381 α-cut representation 183 FΩ -DFS 240, 381 FΩ -QFM 240 Fω -DFS 245 Fω -QFM 243 Fψ -DFS 264, 382 Fψ -QFM 263 Fϕ -DFS 271 Fϕ -QFM 269 Fξ -DFS 224, 277, 381 Fξ -QFM 224 MB -DFS 194, 381 MB -QFM 194 ¬-DFS 150 ¬-DFS 150 )-DFS 154 ( ¬, ∨ -DFS 151 ∨ absolute cardinality 32 absolute quantifier VI, 6, 21, 31, 35, 60, 169, 318, 383, 392 accessibility relation 83 accumulative behaviour of Σ-count approach 43, 396 adequacy criteria see plausibility criterion adjectival restriction 63, 126, 131 adjectival restriction with fuzzy adjectives see fuzzy adjectival restriction aggregation score 348
algebraic method 85, 107 algebraic product 113 algebraic sum 114 alternative construction of induced truth functions 115 ambiguity 15 ambiguity range of a three-valued subset 185 ambiguity relation see specificity order anaphor 64 antipersistent 99 antivalence see exclusive or antonym 93 approximate quantifier 21, 66, 70, 71 approximate reasoning 20 arg-continuous 163, 203, 231, 277, 380 argument compatibility 273 argument insertion 123 argument permutation 116 argument similarity 259 argument structure 72, 73, 116, 384 argument transposition 116 Aristotelian square 1, 95, 386 Aristotle 1 arithmetic mean 151 automorphism invariance VI, 12, 131, 386 axiomatic description of the standard models 157 axiomatic foundation VI, 86, 386, 387 axiomatic procedure 75 axiomatization of MCX 207
axioms for L-DFSes 365
Barwise, J. 355 base quantifier 76 base set 5, 30, 36, 57 base set of infinite cardinality see infinite base set bibliographic databases 348 biimplication see equivalence binary quantifier 9, 31, 58 Black 18 bona fide assumption of linguistic plausibility 53 Boolean algebra 76 borderline case 14, 77 boundary conditions with respect to the crisp case 125 bounded product 113 bounded sum 114 bounding quantifier 383 branching quantification 56, 82, 355, 368, 371, 387 branching quantity in agreement 373 broad notion of quantifiers 6, 30 cardinal comparative 9, 21, 51, 62, 332, 383 cardinal determiners see absolute quantifiers cardinality bounds see cardinality coefficients cardinality coefficients 301, 302, 308 cardinality of fuzzy sets 32, 69 cardinality-based computation of quantitative semi-fuzzy quantifiers 296 case-by-case verification 69, 75 category theory 130 cautiousness 188, 381 cautiousness level 188 characterisation of S-quantifiers 139 characteristic function 18, 88 characterization of FΩ -DFSes 241 characterization of Fω -DFSes 245 characterization of Fψ -DFSes 265 characterization of Fϕ -DFSes 271 characterization of Fξ -DFSes 224 characterization of MB -DFSes 196, 198
characterization of MCX 207 characterization of ‘Gainesian’ QFMs 274 characterization of quantitative semi-fuzzy quantifiers 296 characterization of quantitative unary quantifiers 282 characterization of T-quantifiers 137 Choquet integral 222, 226, 381, 383 circularity of a quantifier 63 classification of linguistic quantifiers 63, 383 classification of the models by the induced negation 150 classification of the models by their induced negation and disjunction 151 cloudiness situation 41, 349 co-t-norm see s-norm coextensional terms 82 coherence VI, 108, 378, 384, 386 comparative determiner see cardinal comparative comparison by specificity 202, 232, 252 compatibility grade 272 compatibility with antonyms VI, 108, 119 compatibility with crisp argument insertion 125 compatibility with cylindrical extensions 118, 379 compatibility with dualisation VI, 96, 108, 378, 379 compatibility with external negation VI, 108, 119, 379 compatibility with functional application 104, 108, 378, 388 compatibility with fuzzy argument insertion 172, 206, 381 compatibility with internal complementation 119, 379 compatibility with intersections of arguments 123 compatibility with inverse images 388 compatibility with partial complementation 122 compatibility with permutations of arguments 118, 379
Index compatibility with the induced extension principle see compatibility with functional application compatibility with the nesting of quantifiers 370, 388 compatibility with transpositions of arguments 117 compatibility with unions of arguments 97, 108 composite quantifier see compound quantifier compound quantifier 57, 63, 384, 386 computation of Fowa from finite representation 290 computation of SQ,X1 ,...,Xn from cardinality coefficients r , ur 299 computation of M from finite representation 292 computation of MCX from finite representation 292 computation of cardinality coefficients
, u from histogram information 312 computation of cardinality coefficients
r , ur 309 computational analysis of general quantifiers 287 computational models 277, 382 concrete models for fuzzy quantification 381 conjunction of quantifiers 154, 165, 380 conjunctions of convex quantifiers are convex 167 conservativity 170, 297, 380, 403 consistency of the DFS axioms 109, 179 consistent view of fuzzy set theory 387 consistently less specific 151 constant quantifier see nullary quantifier constructed quantifier see non-simplex quantifier construction of L-QFM for some ordinary QFM 367 construction of membership functions 377 construction of partial QFM for existing approaches 393
constructions on models of fuzzy quantification 149, 151, 153, 154 constructive principle 179, 221, 237, 259, 381 context 11, 65, 377 context dependence 11, 15, 64, 73, 377 context range of a fuzzy set 135 contextual equality 135 contextual quantifier fuzzification mechanism 136 contextuality 135, 168, 379 continuity conditions 162 continuos in arguments see argcontinuos continuous in quantifiers see Qcontinuous continuous quantifier see mass quantification continuous-valued logic 76 continuous-valued model of vagueness see degree theory of vagueness convex combination 151 convex quantifier 166, 283, 380 core of a fuzzy set 135 correct generalization 87, 108, 111, 144, 378 correlativity 121 correspondence between specifications and operational forms 55, 74, 377 count NP 83 counterexample for FE-count approach 49 covariant functor 130 coverage of linguistic phenomena 376 coverage of quantificational phenomena 51, 52, 73, 80, 386 crisp adjectival restriction 126 crisp argument 71 crisp argument insertion 123, 172 crisp image mapping see crisp powerset mapping crisp powerset mapping 101 crisp range of a three-valued cut 188 crisp set 18 cross-arity coherence 108 cross-domain coherence 105, 108, 378 currying 141 cylindrical extension 77, 118, 128
database interface 28 decomposition of convex quantifiers 167 definite description 3 definite quantifier 3, 56, 62, 79, 383 definition of Fowa 226 definition of FS 227 definition of FA 227 definition of FM 247 definition of FP 247 definition of FK 248 definition of FZ 248 definition of M 192 definition of MS 200 definition of MU 199 definition of MCX 201 definition of M∗ 201 definition of M∗ 200 degree of appropriateness 27 degree of covering the data 27 degree of imprecision 27 degree of orness 38, 45, 384, 400 degree theory of vagueness 17, 65 determinacy 78 determiner 4 determiner fuzzification scheme 106, 378 DFS see determiner fuzzification scheme DFS axioms 106, 108, 378 dilemma of fuzzy quantification 70, 81 direct interpretation of special quantifiers in MCX 282 discontinuous behaviour of Σ-count approach 44, 399 discontinuous behaviour of FG-count approach for proportional quantifiers 410 discontinuous behaviour of FGcount approach for restricted proportional quantification 48 discourse representation theory 63 discrete quantifier see count NP discriminative power 221 disjunction of quantifiers 154, 165, 380 disposition 5, 22 distance measure for quantifiers 163 distributivity 161 domain of quantification see base set
downward monotonic 99 DQL see dynamic quantifier logic DRT see discourse representation theory dual 94, 108 dual s-norm 114 dual of (semi-)fuzzy L-quantifiers 364 duplication of variables see multiple occurrences of variables dynamic quantifier logic 63 EFA
see evaluation framework assumption efficiency considerations 278 epistemic model of vagueness 16, 64 equivalence 114, 165 equivalence of both constructions of induced connectives 115 evaluation framework assumption 393 evaluation framework for existing approaches 391, 392 event 16 exception quantifier see quantifier of exception exclusive or 114, 165 existential quantification 1, 56, 136 existential quantification in a DFS 139 existing approaches to fuzzy quantification 36, 391 explicit quantification 5, 30 expressive power 6, 63, 70 expressiveness see expressive power extension 82 extension principle 102, 259, 272 extensional fuzzy quantifier 134 extensional quantifier 82 extensional quantifier fuzzification mechanism 134 external negation 92 faithful functor 130 FE-count 37, 49, 285, 412 FE-count approach 37, 48, 412 FG-count 36, 37, 47, 285, 406 FG-count approach 36, 37, 47, 213, 381, 406 fields of quantification 22, 70, 80 finite representation of Q,X1 ,...,Xn and ⊥Q,X1 ,...,Xn 290
Index finite scales of membership grades 214 first-order definable 6, 63, 376 first-order predicate 19 first-order proposition 81 first-order representation of fuzzy quantifiers see fuzzy linguistic quantifier fixed context assumption 11, 64, 65 flat representation of generalized quantifiers 57, 59 floating-point arithmetics 313, 325, 337 formal limits on the adequacy of interpretations 161, 370, 380 foundations of fuzzy reasoning 272 framework for fuzzy quantification 29, 55, 376 Frege’s principle see principle of compositionality full two-valued L-quantifier 358, 359 functional application 104, 108 functor 130 fuzzification mechanism 74, 75, 108, 274, 378 fuzziness see vagueness fuzziness in quantifiers 21, 376 fuzziness in the arguments 21, 70, 376 fuzzy adjectival restriction 126, 172, 381 fuzzy argument 66, 80 fuzzy argument insertion 172, 206, 370, 381 fuzzy arguments 21 fuzzy boundary 14 fuzzy branching quantification 387 fuzzy cluster analysis 20 fuzzy complement 18 fuzzy conjunction 113 fuzzy control 20 fuzzy data fusion 20, 29 fuzzy database 20, 28 fuzzy decision making 20 fuzzy design and optimization 20 fuzzy determiner see fuzzy quantifier fuzzy disjunction 114 fuzzy equivalence connective 260 fuzzy existential quantifier see S-quantifier fuzzy expert systems 20
fuzzy identity truth function 112 fuzzy image mapping see fuzzy powerset mapping fuzzy inclusion relation 97 fuzzy information retrieval 20, 57, 347 fuzzy intersection 18 fuzzy interval cardinality 284 fuzzy inverse image 141 fuzzy L-quantifier 360 fuzzy linguistic quantifier 34, 377, 383, 392 fuzzy median 152 fuzzy median-based aggregation mechanism 190 fuzzy multi-place quantification VI, 108, 141 fuzzy natural language quantification 86, 388 fuzzy negation 112 fuzzy pattern recognition 20 fuzzy powerset 18, 130 fuzzy powerset mapping 102 fuzzy pre-determiner see semi-fuzzy quantifier fuzzy projection L-quantifier 361 fuzzy projection quantifier 89, 108, 361 fuzzy quantifier 31, 66, 70, 361, 376 fuzzy querying see weighted querying fuzzy relation 18 fuzzy second-order relation 19, 31, 66, 376 fuzzy set see fuzzy subset fuzzy set theory 18 fuzzy subset 18 fuzzy truth function 91 fuzzy union 18 fuzzy universal quantifier see T-quantifier Gainesian fuzzification mechanism 157, 272, 274 general fuzzy quantifier 66 generalized fuzzy median 153, 184, 186, 222, 381 generalized Lindstr¨ om quantifier 358 generalized quantifier 56, 57, 70, 71, 375, 376 generator of a strong negation 150
granularity of membership values 216 greatest lower specificity bound of -models 153 ∨ groups of interrelated objects 373 ‘have extension’ 133 hedge see linguistic hedge hedging see linguistic hedge higher-order logic 81 higher-order proposition 81 higher-order quantifier 81 histogram 310 homogeneity of interpretations homogeneous classes of models homomorphism 86, 378, 394
149 149
idempotence 161 identity 121 identity truth function 112 IL see intensional logic image ranking task 41, 349 imperfection of natural language 13 implementation of absolute quantifiers 318 implementation of cardinal comparatives in Fowa 337, 340 implementation of cardinal comparatives in M 337, 340 implementation of cardinal comparatives in MCX 337, 340 implementation of fuzzy quantifiers 277, 382 implementation of proportional quantifiers in Fowa 325, 328 implementation of proportional quantifiers in M 325, 328 implementation of proportional quantifiers in MCX 325, 328 implementation of quantifiers in MCX based on interval cardinality 285 implementation of quantifiers of exception 318 implementation of unary quantifiers in Fowa 313, 316 implementation of unary quantifiers in M 313, 316 implementation of unary quantifiers in MCX 313, 316 implicit quantification 5, 30
importance qualification 38, 41, 51, 325, 328, 350, 385 inclusion approach to fuzzy quantification 37, 38, 48 incompatibility of Σ-count approach with antonyms 399 incompatibility of OWA approach with dualisation of proportional quantifiers 406 indepencence of the DFS axioms 107 indeterminacy 184 indeterminate truth-value 184 indicator function see characteristic function individual correspondence assertion 75 induced connective see induced fuzzy truth function induced extension principle 103, 107, 129, 379 induced extension principle of an L-QFM 365 induced fuzzy complement 92 induced fuzzy conjunction 92, 114 induced fuzzy connective see induced fuzzy truth function induced fuzzy disjunction 92, 114 induced fuzzy equivalence 155 induced fuzzy exclusive or 155 induced fuzzy existential quantifier 139 induced fuzzy implication 114 induced fuzzy intersection 92 induced fuzzy inverse images 141, 379 induced fuzzy negation 92, 113 induced fuzzy truth function 91, 107, 112 induced fuzzy truth function of an L-QFM 363 induced fuzzy union 92 induced fuzzy universal quantifier 138 induced operation on fuzzy sets 92 induced propositional logic 112 induced truth functions of standard models 156 infinite base set VI, 57, 356, 385, 386 information retrieval 28, 347, 349 injective mapping 130
Index insertion of crisp arguments see crisp argument insertion insertion of fuzzy argument see fuzzy argument insertion integer arithmetics 315, 316, 328, 340 integral-based approach to fuzzy quantification 38 intension 82 intensional logic 56 intensional quantifier 82, 388 internal coherence 108 internal complementation 93 internal negation see internal complementation interpretation mechanism 29, 35, 36, 51, 392 interpretation of cardinality comparisons 280 interpretation of fuzzy quantifiers see modelling problem interpretation of ordinary quantifiers in an L-QFM 363 interpretation of the standard quantifiers 278 interpretation of two-valued quantifiers in standard models 157 intersection of arguments 97, 122 interval-valued quantifications 388 inverse image mapping 102 IR see information retrieval K-standard sequence logic 156 Kleene’s three-valued logic 156, 183 Kleene-Dienes implication 156 Klein group structure 95 knowledge acquisition 377 L-DFS 365 L-DFS axioms 365 L-model see L-DFS L-QFM see L-quantifier fuzzification mechanism L-quantifier see two-valued Lquantifier L-quantifier fuzzification mechanism 360 law of contradiction 161 -models 154 least upper bound of ∨ Lindstr¨ om quantifier 12, 356, 357
linguistic adequacy 53 linguistic data summarization 20, 26, 28, 350, 373 linguistic data summarization based on branching quantification 373 linguistic hedge 78, 81 linguistic plausibility 391 linguistic quantifiers 4, 20 linguistic summary see linguistic data summarization, 350, 373 linguistic theory of quantification see theory of generalized quantifiers local monotonicity in arguments 126 local monotonicity of a DFS 129 logic with generalized quantifiers 82, 355 logical quantifier 4, 10, 136, 375 logical symbol 10, 375 lower bound mapping ⊥Q,X1 ,...,Xn 223 main approaches to fuzzy quantification 38 many-valued logic 24, 313 mass quantification 83, 385–387 mean operator 152 measure theory 386, 387 median-based models 179 membership assessment 88, 89, 108, 361, 378 membership degree 18 membership function 18, 72 meteorological image 41, 349 metric for argument similarity 162 model aggregation scheme 151 model of fuzzy quantification 29, 85 model transformation scheme 150, 214, 379 modelling of vagueness 64 modelling problem 4, 25, 27, 55, 68, 376, 377 models based on the extension principle 259 models defined in terms of the standard extension principle 382 models defined in terms of three-valued cuts 237 models of fuzzy quantification 108 models which induce the standard disjunction 154
models which induce the standard negation 150 monotonic quantifier 283 monotonicity of (semi-)fuzzy Lquantifiers in their arguments 364 monotonicity of a DFS 128 monotonicity of a quantifier in its arguments 98 monotonicity properties of quantifiers 126 Montague, R. 56 Mostowskian quantifier 11 multi-criteria decisionmaking 29, 57 multi-place quantifier VI, 9, 51, 57, 80, 108, 141, 384 multiple occurrences of variables 380 multiple variable binding 355, 357 mutual relationship 373 narrow notion of quantifiers 5 natural language V natural language interface 28 natural language processing 28 negation 121 negative evidence against the Σ-count approach 395 negative evidence against the FE-count approach 412 negative evidence against the FG-count approach 406 negative evidence against the OWA approach 400 nested representation of generalized quantifiers 58, 59 nesting of quantifiers 368, 369, 388 NL see natural language NL quantifiers see linguistic quantifiers NLI see natural language interface NLP see natural language processing nominal phrase 4 non-logical symbol 10, 375 non-monotonic logic 6 non-monotonic quantifier 50, 285 non-quantitative quantifier VI, 57, 80, 131, 376, 386 non-referring term 64 non-simplex quantifier 77
non-standard model 380 nondecreasing (semi-)fuzzy L-quantifier 364 nondecreasing approximation of a quantifier 371 nondecreasing quantifier 98, 283, 372 nonexistence of α-cut based models 182 nonexistence of models based on the resolution principle 183 nonincreasing (semi-)fuzzy L-quantifier 364 nonincreasing approximation of a quantifier 371 nonincreasing quantifier 98, 283, 372 novel framework for fuzzy quantification 55, 80, 375, 376 NP see nominal phrase nullary quantifier 58, 78, 87 observable behaviour 85 one-place quantifier see unary quantifier opaque context 82 operational form of a linguistic quantifier 29, 55 operations on the argument sets 116 optional semantical properties 161 ordered weighted averaging operator 37, 45 ordered weighted averaging operators 400 OWA approach 37, 38, 45, 222, 226, 381, 400 OWA operator see ordered weighted avaraging operator parametrized quantifiers 72 partial quantifier fuzzification mechanism 393, 394 partially ordered quantifiers 56 permutation 116 persistent 99 Piaget group of transformations 121, 386 plausibility criterion 53, 85, 144, 378 plausibility of induced fuzzy inverse images 141
Index plausible model of fuzzy quantification 85 possibility theory 388 possible world 83 potential model of fuzzy quantification see quantifier fuzzification mechanism power of a fuzzy set see Σ-count powerset mapping 101 practical FΩ -models are Fξ -models 255 practical model of fuzzy quantification 179, 277 precise quantifier 21, 57 precisification 17 preservation of Aristotelian squares 96, 119, 156, 386 preservation of Boolean argument structure 97 preservation of contextuality 136 preservation of convexity 168, 380 preservation of extension 134, 379, 386 preservation of inequations between quantifiers see monotonicity of a DFS preservation of local inequalities see local monotonicity of a DFS preservation of local monotonicity 127, 379 preservation of monotonicity in the arguments 100, 108, 126, 378 preservation of monotonicity properties 100 preservation of quantitativity 132, 386 preservation of strong conservativity 171 preservation of symmetry properties 118 preservation of weak conservativity 171 preservation property 86, 394 presupposition 3, 62 principle of compositionality 9, 56, 63, 124 principle of extensionality 82 principle of supervaluation 17, 381 principle of tolerance 16 probability theory 15 processing of fuzzy arguments 377
projection L-quantifier 361 projection quantifier 88, 108, 361 propagation of fuzziness 163, 221, 227, 249, 250, 381 propagation of fuzziness in arguments 164, 205, 229, 250, 251, 380 propagation of fuzziness in quantifiers 164, 205, 228, 249, 250, 380 propagation of unspecificity 229, 250, 251 proper name 4, 62, 89, 108, 131, 386 proper name quantifier 62 properties of Fowa 381 properties of FΩ -models 249 properties of Fξ -models 227 properties of MB -models 201 properties of MCX 206, 381 properties of induced extension principle 129 properties of induced truth functions 112 proportional comparative 21, 383, 384 proportional determiners see proportional quantifier proportional quantifier VI, 7, 21, 31, 35, 60, 321, 383, 392 propositional function 91 prototypical models 382, 383 Q-continuous 163, 203, 230, 255, 277, 380 QFA see quantification framework assumption QFM see quantifier fuzzification mechanism quantification framework assumption 76, 81, 393 quantifier fuzzification mechanism 74, 85, 377, 393 quantifier interpolation method 38, 41 quantifier nesting see nesting of quantifiers quantifier of comparison 61 quantifier of exception 21, 60, 318, 383 quantifier of the third kind 31, 385 quantifier restricted to E 12 quantifier synthesis method 404, 411 quantifiers for cardinality comparison 280
quantifiers of the first kind see absolute quantifiers quantifiers of the second kind see proportional quantifier quantifiers of the third kind 347 quantitativity 10, 12, 131, 283, 376, 386 quantity in agreement 350 querying of information systems 347, 349 range of quantificational phenomena 29, 55, 73, 80, 383, 386 ranking criterion 348 ‘raw’ approaches to fuzzy quantification see quantifier fuzzification mechanism reasoning with fuzzy quantifiers 26, 27 reciprocal construction 82, 373 reciprocal predicate 355, 373 reciprocal summarizer 373 reciprocity 121 reduction of Fω -DFSes to Fψ -DFSes 266 reduction of Fψ -DFSes to Fω -DFSes 267 reduction of Fψ -QFMs to Fϕ -QFMs 270 reduction of Fϕ -QFMs to Fψ -QFMs 270 reduction of SQ,X1 ,...,Xn to cardinality coefficients r , ur 299 reduction of a DFS to a ¬-DFS 150 reduction of absolute quantifiers to unary quantifiers 318 reduction of branching quantification to Lindstr¨ om quantifiers 356 reduction of existential quantification to induced extension principle 131 reduction of fuzzy branching quantification to semi-fuzzy L-quantifiers 371 reduction of fuzzy existential quantification to induced disjunction 140 reduction of fuzzy quantification to cardinality comparisons 35, 37, 52
reduction of fuzzy quantification to first-order representations 32 reduction of fuzzy universal quantification to induced conjunction 140 reduction of induced extension principle to existential quantification 130 reduction of induced fuzzy disjunction to induced extension principle 140 reduction of L-DFSes to ordinary DFSes 367 reduction of multi-place quantification to unary quantification 141 reduction of quantifiers of exception to unary quantifiers 318 reduction of unrestricted proportional quantifiers to unary quantifiers 329 redundant argument see cylindrical extension regular nondecreasing 45, 400 relational structure 357 relationship between quantifiers 63 relative Σ-count 395 relative cardinality 32, 384, 395 relative FG-count 407 relative quantifiers see proportional quantifier reliability 52 Rescher’s quantifiers 24, 77, 81 resolution principle 182, 381 restricted proportional quantification 50, 325, 328 restricted quantification see restricted use of a quantifier restricted use of a quantifier 8, 31, 35, 60, 375 restriction of a quantifier 8, 59 robustness against the choice of the full domain 133 robustness of MCX 211 Russell, B. 3, 56, 62 S-function 33 s-norm 114, 138 S-quantifier 136, 138 scalar cardinality of a fuzzy set 395 scope of a quantifier 8, 36, 58, 59, 332
Index second-order predicate 19 self-consistency of quantifier interpretations 107 semantic properties of plausible models 111 semantic universal 63 semantical characteristics of quantifiers 63 semantical constraint see plausibility criterion semantical expectation 378 semantical plausibility 55, 386 semantical postulate see plausibility criterion semantical requirement see plausibility criterion semi-fuzzy L-quantifier 359 semi-fuzzy quantifier 71, 361, 376, 393 semi-fuzzy truth function 91, 363 separation into application-dependent and independent parts 377 separation of context-dependent and context-independent factors 377 set of indeterminates 189 set of similarity grades 261 Sigma-count approach see Σ-count approach similarity grade 259 simplex quantifier see base quantifier smoothness see continuity conditions Sorites paradox 14, 17, 65 spatial adverbs 5 spatial quantification 349 specification medium see specification of linguistic quantifiers specification of linguistic quantifiers 29, 55, 71, 376, 384 specificity consistent 153, 202, 233, 252, 253 specificity order 151, 202, 232, 252 SQL 28, 52 square of opposition see Aristotelian square stable symmetric sum 151 standard connectives 107 standard DFS 149, 156 standard dual 155 standard extension principle 103, 105, 107, 140, 382
standard fuzzy complement 113 standard fuzzy conjunction 113 standard fuzzy disjunction 114 standard fuzzy intersection 155 standard fuzzy negation 113 standard fuzzy union 155 standard model of fuzzy quantification see standard DFS standard models of fuzzy quantification 380 standard of comparison 11, 64 standard quantifier 136 strict α-cut 180 strong conservativity 171 strong negation 101, 112 structural characteristics of quantifiers 375 structural weaknesses of the traditional framework 51 structure-preserving mapping see homomorphism subclasses of plausible models 149, 379 substitution approach 39, 208 Sugeno integral 212, 381, 383 summarization see linguistic data summarization summarization of image sequences 352 summarizer 350, 373 supervaluationism 16, 65, 381 support of a fuzzy set 135 syllogism 2, 3, 99 symmetric sum 151, 152 symmetric summarizer 373 symmetrical difference of the arguments 121 symmetrization of asymmetrical relations 373 symmetry 118 symmetry of a quantifier 63, 117 systematicity 51, 384 t-norm 113, 137 T-quantifier 136, 137 target operations see operational form of a quantifier temporal adverb 5 textual documents 349
TGQ
see theory of generalized quantifiers theoretical skeleton for fuzzy quantificaton see framework for fuzzy quantification theory of descriptions 3, 56, 62, 79 theory of generalized quantifiers VI, 57, 76, 79, 86, 320, 376 theory of vagueness 13, 64 three-valued arguments 206 three-valued cut 179, 187, 189, 237, 381 three-valued logic 17, 79, 156 three-valued model of vagueness 17, 65 three-valued propositional function 184 three-valued quantifier 206 three-valued subset 184 topological quantifier 11, 12 traditional framework for fuzzy quantification 30, 376, 392 transformations between models of fuzzy quantification 149, 150 transposition 116 trapezoidal proportional quantifier 352 truth status statistics 310 truth value gap 17 truth value glut 17 truthfulness 27 two-place quantifier see binary quantifier two-valued generalized quantifier see generalized quantifier two-valued L-quantifier 358, 359 two-valued quantifier see generalized quantifier two-valued set see crisp set Type I quantification 22 Type II quantification 22, 157 Type III quantification 22, 71, 376 Type IV quantification 22, 24, 32, 37, 70, 376, 383 typed λ-calculus 125
unary quantifier 9, 31, 62, 87, 141
undefined interpretation 58, 62, 79 underlying semi-fuzzy L-quantifier 361 underlying semi-fuzzy quantifier 75, 86, 112, 393 underspecificity 15 uniform representation of quantifiers 63, 71, 383, 384 unimodal quantifier 166, 372, 380 union of arguments 63, 96, 108 unions of arguments for L-QFMs 364 unions of the arguments of (semi-)fuzzy L-quantifiers 364 uniqueness of compatible extension principle 130 universal model 384 universal quantification 1, 56, 68, 74, 136 universal quantification in a DFS 138 universal specification medium 51, 80 universe of discourse 5, 30, 36 universe of quantification see base set unrestricted proportional quantification 329 unrestricted quantification see unrestricted use of a quantifier unrestricted use of a quantifier 8, 31, 35, 60 upper and lower bounds on three-valued cuts 221 upper bound mapping Q,X1 ,...,Xn 223 upward monotonic 98 user interfaces 28 vacuous argument see cylindrical extension vagueness 13, 78 value judgment det 65 verbal phrase 58, 59, 98, 332 weak conservativity 170 weak preservation of convexity 169, 208, 381 weighted aggregation 349, 350 weighted quantification see importance qualification weighted querying 347
List of Symbols
∀ pp. 3, 60 ∀T p. 24 ∀ p. 67 ∃ pp. 3, 60 ∃I p. 24 ∃ p. 67 ∃I p. 77 ¬ crisp negation and complementation ¬ p. 18, standard negation and complementation ¬ pp. 92, 113, fuzzy negation and complement ¬ Q p. 93 p. 93 ¬ Q ∧ crisp conjunction ∧ p. 113, fuzzy standard conjunction pp. 92, 113, fuzzy conjunction ∧ p. 154, Q ∧ Q ∧ ∧ p. 154, Q Q ∧ m p. 113 ∧ Q p. 137 ∧ ∨ crisp disjunction
∨ ∨ ∨ ∨ Q ∨
p. 114, fuzzy standard disjunction pp. 92, 114, fuzzy disjunction Q p. 154, Q ∨ Q p. 154, Q ∨ p. 139 → implication ↔ biimplication/equivalence ∩ crisp intersection ∩ p. 18, standard fuzzy intersection p. 92, fuzzy intersection ∩ ∪ crisp union ∪ p. 18, standard fuzzy union p. 92, fuzzy union ∪ \ set difference A \ B = A ∩ ¬B p. 122, symmetric difference p. 122, QA p. 122, fuzzy symmetric difference A p. 122, Q [≥k] pp. 6, 279 [≥k; ≤u] pp. 7, 286 [>k] p. 7 [=k] p. 7 [≤k] p. 7 ≤(U, V ) p. 129, Q ≤(U, V ) Q
≤(U, V ) Q ≤(U, V ) p. 129, Q ∼ = isomorphic ∼ p. 294, (Y1 , . . . , Yn ) ∼ (Y1 , . . . , Yn ) ∼∀ p. 168 ∼∃ p. 169 ∼(X1 ,...,Xn ) p. 135, Q ∼(X1 ,...,Xn ) Q ≈ p. 299, (Y1 , . . . , Yn ) ≈ (Y1 , . . . , Yn ) ⊆ subsethood ⊆ p. 98, X ⊆ X p. 264, A A p. 271, f f p. 240, S S p. 244, s s p. 249, S S c p. 151, F c F c p. 163, Q c Q c Q c p. 163, Q c p. 249, S c S c p. 163, X c X c p. 151, x c y c p. 254, ω c ω p. 245, s s p. 123, QA p. 123, QA p. 172, Q A @Q @ 369, Q p. 369, Q @ Q @ ∅ empty set, empty tuple {∗} p. 115 ◦ functional composition × cartesian product n × fi p. 104 i=1
! pp. 131, 140 j p. 289 Q,X1 ,...,Xn p. 223
S p. 242 ⊥j p. 289 ⊥Q,X1 ,...,Xn p. 223 ⊥S p. 242 | restriction of f to A, f |A |•| p. 58 min p. 284 |•|γ max p. 284 |•|γ •iv p. 284 ♦ p. 204 (•) (•)γ p. 190
• p. 142, Q
• p. 142, Q 2 p. 18 A p. 264 A p. 264 β p. 362 Γ (X1 , . . . , Xn ) p. 288 γ j p. 289 p. 260 δX,Y ζi p. 366 ζP p. 306 η p. 91 η p. 91 Θ p. 273 Θ(X, Y ) p. 273 ϑ p. 362 θ(x, y) p. 272 (e) p. 142 ın,E i κi p. 366 Λ p. 303 µ∃ p. 50 µant Q p. 404 µ[j] (X) pp. 31, 45, 212 µQ p. 33 µQ,E p. 400 µX p. 18
νX p. 185 ΞY1 ,...,Yn (X1 , . . . , Xn ) p. 260 V,W p. 208 Ξ ξ p. 224 ξA p. 227 ξowa p. 226 ξS p. 227 Π p. 39 π∗ p. 115 πe p. 88 π ∗ p. 115 π e p. 89 'd p. 348 sum 67 Σ-Count pp. 42, 395 Σ-Count(X2 /X1 ) p. 395 σP∗ p. 305 σQ p. 150 σX p. 150 τ p. 42 τi p. 116 τi, j p. 116 Υ p. 213 Φ1 ,...,n p. 296 ϕ p. 269 χA pp. 18, 88 Ψ [(Fj )j∈J ] p. 151 ψ p. 263 Ω p. 240 Ω ∗ p. 252 ω p. 243 ω c ω p. 254 ωK p. 248 ωM p. 247 ωP p. 247 ωZ p. 248
ω ∗ p. 253 A A p. 264 the set of all mappings AB f : B −→ A Ad p. 348 AQ,X1 ,...,Xn p. 262 (n) AQ,X1 ,...,Xn p. 262 A(X1 , . . . , Xn ) p. 288 AΓ p. 311 A(Q) p. 181 A p. 262 about k p. 320 abs manyρ, τ p. 72 all p. 59 all except about k p. 320 all except at most k p. 320 all except exactly k p. 320 all except k p. 59 almost all p. 72 almost always p. 353 always p. 353 as many as possible p. 73 at least k p. 59 at least k more than p. 333 at least once p. 353 at least p percent p. 61 at least twice as many p. 333 at most k p. 59 Bj p. 292 B p. 194 B p. 197 p. 199 B B ∗ p. 200 p. 201 B∗ B CX p. 201 B S p. 200 B U p. 199
B p. 192 B+ p. 192 B− p. 192 1
En p. 142 E p. 102 eq p. 280 exactly k p. 59 exactly k more than p. 333 exactly p percent p. 61 exactly twice as many p. 333 F p. 14 F pp. 74, 360 F c F p. 151 F pp. 103, 365 F−1 p. 141 F pp. 91, 363 F p. 115
p. 192 B2 between p and q percent p. 61 between r and s p. 59 Cj p. 289 C(Q) p. 121 C(Q) p. 182 c∅ p. 103 ca p. 193 c1 ,...,n p. 296 c(P ) p. 304 c(p) p. 302 FA p. 227 [card ≥] p. 280 Fowa p. 226 [card >] p. 280 Fglb p. 153 [card =] p. 280 FK p. 248 (Ch) X dQ p. 226 FL p. 367 Ch(Q) p. 394 FM p. 247 core(X) p. 135 p. 154 F Count p. 35 lub FP p. 247 cxt(X) p. 135 FR p. 363 D(A) p. 262 FS p. 227 Dt p. 304 FZ p. 248 D(p) p. 302 Fξ p. 224 DX1 ,...,Xn p. 261 (n) F σ p. 150 DX1 ,...,Xn p. 261 Fϕ p. 269 D p. 261 Fψ p. 263 d(f, g) p. 203 FΩ p. 240 d(Q, Q ) p. 163 Q ) p. 163 Fω p. 243 d(Q, f f p. 271 d((, ⊥), ( , ⊥ )) p. 230 p. f −1 p. 102 d((X1 , . . . , Xn ), (X1 , . . . , Xn )) f p. 101 162 ˆ fˆ p. 103 d (f, g) p. 203 f˘ p. 184 d ((, ⊥), ( , ⊥ )) p. 231 f p. 91 E pp. 5, 57
f♦ f f f∗ f0∗ f1∗ f∗0
p. 204 p. 195 p. 195 p. 115 p. 195 p. 195 p. 195
1
f∗2 p. 195 f∗1↑ p. 195 fA p. 268 f |A restriction of f to A fQ,X1 ,...,Xn p. 268 (n) fQ,X1 ,...,Xn p. 268 FE-count pp. 49, 412 (1) FEabs p. 412 FG-count pp. 47, 406 FG-count(X2 /X1 ) p. 407 (1) FGabs p. 406 (2) FGabs p. 407 p. 407 FG(1) prp p. 408 FG(2) prp gα p. 400 H + p. 311 H − p. 311 HA p. 310 HS p. 48 H p. 197 h p. 78 HistX p. 310 I p. 18 I(Q) p. 121 id2 p. 112 idI p. 112 Im f image of f inf infimum J ∗ p. 291 J p. 293
j ∗ p. 291 j p. 293 jpk p. 283 K p. 238 L p. 243 L(Q, V, W ) p. 208 "(j) p. 309 p. 302 "V less than k p. 59 M I p. 24 M T p. 24 M p. 192 M∗ p. 200 M∗ p. 201 MB p. 194 MCX p. 201 MS p. 200 MU p. 199 p. 152 m 12 many p. 371 many more than p. 333 max maximum p. 152 med 1 2
min minimum more than p. 62 more than k p. 59 more than p percent p. 61 most pp. 61, 356 N p. 39 N cardinals 0, 1, 2, . . . N(Q) p. 121 NV(A) p. 265 no p. 59 O p. 168 often p. 353 only p. 117 orness p. 400
OWA(1) p. 400 prp (2) OWAprp p. 401 P p. 304 P∗ p. 304 P(E) p. 11 P˘ (E) p. 185 P(E) p. 18 pol(p) p. 305 Prop p. 35 Q pp. 57, 71, 359 Q p. 369 Q@ Q c Q p. 163 Q ≤(U, V ) Q p. 129 Q ∼(X1 ,...,Xn ) Q p. 135 Q p. 154 Q∧ Q p. 154 Q∨ Q+ p. 167 Q− p. 167 Q¬ p. 93 pp. 94, 364 Q Q∪ pp. 96, 364 Q∪k p. 143 Q∩ p. 123 QA p. 122 QA p. 123 Q A p. 172 Qf p. 91 L Q p. 208 V, W U QV, W p. 208 Qβ p. 116 Qγ p. 190 Qτi p. 117
Q p. 142 pp. 31, 66, 360 Q @Q p. 369 Q c Q p. 163 Q ≤(U, V ) Q p. 129 Q
∧ p. 154 Q Q p. 154 Q Q∨ ¬ p. 93 Q pp. 94, 364 Q Q∪ pp. 96, 364 ∪ k p. 143 Q ∩ p. 123 Q A p. 122 Q QA p. 123 z p. 274 Q Qβ p. 116
Q p. 142 ˘ p. 186 Q Q p. 357 Q the set of rational numbers q(j) p. 282 q max (", u) p. 283 q min (", u) p. 283 R the set of real numbers R+ = [0, ∞) the set of non-negative reals Rj p. 300 RQ,X1 ,...,Xn p. 153 Rγ (X1 , . . . , Xn ) p. 299 RγΦ1 ,...,ΦK (X1 , . . . , Xn ) p. 299 R(f ) p. 182 R(Q) p. 121 r+ p. 261 r+ (A) p. 262 r+ (D) p. 261 r+ (f ) p. 269 [rate ≥ r] p. 61 [rate > r] p. 61 [rate ≤ r] p. 61 [rate < r] p. 61 [rate = r] p. 61 [r1 ≤ rate ≤ r2 ] p. 61
rel manyρ, τ p. 72 RES p. 183 S p. 33 (S) X dQ p. 212 S S p. 240 S c S p. 249 S S p. 249 S p. 254 S p. 240 S p. 240 S ‡ p. 241 Sj p. 293 S(p) p. 302 S(P ) p. 304 SQ,X1 ,...,Xn p. 238 s s p. 244 s s p. 245 s(z) p. 239 sQ,X1 ,...,Xn p. 239 s‡ p. 246 p. 246 s,0 ∗ ,∗ p. 246 s1 p. 246 s⊥,0 ∗ ⊥,∗ p. 246 s1 1 ≤2
s∗
1 ≥ s∗ 2
p. 246
p. 246 same number of p. 333 (1) SCabs p. 395 (2) SCabs p. 395 p. 395 SC(1) prp p. 395 SC(2) prp some p. 59 sometimes p. 353 spp(X) p. 135 sup supremum Tγ (X) p. 187 T ∗ p. 295
T (X) p. 185 T (x) p. 184 Tγ (X) p. 188 T p. 223 tγ (x) p. 187 thesg p. 62 thepl p. 62 trpa,b,c p. 352 type(p) p. 305 U pp. 75, 361 Um p. 316 U (Q, V, W ) p. 208 u(j) p. 309 p. 302 uV V p. 302 v0 p. 321 X p. 18 X p. 269 X ⊆ X p. 98 X c X p. 163 X≥α p. 180 X>α p. 180 X [p] p. 302 X v p. 301 X min p. 185 X max p. 185 Xγmin p. 188 Xγmax p. 188 min p. 284 |X|γ max p. 284 |X|γ x c y p. 151 xor exclusive or/antivalence Yj p. 282 Y () p. 296 (Y1 , . . . , Yn )∗ p. 294 (Y1 , . . . , Yn ) ∼ (Y1 , . . . , Yn ) p. 294 (Y1 , . . . , Yn ) ≈ (Y1 , . . . , Yn ) p. 299
Z p. 35 Z the set of integers ZV p. 302 Z(1)abs pp. 35, 392 Z(2)abs pp. 35, 392 Z(1)prp pp. 35, 392 Z(2)prp pp. 35, 392 z+ p. 262 z+(A) p. 262 z+(f) p. 269
List of Figures
1.1 A possible membership function for “almost all” . . . 34
1.2 Cloudiness in Southern Germany (guiding example) . . . 42
1.3 “About ten percent of Southern Germany are cloudy” (Sigma-count) . . . 44
1.4 “At least 60 percent of Southern Germany are cloudy” (Sigma-count) . . . 44
1.5 “At least 60 percent of Southern Germany are cloudy” (OWA approach) . . . 46
1.6 “At least 5 percent of Southern Germany are cloudy” (FG-count) . . . 49
1.7 “The image region X is nonempty” (FE-count) . . . 50
3.1 The Aristotelian square of “all” . . . 95
5.1 Plot of the fuzzy median . . . 152
7.1 α-cut as a function of α and µX(v) . . . 181
7.2 Three-valued cut as a function of γ and µX(v) . . . 188
11.1 Ranking for “As much as possible of Southern Germany is cloudy” . . . 350
11.2 “Q-times cloudy in the last days”: an example of image summarization . . . 353
13.1 Coverage of quantificational phenomena: DFS vs. existing approaches . . . 387
A.1 “About 10 percent of Southern Germany are cloudy” (Sigma-Count) . . . 397
A.2 “At least 60 percent of Southern Germany are cloudy” (Sigma-Count) . . . 399
A.3 “At least 60 percent of Southern Germany are cloudy” (OWA approach) . . . 402
A.4 “More than 60 percent of Southern Germany are cloudy” (OWA approach) . . . 403
A.5 “At least 5 percent of Southern Germany are cloudy” (FG-Count) . . . 411
A.6 “Less than thirty percent of Southern Germany are cloudy” (FG-count) . . . 412
A.7 “The image region X is nonempty” (FE-count) . . . 414
List of Tables
1.1 Examples of NL quantification in various areas of everyday life . . . 2
1.2 Quantifying propositions and their translations into predicate logic . . . 6
1.3 The fields of fuzzy quantification . . . 22
1.4 Rescher’s quantifiers . . . 24
1.5 “Less than 60 percent of Southern Germany are cloudy” (OWA approach) . . . 47
7.1 Computation of L(Q, V, W) and U(Q, V, W) . . . 209
7.2 Computation of Q^L_{V,W}(X) and Q^U_{V,W}(X) . . . 210
11.1 Algorithm for unary quantifiers in Fowa (floating-point arithmetics) . . . 314
11.2 Algorithm for unary quantifiers in M (floating-point arithmetics) . . . 314
11.3 Algorithm for unary quantifiers in MCX (floating-point arithmetics) . . . 315
11.4 Algorithm for unary quantifiers in Fowa (integer arithmetics) . . . 317
11.5 Algorithm for unary quantifiers in M (integer arithmetics) . . . 318
11.6 Algorithm for unary quantifiers in MCX (integer arithmetics) . . . 319
11.7 Examples of quantifiers of exception . . . 320
11.8 Algorithm for proportional quantifiers in Fowa (floating-point arithmetics) . . . 325
11.9 Algorithm for proportional quantifiers in M (floating-point arithmetics) . . . 326
11.10 Algorithm for proportional quantifiers in MCX (floating-point arithmetics) . . . 327
11.11 Algorithm for proportional quantifiers in Fowa (integer arithmetics) . . . 328
11.12 Algorithm for proportional quantifiers in M (integer arithmetics) . . . 329
11.13 Algorithm for proportional quantifiers in MCX (integer arithmetics) . . . 330
11.14 Algorithm for cardinal comparatives in Fowa (floating-point arithmetics) . . . 337
11.15 Algorithm for cardinal comparatives in M (floating-point arithmetics) . . . 338
11.16 Algorithm for cardinal comparatives in MCX (floating-point arithmetics) . . . 339
11.17 Algorithm for cardinal comparatives in Fowa (integer arithmetics) . . . 340
11.18 Algorithm for cardinal comparatives in M (integer arithmetics) . . . 341
11.19 Algorithm for cardinal comparatives in MCX (integer arithmetics) . . . 342
11.20 Complexity of the algorithms for Fowa, M and MCX: Main loop . . . 348
A.1 “Less than 60 percent of Southern Germany are cloudy” (OWA approach) . . . 406