Foreword Logic studies the way we draw conclusions and express ourselves, and deals with how to formalise it. In Logic, ...
20 downloads
562 Views
3MB Size
Report
This content was uploaded by our users and we assume good faith they have the permission to share this book. If you own the copyright to this book and it is wrongfully on our website, we offer a simple DMCA procedure to remove your content from our site. Start by pressing the button below!
Report copyright / DMCA form
Foreword Logic studies the way we draw conclusions and express ourselves, and deals with how to formalise it. In Logic, propositions are elementary statement and conclusion units; they are analysed both as to the form, namely their syntax, and the interpretation, namely their semantics. The relation between the syntax and the semantics is also examined. The first steps in Logic are credited to the Ionian and Elean philosophers and to the sophists. It is however Aristotle who is considered as the founder of Logic as a science.
Somehow, interest in Logic declined with the Roman prevalence
in the Mediterranean Sea. And in the Middle Ages, as most of the work of the ancient philosophers
except for Plato and Aristotle
was already lost or had
disappeared, Aristotle's syllogistics became the priviledge of only a few monks. Logic regained its interest as non-Euclidian geometries were discovered and as the need for a theoretical foundation of analysis became evident.
As soon as
1879, Frege presented the first formal language for Mathematics and Logic. But it was the paradoxes of set theory, and the many conversations and disputes among mathematicians of that period on that subject, that gave Logic its final impulsion. In 1895 and 1897, Cantor published the first and the second part of his study on cardinal and ordinal numbers. In this study, which constitutes the first foundation of set theory, each collection of objects is regarded as a separate entity called a set. Buralli-Forti discovered a paradox in Cantor's theory as soon as 1897. And in 1899, Cantor himself published a paradox concerning his theory on cardinal numbers. Russell published in 1902 a simpler form of that same paradox, known in the history of mathematics as Russell's paradox: In Cantor's theory, each set is defined by the characteristic property of its elements. Let A be the set of all sets X defined by property X r X,
A = {XIXr But then: AEA
if and only if A C A
which is obviously an antinomy. vii
viii
Foreword The paradoxes of set theory generated a great deal of doubt and concern among
mathematicians of that period about the well-founding of mathematics. They also made clear that only rigorous formalising of mathematical concepts and method could lead to "sound" theories without antinomies. And so, under Hilbert's influence, Remark 1.4.1 (2), the axiomatic method's development began and Logic started taking its actual form. With the use and the development of computers in the beginning of the 1950's, it soon became clear that computers could be used not only for arithmetical computation but also for symbolic computation. Hence, the first arithmetical computation programs, and the first programs created to answer elementary questions and prove simple theorems, were written simultaneously. The basic steps towards a general method based on Logic, which would allow statement procedures as well as manipulation of symbols, were first accomplished in 1965 by Robinson with the use of the resolution method, and later by Kowalski and Colmerauer who made use of Logic directly as a Logic Programming language. This book unfolds Logic's basic elements; it analyses the relation of and the transition from Logic to Logic Programming. The first chapter describes Propositional Logic (PL). It examines, in the first place, the propositional language, as well as the syntax and the semantics of PL formulae.
Three proof methods are presented afterwards: the axiomatic proof
method, mainly for historical reasons, the Beth tableaux method, with proofs using mathematical analysis of propositions, and the resolution method. The last method uses proofs in a similar but much more effective formalism that can easily be materialised in a program. These proof methods are validated by corresponding soundness and completeness theorems. The second chapter examines Predicate Logic (PrL). In Predicate Logic, structures of propositions are more extensively analysed, for the predicate language is much richer than the propositional language. The predicate language contains indeed quantifiers and constants, as well as predicates and functions. PrL semantics are hence much more complex. Herbrand interpretations are of great importance to these semantics, because they make satisfiability tests for PrL propositions possible by using PL methods. The axiomatic proof method, the Beth tableaux method as well as the resolution method, which by means of the unification algorithm constitutes the basis of Logic Programming, are examined afterwards. Corresponding soundness and completeness theorems confirm the validity of these proof methods.
Foreword
ix
The third chapter deals with Logic Programming as well as with the programming language PROLOG, and it also analyses PROLOG's derivation mechanism. Logic supplies Logic Programming and PROLOG with both a very rich language and the theoretical foundation which guarantees correct results. Each chapter includes solved as well as unsolved exercises provided to help the reader assimilate the corresponding topics. The solved exercises demonstrate how to work methodically, whereas the unsolved exercises aim to stimulate the reader's personal initiative. The contents of this book are self-contained, in the sense that only elementary knowlegde of Analysis is required to study them. This book can therefore be used by students in every academic year, as a simple reading, or in the context of a course. It can also be used by those who utilize Logic Programming without having any particular theoretical background knowledge of Logic, and by those simply interested in Logic and its applications in Logic Programming. All topics developed in this book are constantly analysed within the perspective of a transition from Logic to Logic Programming. Therefore, certain proofs, mainly those of completeness of the axiomatic method proofs, are omitted. References to the international bibliography direct to more complete developments of concepts and proofs that are only indirectly related to the perspective of the book, and therefore not extensively examined. The material in this book is based on the book published by the first author in Greek in 1992 and which, in turn, drew on the authors' seminars on the Principles of Logic Programming, as well as on teaching at the universities of Cornell and Rochester in the United States, and at the University of Patras, in Greece. At this point, the contributions of Aneta Sinachopoulos-Svarna in sections 1.1, 1.8, 2.3, 2.11, 3.2, 3.3, 3.6, 3.8, 3.9 and in the material's general configuration and presentation, and of George Potamias in units 3.4, 3.5 and 3.7 must be stressed. Costas Kontogiannis, Spiros Papadakis and Petros Petropoulos, postgraduate students of the university of Patras, also contributed to the material's preparation. All programs included in the text and in the exercices have been run by George Potamias in TURBO-PROLOG, v.2.0. George Metakides Anil Nerode
I Propositional Logic &q oOx ~az[. Man is the measure of all things; of the things that are, that they are; of the things that are not, that they are not. Protagoras
1.1 Introduction Until 1850, mathematical proofs were ba~sed on experience and intuition and were generally accepted.
Leibniz was the first to outline, as early as 1666, tile
need of a formal language, called "universal characteristic", which would be created and used in mathematical formulations and methodology. However his ideas were ahead of their time. The initial development of a formalism for mathematics is credited to Boole, two hundred years later, with his studies on "The Mathematical Analysis of Logic" published in 1847, and "The Laws of Thought" published in 1854. De Morgan also contributed to the efforts for tile creation of a formalism with publications of his own in 1847 and 1864, as well as Peirce in 1867 and Frege in 1879, 1893, and 1903. The publication of Russell and Whitehead's threevolumed "Principia Mathematica" (Mathematical Principles) marked the acme of these efforts, for it presented a logical system within which all mathematics was formalized and all corresponding theorems were derived by means of logical axioms and rules. Logic flourished as an independent
science after 1920, with Lukasicwicz
(1878-1956), Lewis (1883-1964), Ghdel (1906-1978), Warski (1901-1983), Church (1903-1995) and Kleene (1909-1994) as its main representatives.
2
Propositional Logic
Starting in the nineteen fifties, computer technology dealt with how to utilise a computer for symbolic computation. A solution to that problem was provided by Functional Programming, created by McCarthy among others, and used mainly in the United States. Another solution was provided by Logic Programming, created mainly in Europe by Colmerauer and Kowalski among others. Logic Programming gave Logic a new impulse: it solved many problems concerning forms of logical propositions which should allow the computer to execute syllogisms and judgement procedures. Logic Programming also brought answers to questions concerning the nature and the mechanisms of logical procedures executed by the computer. We thus need to examine, in the first place, how mathematical and logical propositions are formalised. Logical connectives are the basic elements of such a formalisation, as we can see in the following examples.
(1)
Conjunction is formalized by A. Let us assume that we know two properties of a certain x: A:
x>3
B:
x<10
Then we know about x that it is greater than 3 and that it is smaller than 10. In other words, we know the proposition: AAB
where the logical connective A corresponds to the grammatical conjunction "and". Proposition A A B thus states that "x > 3 and x < 10" which means "3 < x < 10".
(2)
Negation is formalised by means of the symbol -~. For example, consider the proposition C :
50 is divisible by 7
~ C denotes that "50 is not divisible by 7"
(3)
Disjunction is denoted by tile symbol V. If D and E are the propositions D :
60 is a multiple of 6
E :
60 is a multiple of 5
then the proposition DVE
Introduction
3
states t h a t "60 is a multiple of 6 or 60 is a multiple of 5". The symbol V is not exclusive, since 60 is a multiple of b o t h 6 and 5, as well as a multiple of 20 which is not mentioned in propositions D and E. (4)
The implication " i f . . . t h e n . . .
" is denoted in Logic by " ~ " .
If F and G
are the propositions F
the number a is a multiple of 10
:
the number a is a multiple of 5
G: then the proposition
F --+ G
states t h a t "if a number a is a multiple of 10, t h a n it is a multiple of 5". (5)
"If...
and only i f . . .
" is denoted by the equivalence symbol " ~ " .
For
example, if H and I are the propositions H
:
16 is a multiple of 2
I
:
16 is an even n u m b e r
then formally we can write: H~I
Logical connectives do not all have the same properties.
In tile example of
propositions A and B, writing A V B does not differ conceptually from writing B V A. Thus intuitively, the logical connective V seems to be commutative, and tile same holds for A. However the question whether the connective + is c o m m u t a t i v e cannot be answered positively: the proposition G ~
F , "if a is a multiple of 5,
then a is multiple of 10", does not seem "right", since 15 is a multiple of 5 but no multiple of 10. Therefore we need to study the properties of logical connectives. There are also several kinds of propositions. For instance, propositions such as: "6 is a multiple of 6" "7 is a prime number" "a rational number is 0 or r 0" which are "good", "correct" propositions,
4
Propositional Logic
or propositions such as "50 is an even number" "50 is divisible by 7" "if x = 3, then x = 5" which are "bad", "erroneous" propositions, or propositions such as "It's raining" "I'm hungry" "x is equal to its absolute value" which are sometimes "correct" and sometimes "erroneous". Intuitively, each proposition has a certain interpretation: it can express t r u t h or falsehood, t h a t is to say, it m a y be true or false. But what makes a proposition true? W h a t is the relation between the interpretation of a c o m p o u n d proposition and the interpretations of the propositions which constitute it? And what role do logical connectives play in the interpretations? We need to study interpretations of propositions and determine the role of logical connectives in these interpretations. O u r own logic allows us to make decisions and draw conclusions. For instance, we say "if x > 3 then x > 0", or formally: (x > 3) ~ (x > O) Let us assume t h a t x > 3. Then x > 0. Which rule allows us to conclude "x > 0" when the a s s u m p t i o n "x > 3" is valid? And does this rule have a general validity; in other words, if M --4 N is valid and M is valid, is proposition N always valid? Here is another example. We know t h a t in order to go to the movies we need m o n e y for our ticket. movies -~ money for the ticket
(1)
Let us assume t h a t we do not have enough money for the ticket: -~ (money for the ticket)
(2)
The Language of Propositional Logic
5
Are we able to go to the movies? W h a t is the impact of negation? W h a t form will (1) take with the negation? (movies) --+ ~ (money for the ticket) ? Or
~ (movies --+ money for the ticket) ?
Or
-~ (money for the ticket) --+ ~ (movies) ?
And what comes as a valid conclusion when (1) and (2) are valid? We therefore need to examine the syntax of propositions, and the rules that determine when and how we can deduce valid conclusions from premises from available data.
Propositional Logic, as well as Predicate Logic, which is more
complex, deals with these topics. In the Propositional Logic (PL) chapter, we will examine PL propositions with respect to both their syntax and semantics. We will also study methods which deduce conclusions from premises and we will determine the adequate forms of PL propositions for knowledge representation, decision procedures and automatic
theorem proving in Logic Programming.
1.2 The Language of Propositional Logic Alphabet, Syntax and Semantics The language of Propositional Logic (PL) is a formal language. By means of this language, we formalize that specific part of everyday speech which is necessary to the formulation of logical and mathematical concepts. Having formalized propositions with the use of that language, we next examine them according to their structure and validity. For every formal language, we have: (a) an a l p h a b e t containing all the symbols of the language (b) a s y n t a x by means of which the symbols are utilised and the propositions are formed (c) s e m a n t i c s by means of which the language interprets and allocates meanings to the alphabet symbols.
6
Propositional Logic The alphabet of the language of Propositional Logic consists of: (i) Propositional symbols" A, A1, A2, . . . ,
B, B1, Be, ...
(ii) Logical connectives" V, A , - - , - - + , ++ (iii) Commas and parentheses: " , "
and " ( "
," ) "
Logical connectives intuitively correspond to conjunctions used in everyday speech. We thus have the following correspondence: V ,
disjunction 9
or
A ,
conjunction"
and
--+,
implication 9
if... then...
,
equivalence "
. . . if and only i f . . .
negation
not
-7,
9
Any sequence of symbols belonging to the alphabet of a language is called an expression. For instance, A V V B , A V B , ++ A are expressions of the Propositional Logic's language. Certain w e l l - f o r m e d expressions are considered by the syntax a.s propositions. The following definition looks into the concept of propositions and, furthermore, describes the syntax laws of the
PL
language.
D e f i n i t i o n 1.2.1: Inductive definition of propositions: (i) Tile propositional symbols are propositions, called a t o m i c p r o p o s i t i o n s or a t o m s . (ii) If a, r are propositions, then the expressions (a A r ) , (a V r ) , (a --+ r ) , (a ++ r ) , (--a) are also propositions, called c o m p o u n d p r o p o s i t i o n s . (iii) Tile expressions constructed according to (i) and (ii) are the only expressions of the language which are propositions,
m 1.2.1
The expressions VA V B and ~ A are thus not propositions since they were not constructed according to ( i ) a n d (ii), whereas (A V B) and ((~A)V (B ~ (~C)) are indeed propositions. The above definition uses the method of induction to define compound propositions; (i) is the first s t e p and
(ii)
is tile i n d u c t i v e s t e p . The induction is on
the logical length, the structure of the propositions. By the "logical length" of a proposition A, we mean a natural number denoting the number, the kind and
The Language of Propositional Logic
7
the order of appearance of the logical connectives used in the construction of A, starting from the atomic propositions which constitute A. We do know t h a t the p r i n c i p l e of m a t h e m a t i c a l
i n d u c t i o n holds for tile natural numbers:
D e f i n i t i o n 1.2.2: If
P(a) denotes a property of a natural number a, and if we have:
(1)
P(0),
and
(2)
for any natural number n, if
P(n) holds true then we can prove that
P(n + 1) holds true, then we can conclude that for all the natural numbers n,
P(n) holds,
m 1.2.2
The well-ordering of the natural numbers (which means t h a t every non-empty subset of the natural numbers has a least element), together with the principle of induction, leads to a second form of induction, called c o m p l e t e or c o u r s e o f values induction. D e f i n i t i o n 1.2.3: Complete Induction: If
P(a) is a property of a natural number a, and if we have
(1)
P(0),
and
(2)
For all tile natural numbers m and n, such t h a t
rn < n, we can derive
P(n) from P(m), then for all the natural numbers
n, P(n) holds true.
m 1.2.3
We used complete induction to give the inductive definition of propositions above (Definition 1.2.1), and will use it again to give the following definition. D e f i n i t i o n 1.2.4: General scheme of induction for propositions: Let P be any propositional property. proposition a has the property
P.
We shall write
P(a) to mean t h a t
If we prove that-
(a)
P(A) holds true for any a t o m A of the language
(b)
if
al,a2 are propositions and P((71) , P((72), then P((al A a 2 ) ) ,
P((o" 1 V 02)) , P((or 1 --+ 0"2)) , P((o" 1 ~ 0"2)) , P((-'nO'l) ) and
then we can conclude
P(a) for any proposition a of the language,
P((--'a2))
m 1.2.4
8
Propositional Logic
E x a m p l e 1.2.5: (i)
The property P: "the number of left parentheses is equal to the number of right parentheses" holds true for all propositions of PL. Indeed, according to the general scheme of induction, we have:
(a)
In the propositional symbols, the number of left and right parentheses is equal to 0.
(b)
If in the propositions or1 and a2, the number of left parentheses is equal to the number of right parentheses
let n and m be these
numbers respectively for al and if2 .... then in the propositions
(ffl A O'2) , (O"1 V if2), (O'1 ---+ 0r2), (O'1 ~ if2), (--70"1)and (~a2) the number of left parentheses is equal to the number of right parentheses, which is n + m + 1. (ii)
Tile expression E : (~(A A B) ~ C) is a proposition. The proof is based upon Definition 1.2.1.
(iii)
step 1 :
A A B is a proposition by Definition 1.2.1 (ii)
step 2 :
-~(A A B) is a proposition by Definition 1.2.1 (ii)
step 3 :
C is a proposition by Definition 1.2.1 (i)
step 4 :
(--(A A B) -+ C) is a proposition by Definition 1.2.1 (ii)
The expressions below are not propositions: --
:
the symbol '-~' is not an atom
A --+ A :
the symbol 'A' is not a proposition
(A A B :
there is a deficient use of par#ntheses Ig 1.2.5
E x a m p l e 1.2.6: The proposition F of our everyday speech is given: F :
"If it doesn't rain, I will go for a walk"
To formalize tile above proposition in the context of PL, we can use auxiliary propositional symbols: A :
"It's raining"
B :
"I'm going for a walk"
The Language of Propositional Logic
9
Then F becomes ((--A) -+ B), which is a proposition. If there is no risk of confusion whatsoever, the parentheses can be omitted: F:
-~A --+ B
II 1.2.6
Each atom which occurs in a well-formed formula D is considered a subforinula of D. For example, A and B are subformulae of F in Example 1.2.6. Moreover, each compound well-formed part of D is also a subformula of D. For example, if D: then -~A , C V ~ A
(A A ~ B ) --+ [(C V-~A) --+ B]
, A A - ~ B as well as ( C V - ~ A ) --+ B are subforrnulae of D.
Moreover, D itself, i.e., (A A-~B) --+ [(C v-~A) --+ B], is considered a subformula of D. The set subform(a) of all subformulae of the well-formed formula cr is given by the following inductive definition: D e f i n i t i o n 1.2.7:
(1) if a is an a t o m A, then subforrn(a) = {A} (2) if a has the form -~T, then subform(a) = subform(7)t2 {a} (3) if a has the form T o ~, where o C {V, A , ~ , e+}, then subform(a) Remark
9
= s u b f o r m ( T ) U s u b f o r m ( ~ ) t 2 {a}.
1.2.7
1.2.8: To avoid confusion by using connectives in formulae without
parentheses, we consider ~ to bind more strongly than all the other connectives, V and A to bind more strongly than ~
and -+, and ~
to bind more strongly
than ~ . Thus -~A ~ B V C ,
AAB
~C,
A ~ B ~C
read respectively (~A)--+ (B V C ) , Remark
(A A B) --+ C ,
A -+ (B ~ C)
I
1.2.8
1.2.9: For historical reasons we will consider the symbol "+--" to be a
logical connective, "B +-- A" meaning exactly the same as A -+ B.
m 1.2.9
10
Propositional Logic
Semantic Concepts in Propositional Logic
1.3
Valuations and truth valuations Propositions, according to Definition 1.2.1, are general and abstract syntactic objects. We wish to interpret these abstract objects by means of semantics. This means t h a t we are interested in determining the conditions under which a proposition is true or false.
We therefore create a structure, the domain of which is
{t, f } , with t, f denoting respectively tile t r u t h values " t r u t h " , and "falsehood", and we try to assign one of these values to each proposition. Tile method used in the allocation of the t r u t h values, as well as the necessary definitions, constitute the
PL
semantics.
D e f i n i t i o n 1.3.1: A v a l u a t i o n is any function"
F-Q:
~,{t,/}
where Q is the set of tile atoms of the language.
E
1.3.1
A valuation thus a~ssigns t r u t h values to the a t o m s of tile language. We now provide the set {t, f } of t r u t h values with internal algebraic operations, so as to transform it to an algebraic structure. which will be defined, i.e., ~ , v1 , IA , ~ logical connectives -~ , A , V , ~
and ~ .
Tile internal operators of t, f
and ~
, will correspond to the
The similarity of tile corresponding
symbols is not fortuitous, it outlines the correspondence. Consider, for instance, the proposition A V B. By assigning certain t r u t h values to A and B, wc interpret A V B using the interpretations of A and B. The operationbwhich does this is U, ms we will see more clearly in the next definition. D e f i n i t i o n 1.3.2: The internal operations
, I I , v1, -,'+ and ~
defined by the following tables.
U
t
f
n
t
f
f
t
t
t
t
t
f
t
f
t
f
f
f
f
of {t, f} are
Semantic Concepts in Propositional Logic
t
f
t
f
t
t
~
t
f
t
t
f
f
f
t
11
. _
In the tables defining R , U , ~
and ~
, the first column defines the first
c o m p o n e n t and the first row defines the second c o m p o n e n t of the corresponding operation,
m 1.3.2
The structure
({t, f } , ~ , U, R) with the operations ~ ,
u,
[q being defined by
these tables, is a two-valued Boolean Algebra. Boolean Algebras are of great i m p o r t a n c e to PL semantics [RaSi70, Curr63, Rasi74, Raut79] ms well as to the general studies of the theoretical foundation of c o m p u t e r science.
D e f i n i t i o n 1.3.3: Let S be the set of the propositions of our language. By t r u t h v a l u a t i o n or B o o l e a n v a l u a t i o n we m e a n any function:
V :S~-+ {t,f} such that, for every a, T C S: (a)
if a is an a t o m then V(a) e {t, f }
(b)
V(~)
(c)
v ( ~ v ~) = v ( ~ ) u v(~)
(d)
V(a A T) = V(a) n V(T)
= ~ V(~)
(e) V(~ ~ T) (f)
=
v(~).,~ v(~-)
V(a ++ T) = V(a) ~
V(T)
m 1.3.3
As we can see in the last definition, the values of the t r u t h valuations of the a t o m s of some proposition are interconnected with the algebraic operations ,,~ , LJ , V1 , ~
and ~-~, and provide the value of the t r u t h valuation of the
proposition under consideration [Smul68].
12
Propositional Logic A valuation assigns a truth value, t or f , to the atoms of the language.
A
truth valuation is the extension of a valuation to the set of the propositions of the language. As proved in the following theorem, each truth valuation extends uniquely to a valuation on the set of the language's propositions. T h e o r e m 1.3.4:
f or each valuation F, there is one and only one truth valuation
V such that V extends F. P r o o f : We will use induction on the length of the propositions. Let F be a valuation and Q the set of the atoms. We define inductively a truth valuation V in the following way: (a)
V(A) = F(A)
for every A C Q
Let a, ~ be two propositions. If V(a) and V(~) are already defined by (a), we impose: (b)
V(--a) =
,,~ V(a)
(c)
V(o- v ~)
(d)
v ( ~ A ~) = v(~) n v ( ~ )
(e)
v ( ~ ~ ~) = v ( ~ ) - ~ v ( ~ )
(f)
V ( a ~ ~) = V(a) ~
=
v(o-)
u v(~)
V(~)
V is obviously a truth valuation extending F. What is now left to prove is that if the t r u t h valuations V1 and V2 are extensions of F, then VI(~) = V2(~) for any proposition ~. The property P used for the induction is: P(~) (a)
:
VI(~) =
V2(~)
For every propositional symbol A, we have VI(A) = V2(A), since V1, V2 are extensions of F.
(b)
Let us suppose that V l ( o ) = V2(a) and V I ( ~ ) = Vz(~), then v,(-~o-)
=
,--, v, (o-) =
,.-, V~(o-)
=
v~(-~o-)
v~ (o- s ~) =
v~ (o-) n v~ ( f )
=
V2(o-)n v 2 ( f )
=
v2(o- A ,,,:,)
v~ (o- v ,,,:,) =
v, (o-) u v, (,,,:,) =
v2(o-)u v2(~)
=
v2(o- v ~).
Semantic Concepts in Propositional Logic
13
Treating similarly the other conditions of Definition 1.3.3, we prove t h a t I/1 and V2 have the same value for every proposition; they therefore coincide.
II 1.3.4
The next useful corollary is a direct consequence of T h e o r e m 1.3.4: C o r o l l a r y 1.3.5:
Let a be a proposition containing only the atoms A 1 , . . . , Ak.
If two truth valuations take the same values in the set { A 1 , . . . , Ak}, i.e., VI(A1) = V2(A1),
...
, VI(Ak)=
V2(Ak),
then V1 (a) = V2(a). Example
II 1.3.5
1.3.6: Calculation of a t r u t h valuation from a valuation:
Let S be the set of atomic propositions S = {A1, A2}, and F be a valuation such that: F(A1) = t
F(A2)= f By Theorem 1.3.4, the t r u t h valuation VF extending F which we are seeking is uniquely defined. Let us impose:
VF(AI) := F(A~) VF(A2) := F(A2) where by " := " is meant "equal by definition". We are now able to calculate the values of the t r u t h valuation VF for any set of propositions which contains only the a t o m s A1 and A2:
VF(A1A A2) = VF(A1) • VF(A2) = t rl f
= f
VF(A1V A2) = VF(AI) kl VF(A2) = tU f
= t
VF((A1V A2) ~ A2) = VF(A1V A2) "* VF(A2) = t ~ f
= f,
etc.. [] 1.3.6
We can classify the P L propositions according to the t r u t h values they are getting.
14
Propositional Logic
D e f i n i t i o n 1.3.7: A proposition a of
PL
is l o g i c a l l y t r u e , or a t a u t o l o g y , if f o r
every truth valuation V, V ( a ) = t holds. This is denoted by ~ a. We shall write
~= a to mean that a is not a tautology, i.e., that there exists a truth valuation V such that V(a) = / . A proposition a is s a t i s f i a b l e or v e r i f i a b l e if there exists a truth valuation V such that V ( a ) = t. A proposition a such that, f o r every truth valuation V, V ( a ) = f , is called l o g i c a l l y false, or n o t verifiable, or is said to be a c o n t r a d i c t i o n , Remark
m 1.3.7
1.3.8: Let V be a truth valuation, and Sv
-
{a 6 PL propositions I Y ( a ) - t}
be a set consisting of the propositions of our language satisfiable by V.
If for
every truth valuation V, the proposition ~ belongs to the corresponding set S v , then ~ is a tautology. Every S v set, with V being a truth valuation, constitutes a possible world [Fitt69, Chel80] of the set of
PL
propositions. Every possible world
is A r i s t o t e l i a n , meaning that for every proposition a, only one from a, -~a (:an be true in a certain possible world. ( Y ( a ) - t or V ( a ) - / ,
thus Y ( ~ a ) - t according
to Aristotle's principle of the E x c l u d e d M i d d l e , Remark 1.4.4.) Possible worlds are the basic concept of K r i p k e s e m a n t i c s . Kripke semantics is used in the studies of Modal Logic. Modal Logic contains special symbols besides the logical connectives
called modal operators, such as <) for instance,
that widen the expressive capacity of the language. Thus in a modal language, besides the expression A, there is also the expression <>A, which can be interpreted as "sometimes A is true", or "A will be valid in the future". Modal logic allows us to express the specific properties of the statement of a program which are related to the place, the ordering, of the statement within the program. For example, consider the following FORTRAN program which calculates n! f o r l < n < 1 0 . k = 1 do
I0
k=k* I0
write
i =
i ..... I0
i (*,,)k
end
Let A and B be the propositions of the modal logic A "- w r i t e
(,,*)k
and
B "- e n d
Semantic Concepts in Propositional Logic
15
When the program starts running, 0A and 0 B are valid, which means that the program will find n! at some point in the future and that at another future moment the program will end.
m 1.3.8
D e f i n i t i o n 1.3.9: Two propositions a and T, such that V(a) - V(T) for every truth valuation V, are said to be l o g i c a l l y e q u i v a l e n t . This is denoted by a - 7. m 1.3.9
Example
1.3.10: The propositions A V ~A and ((A ~
B) --+ A) --+ A are
tautologies. Proof." First, we will prove that A V--A is a tautology. Let Vi and 1/2 be two truth valuations such that" V~ ( A ) -
t
and
V2(A) - f
We notice that:
V~ (A V ~ A )
-
V~(A) U V ~ ( ~ A )
=tUf V2(A V - , A )
-
V~(A) u ( ~ V i ( A ) )
-
tu(~t)
-
V2(A) U ( ~ V 2 ( A ) )
-
fu(~f)
-t
-
V2(A) UV2(-,A)
=
I u t -
t
A random t r u t h valuation V on A agrees, either with Vi or with V2. Thus, based on Corollary 1.3.5, we have"
V(A v-~A)
-
V~(A V-~A)
-
t,
V(A V-.A)
-
V2(A V - . A )
-
t
or
So, the proposition is true for any t r u t h valuation V. Then it must be a tautology. We will now prove that ((A ~ B) --+ A) ~ A is a tautology. Here we have four different valuations Fi, F2, F3, F4 on Q -
{A, B}
Fi (A) = t
and
F1 (B) = t
F2(A) = t
and
F2(B) = f
F3(A) = f
and
F3(B) = t
F4(A) = f
and
F4(B) = f
m
=9
~
--.
~
e
~
~
q
~
~
q
.
~
"M
~ II
~
<
~
qo
q
~
~
.
.
~
-.
~
J
II
II
>
-~
>
II
$
~
<
.
$
~
II
~
~
J
II
j
II
J
i-1
=r'
<
II
II
II
;:~
e
--.
:~ ,~"
~,
~
~'~
~
o
N
II
~
II
~'~
II
~
9
~
~
~. ~o
~
~
~ --"
~
i~
e-,-
~
~n
>
j
III
~
,--.
'~
~]
..
e-,-
m
~
~"
<
-.
I~
0 ~
~
~.~ 0
~
9
~"
~~
~
II
4-
~
~.
$
~
~< ~
0
I~I~ ~
9
~
~
o ~
"~
9 ~
~~" ~
~ t~
rlh ~
~
~
~.~
~ ~
$
~
=" ~
,-,.
~ ~.;
~
"
C~
~
--
~
~"
~-~ 0
<
~"
0
0
o
O~
"u o
Truth Tables
17
1.4 Truth Tables In the preceding section, we gave the definition of the valuation of the atoms of the language and then, by extending the valuation from atoms to compound propositions, we defined truth valuations. W i t h this method, we wish to find out if a proposition is a tautology or a contradiction, or if it is satisfiable.
But the
more compound a proposition gets, the more complicated the m e t h o d turns out to be. To make things simpler, we gather all the possible valuations of a proposition's atoms in a table called the proposition's t r u t h table. And so, for the comt)ound propositions-~A, AVB,
AAB,
A~B
and
A~B,
we have the following
t r u t h tables:
A
-~A
A
B
t
f
t
t
f
t
t
AvB
A
B
t
t
t
t
f
t
t
f
f
t
t
f
t
f
f
f
f
f
f
A
B
Ae+B
t
t
t
t
f
f
f
f
f
t
t
f
t
f
f
t
f
f
t
A
B
t
t
A~B
AAB
Each row in these tables corresponds to a valuation and its unique extension to a t r u t h valuation. For example, the second row of the table A A B is t, f, f , according to which A takes value t, the first element of the row, and B takes t, the second element of the row. Based on Definition 1.3.2, as well ms on Definition 1.3.3, we know t h a t the corresponding t r u t h value of A A B is t n f , namely f . We thus find f , the third element of the row. From now on, we will use the above t r u t h tables as definitions of the values resulting from the use of logical connectives, without referring to valuations and t r u t h valuations.
18
Propositional Logic
R e m a r k 1.4.1: (1) The
PL
disjunction is inclusive; that is A V B can take value t also when both
A and B have value t, as opposed to the everyday speech disjunction which is often exclusive: for instance, we say "I will stay home or I will go to the movies", and by that we mean choosing one of the options and not doing both. (2) Tile proposition A -+ B takes value f only if A has value t and B and value f. Thus, if A i s / , A --+ B is t, whatever the value of B is. These properties of and consequently the part of Propositional Logic which is based on these same properties, have not always been accepted by all the logic schools. Back in 1910-1920, the prevailing concept about applied sciences in Europe was that of the Hilbert school (Hilbert 1862-1953) [Boye68, Heij67]: "Mathematics and Logic must be axiomatically founded, and the working methods within the limits o / t h e s e sciences have to be extremely formal." But in 1918, Brouwer (1881-1966) published a very severe criticism of
PL.
The alternative system recommended in his work was called i n t u i t i o n i s t i c logic.
This name finds its origin in the fact that Brouwer stated, based
on Kant's belief, that we perceive the natural numbers (and consequently the logic which characterises the sciences) only by means of our intuition [Dumm77, Brou75]. Intuitionistic logic never fully replaced the logic founded in Hilbert's concepts (see Remark 1.8.8), however, nowadays it has several applications in Computer Science. On the other hand,
PL
offers a very effective instrument (in the Aristotelian
sense) for studying and solving problems, and constitutes the foundation of today's technology and of the sciences generally.
9
1.4.1
Let us recapitulate what we have seen about the truth tables of a proposition: Based on the inductive definition o/propositions as well as on the definition o / t h e values o / t h e logical connectives, we can construct the truth table o / a n y proposition by assigning values to the atoms used in its formation.
T r u t h Tables
19
1.4.2: The t r u t h table of the proposition A A B -+ C is:
Example
AAB
A
B
C
t
t
t
t
t
f
f
t
f
f
f
t
t
f
f
t
f
f
f
f
t
f
f
f
f
f
AAB-+C
For the triplet of atoms (A, B, C), the triplet of t r u t h values (f, t, f) makes A A B -+ C true, whereas the triplet (t, t, f ) makes it false,
m 1.4.2
The s h o r t t , - u t h t a b l e of the proposition A A B -~ C is the t r u t h table remaining when the fourth auxiliary column is omitted. The short truth table of a proposition containing n atoms consists of 2 n rows and n + 1 columns. By means of the truth tables, we can determine whether a proposition is truc or false. By Corollary 1.3.5, if the t r u t h tables of two propositions coincide in the columns of their atom's values and in their last columns, then the two propositions are logically equivalent. Example
1.4.3:
(i) The proposition ((A -+ B) -+ A) ~ A is a tautology
A
B
A-+B
t
t
t
t
f
f
f
t
t
f
f
t
((A-+ B)-+ A)
((A -+ B)--+ A)--+ A
20
Propositional Logic
(ii) The proposition ( P -+ Q) A ( P A -~Q) is not verifiable
P
Q
~Q
P-~Q
t
t
y
t
t
f
t
f
y
t
f
t
f
f
t
t
P A --,Q
( P --+ Q) A ( P A ~Q)
I1 1.4.3
Remark
1.4.4: Based on Definition 1.3.7, we have:
(i)
A proposition is a tautology if and only if its negation is not satisfiable.
(ii)
A proposition is satisfiable if and only if its negation is not a tautology.
(iii)
A proposition which is a tautology is satisfiable, whereas a satisfiable proposition is not necessarily a tautology.
(iv)
There are certain basic tautologies which are often used:
(1) (2) (3) (4) (5) (6) (7) (S)
--(A A B) ++ (-,A V ~B)
De Morgan Law
~ ( A V B) e+ (--,A A --,B)
De Morgan Law
--,(--,A) ++ A
Double Negation Law
(A -+ B)+-+ (--,B ~ --,A)
Contrapositive Law
( B --+ C) --+ ( ( A --+ B) --+ ( A --+ C ) )
First Syllogistic Law
( A --+ B) --+ ( ( B --+ C) --+ ( A --+ C ) )
Second Syllogistic Law
(A --+ (B --+ C)) e+ ((A A B) --+ C)
Transportation Law
AV-,A
The Excluded Middle Law
Aristotle was the first to present propositions (5), (6) and (8). Propositions (5) and (6) are even referred to as Aristotle's Syllogistic Laws.
II 1.4.4
The method of finding the value of a compound proposition with the use of truth tables is quite simple, as long as the proposition contains a small number of atoms. In the case of a proposition with three atoms, the corresponding truth table has 23 = 8 rows. The truth table of a proposition with 4 atoms already has 24 - 16 rows, and with 10 atoms, it has 21~ = 1024 rows!
Consequences and Interpretations
21
Furthermore, we cannot use t r u t h tables in Predicate Logic. Wc will thus study, in the following sections, more advanced and economical methods of determining the t r u t h value of a proposition or a set of propositions.
These methods will
constitute the base for the study of Logic Programming.
1.5
Consequences
and Interpretations
In Example 1.4.2 of the t r u t h table of proposition (A A B) -+ C we saw that this proposition takes value t when all three atoms A, B and C take value t. This is to say that the validity of (A A B) -+ C was a consequence of the fact that every proposition of S = {A, B, C} took value t [Chur56]. w e thus give the following (|efinition: D e f i n i t i o n 1.5.1: Let S be a set of propositions.
A proposition a is called a
c o n s e q u e n c e o f S (denoted by S ~ a), if for every t r u t h valuation V, for which V(~a) = t for any 99 E S, we can deduce t h a t V(a) = t holds. The set Con(S) = {o IS ~ a} is the set of all consequences of S. Formally:
(7 E Con(S)
r
(for every t r u t h valuation V) (for every ~a C S)
= t
= t)
Instead of (for every (p E S) (V(~) = t) we often write V[S] = {t} or even informally V[S] = t.
Remark
1.5.2: The symbols " r
I
1.5.1
" and " :~ " used in the above definition as
"if and only if" and "implies" are symbols of the m e t a l a n g u a g e .
The meta-
language is the language used to reason about eL formulae and to investigate their properties. Therefore, when we for instance say
~ ~a , proposition ~a is a
tautology, we express a judgement a b o u t ~a. " ~ T " is a m e t a p r o p o s i t i o n
of
PL, namely a proposition of tile PL metalanguage.
Tile metalanguage can also be formalized, just like the eL language. In order to avoid excessive and pedantic use of symbols, like for example -+ of the language and =v of the mctalanguage, we use as metalanguage, a careflllly and precisely formulated form of the spoken language.
22
Propositional Logic
We will end our comment about the metalanguage with another example: Let A be a proposition of PL. Then the expression " A ~
A " is a PL proposition,
whereas "if A is a PL proposition, then A is a PL proposition" is a proposition of the PL metalanguage, Example
m 1.5.2
1.5.3: Let S = { A A B , B -+ C } be a set of propositions.
Then
proposition C is a consequence of S, i.e., S ~ C. Proofi
Let us suppose t h a t V is a truth valuation validating all S propositions: and
V(A A B) = t
V ( B --~ C) = t .
Then we have, by Definition 1.3.3: V(A) ~ V(B) = t
(1)
and
V ( B ) -,~ V(C) = t
(2)
Then by (1) and Definition 1.3.2 we have V(A)=V(B)=t
(3)
(2) thus becomes t ~ V(C) = t
(4)
By Definition 1.3.2 and (4) we can conclude V ( C )
= t.
T h a t means that ev-
ery t r u t h valuation verifying all S propositions, verifies C too. C is therefore, a consequence of S , S ~ C. If we denote by Taut
m 1.5.3 the set of all tautologies, we can prove the following
corollary. C o r o l l a r y 1.5.4: Proofi
Con(O) = Taut,
where 0 is the e m p t y set.
Consider a C T a u t . Then for every t r u t h valuation V,
V(a) = t. Then
for every Y such that Y[f)] = {t},
v i e ] = {t}
=t
Then a C Con(O). Conversely: Consider a c C o n ( O ) . Then for every truth valuation V we have: For every 9 9 C 0 ,
ifV(~o)-t
then
V(a)-t
(1)
Consequences and Interpretations
23
But ~ has no elements. We can thus write: For e v e r y ~ C O ,
ifV(~)=f
(2)
then V ( a ) = t
By (1) and (2) we have V ( a ) = t, whatever the value of qo c ~. This means that for all truth valuations V, V ( a ) = t holds, a is thus a tautology,
m 1.5.4
Thus tautologies are not consequences of any particular set S , they are independent from S, for every set S of propositions, and they are only related to the truth tables of section 1.3. The underlined parts in the Proof of Corollary 1.5.4 reveal three very technical parts of the proof. The empty set has no elements. How can we therefore write the implications with the underlinings? Let us think of the third and fourth line of the definition of the truth table of proposition A --+ B (see page 17). If A has value f and B value t or value f, then A --+ B is t. This means that an implication A --+ B, with A being false, is always true! This is exactly what we use in the metalanguage, the language in which our theorems are proved. In (1) for example, "~o c ~" cannot be true because ~ has no elements. Not depending on whether "Y(~o) = t then Y ( a ) = t " is true or false, (1) is thus true! As mentioned before,
PL
deals with the study of sets of propositions.
The
following two definitions are related to the satisfiability of a set of propositions. D e f i n i t i o n 1.5.5: A set of propositions S is (semantically) c o n s i s t e n t , if there is a truth valuation to verify every proposition in S. Formally: consistent(S)
~
(there is a valuation V) [(for every a e S) ( V ( a ) = t)]
Tile fact that S is consistent is denoted by V ( S ) =
t.
For the consistent set S, we also use the terms v e r i f i a b l e or s a t i s f i a b l e which have the same import. S is correspondingly i n c o n s i s t e n t , n o n - v e r i f i a b l e or n o n - s a t i s f i a b l e if for every truth valuation there is at least one non-satisfiable proposition in S. inconsistent(S)
=~ (for every V) [(there is a e S) ( V ( a ) = f)] m 1.5.5
24
Propositional Logic
Example
1.5.6: The set of propositions S - {A A ~ B ,
A -+ B} is inconsistent.
Proof.-
Let us assume there is a t r u t h valuation V such thatfor e v e r y C E S ,
V(C)-t
Then:
V(AA-,B)-t
and
V ( A --+ B) - t
and
V(A) ~ V ( B ) -
which means that:
V(A) M V(-,B) - t
(1)
By (1) and Definition 1.3.2, we have V ( A ) -
V(A) - t
V(=B)-
and
f,
t
(2)
so:
V(B) - f
Then (2) becomes"
t ~f-t which contradicts Definition 1.3.2, where t ~
f -
f.
Consequently, no t r u t h
valuation can verify all propositions in S, and so S is inconsistent, Definition
1.5.7: A trllth valuation which satisfies a set of prot)ositions S is
called an i n t e r p r e t a t i o n
Iut(S),
m 1.5.6
of S. The set of all interpretations of S is denoted by
where"
Int(S)
-
{V i V t r u t h valuation and for e v e r y a E S , V(a) - t} m 1.5.7
We now present a corollary with some llseflll conclusions a b o u t consequences
and interpretations. C o r o l l a r y 1.5.8:
For the sets of propositions S, $1, $2 we have:
(1)
$1 C_ $2 =v Con(S1) c_ Con(S2)
(2)
S c_ Con(S)
(3)
Taut C_ Con(S), for any set of propositions S
(4)
Con(S) - con(Con(S))
Consequences and Interpretations
(5)
$1 c_ $2 =~ Int(S2) c_ Int(S1)
(6)
Con(S)-
(7)
e
{a I V ( a ) - t ,
25
for every V e Int(S) }
Con({~,... ,~,~}) r
e Taut
~, -~ ( ~ ~ (~, -~ ~ ) . . . )
Proof." (1)
Let us a s s u m e a E Con(S1). Let V be a t r u t h v a l u a t i o n such t h a t for every ~o C $ 2 ,
V ( ~ ) - t holds. We t h u s have V ( ~ ) - t for every ~ C $1, (since
S~ c_ $2). T h e n Y ( a ) -
(2)
t, since a C Con(S~). H e n c e Con(S~) c_ Con(S~).
If a c S, t h e n every t r u t h v a l u a t i o n V which v a l i d a t e s all the p r o p o s i t i o n s of S also validates a. T h u s a C Con(S). B u t t h e n S c_ Con(S).
(3)
Let us a s s u m e t h a t a C Taut. Let V be a t r u t h v a l u a t i o n such t h a t , for every ~0 C S ,
Y(~o) - t holds. T h e n Y ( a ) - t, t h u s a C Con(S). Hence
Taut C Con(S). (4)
By (2), we have S c_ Con(S). By (1), Con(S) c Con(Con(S)).
We j u s t
need to prove
Co~( Co.(S)) c Co~(S) Let us a s s u m e t h a t a E Con(Con(S)). t h a t for every ~ c S ,
Let V be a t r u t h v a l u a t i o n such
Y(qo) = t. T h e n for every 7- c C o n ( S ) ,
V(T) = t.
By the definition of Con(Con(S)), we thus have V ( a ) = t which m e a n s t h a t a e Con(S). Hence Con(Con(S)) c_ Con(S).
(5)
If V c Int(S2), t h e n for every a c $2 we have V ( a )
-
t. Since Sx C_ $2,
for every a e $ 1 , V(a) - t holds, a n d hence V C Int(S1).
(o)
If a C Con(S), t h e n for every V C Int(S) , V(a) - t. T h e n
e {~ I v ( ~ ) - t, for ~very Y e / ~ t ( S ) } F u r t h e r m o r e , if ~o c {a I V ( a ) - t, for every V E Int(S)} t h e n it is obvious t h a t ~ c Con(S).
26
Propositional Logic
(7)
Let V be a t r u t h
( ~ ) Let us assume that ~ e C o n ( { o . l , o ' 2 , . . . , o ' n ) ) . valuation. For V, we have the following possibilities: (a) For each o.i, 1 <_ i <_ n ,
V(o.i) -
t
(b) There is at least o n e o ' j , l < j < n ,
such t h a t V ( a j ) - f .
We shall analyse these two cases separately. (a) Since for every a i ,
with 1 _< i < n , and
V ( o'i ) - t
we have V(~) - t. v(o,,
~
e
,~,})
Co,({~,...
Thus ~)
-
v(,~,,)
..~ v ( ~ o )
-
t ~
t
-
t
If V ( o"k - 9 ( o"k + l --9 ( o'n - 9 r ) . . . ) ) - t, then
v(~k_~ -~ ( ~ ~ (~k+~ - ~ . . . ~ ( ~ -~ ~ ) . . . ) ) = v ( o k - ~ ) - ~ v ( ~ k ~ (~k+~ -~ . . . ( ~ =
t-"~t-
~ ~)...))
t
Hence (inductively),
V ( o ' 1 -+ (o'2 - ~ . . . (o'n -9 ~o)... ) ) -
t.
(b) Let r be the least natural n u m b e r for which V ( o ' , ) - f . Thus y(o,
-~ (o,+~ =
v(o,)
=
f ~
Since V ( o ' , - 1 ) v(~,_, =
~
. . . (~
-~ v(~,+~ v(o,+~
~...
(~
-+ ~)...)
( o , , --~ ~ ) . . . )
-
t
we have
- t, --~ ( o ,
--~...
- ~ ~ ) . . . ))
--~ . . . ( ~ , , - ~ ~ ) . . . ))
v(o,_,...
( ~ , , --~ ~ ) )
-
t ~
t
-
t
If V ( o ' k - 9 ( o ' k + l - 9 . . . ( o ' r - 9 . . . ( o ' n - 9 ~ ) . . . ) . . . ) ) - - t ,
v(~k_~ -~ ( ~ ~ (~k+~ -+ ... ( ~ ~ ~ ) . . . ) ) ) = v(o~_,)~
y ( ~ k ~ (~k+~ - ~ . . . ( ~ -~ ~ ) . . . ) )
= t ~ t - t
Hence V(o.1 -9 (o2 - 9 . . . (o'n -9 ~ ) . . - ) ) - t.
then
Adequacy of Logical Connectives- Normal Forms
27
In (a) as well as in (b), we have" V ( 0 . 1 --+ (0.2 - + . . . (0.,~ --+ g ) ) - - . ) ) - t for any t r u t h valuation V, and hence
(ax -+ (0.2 - + . . . (0.,~ -+ g ) ) - - . ) )
is
a tautology. (r
Let us assume t h a t 0.1 ----} (0"2 ---+" "" (fin ---+ ~ ) ' ' "
) is a tautology. Let
V be a t r u t h valuation such t h a t V(0.~) - t for every i E {1, 2, . . .n}. If we assume t h a t V(qo) - f , then V(0.n-+g))
-
V(an)--~V(q))
-
t~f
Therefore, if V(0.k --+ (0.k+X --+--. (0.,, --+ q D ) . . . ) ) - f ,
-
f. then
v(~k-~ ~ (~k -~ (~k+~ -+..-(~n -~ V)---)) = V(~k-~) "~ V(~k -~ ( ~ + 1 . . . (~,~ ~ ~)---))
Hence
V(al
=
t-,~f
~
(an
assumption that
+
t.
(p)...))
-
f
is a contradiction because of the
(al --+ (a2 - + . . . (an --+ ~o)...))
is a tautology. m 1.5.8
Corollary 1.5.8(7) provides us with a m e t h o d to determine w h e t h e r qD is a consequence of a finite set of propositions S, by checking in 2 n steps if tile right side of (7) is a tautology. However, for an infinite set S, this m e t h o d would require an infinite number of steps. Tile use of semantic tableaux becomes, in t h a t case, more appropriate.
1.6
Adequacy
of Logical Connectives-
Normal
Forms
The finding of the t r u t h value of a proposition, as the proof of consistency or lack of consistency of a set of propositions, often depends oil the n u m b e r and the kind of connectives appearing in the propositions. The five logical connectives which we use are the connectives one deals with more frequently in m a t h e m a t i c a l texts. In tile following paragraphs, it will be proved t h a t any set of logical connectives can be expressed by means of the set {-~, A , V}, and we will thus have proven t h a t the set of logical connectives {-~ , A , V} suffices to express any P L propositions [Smu168, Mend64, Schm60].
28
Propositional Logic
Definition every
1.6.1- A set Q of logical connectives is said to be a d e q u a t e ,
proposition there is a logically equivalent proposition which does not
PL
contain connectives different from those contained in Q.
Theorem Proofi
if for
1.6.2:
m 1.6.1
{--1 , A , V} is an a d e q u a t e set.
Let a ( A 1 , A 2 , . . .
, A k ) , be a proposition in which only the a t o m s A1, . . . ,
Ak appear 9 Let us c o n s t r u c t tile short t r u t h table of a: A1
A:~
...
Ak
a(A1, . . . , Ak)
O'11
O'1~
- . .
O'1, ~
(71
9
0"1/1
O'v~
9
a27
o
9 99
Or v
9
o
o
9
o
a27,
0"2 ,~
0"2,,
ant denotes the c o r r e s p o n d i n g t r u t h value on row n and in c o l u m n g, and a~ is the t r u t h value of a ( A 1 , . . . , Ak) in the nth row. (1)
Let us suppose t h a t in the last c o l u m n there is at least one t. We will prove t h a t there is a proposition, the short t r u t h table of which has tile same last column as tile above table, and which only uses the connectives -1, A and V. In t h a t case, the proposition will be logically equivalent to a ( A 1 , . . . ,Ak) 9 For any a t o m A , A t denotes A and A / denotes -~A. From the
n TM r o w
we fornl the conjunction" t,,"
O'~, 1
(A t
A...AA
O'~,k
k
)
which only contains ttle connectivcs -7 and A. Let g l , . . . , gm be the rows with a t oil the last column. T h e n the proposition:
ttl V...
V t~.,
is the proposition we are seeking since its short trllth table has a t ill its last column, exactly on those specific rows where tile short t r u t h table of a(A1,...
, A k ) has a t.
Adequacy of Logical Connectives - Normal Forins
(2)
29
If the last column does not contain any t, then the proposition is false, and thus logically equivalent to (A A -~A), where A is any propositional symbol.
II 1.6.2
On that account, using the technique of the above theorem, we can express any proposition by means of the connectives 9 , A and V. The resulting equivalent proposition is said to be ill a D i s j u n c t i v e N o r m a l F o r m (DNF). Let F be a proposition such t h a t the atoms of F are A i , . . . , An. The general form of F in a DNF is: DNF(F) :
(Ai, A . . . A Ai,,) V (A2~ A . . . A A2,,) V . . . V (Ak, A . . . A Ak,,)
where Aij E { A i , . . . , An} or Aij E {-~AI,...,-~A,~}, i.e., Aij are atoms or negations of the atoms of F and ill every conjunctive component of DNF(F), each atom of F occurs only once, negated or unnegated. The dual concept to DNF is called a C o n j u n c t i v e
Normal
F o r m (CNF), and
has tile following h)rIn: CNF(F) : Example
(At, V . . . V A i . ) A (A2, V . . . V A2n) A . . . A (Ak, V . . . V Ak~)
1.6.3: Find the DNF of proposition F , A
B
C
F
1
t
t
t
t
2
t
t
f
f
3
t
f
t
f
4
t
f
f
f
5
f
t
t
t
6
f
t
f
f
7
f
f
t
f
8
f
f
f
t
given its short t r u t h table:
We follow the method described ill Theorem 1.6.2.
Step 1 : We find all the rows with a t in the last column. Those are rows 1, 5, 8. Step 2: DNF(F) -- t l V t5 V t8 -- (A A B A C) V (~A A B A C) V (~A A ~ B A ~C) II 1.6.3
30
Propositional Logic We will now give two interesting and useful corollaries concerning the adequacy
of concrete sets of logical connectives. Corollary 1.6.4:K1 Proofi
= {9, V} and K2 = {-7, A} are adequate sets.
Using truth tables, we can prove that: A - + B =_ - - , A A B ,
A ~ B -
and
(A ~ B) A ( B - + A) ,
A A B = -~(-~A V ~ B ) Consequently, any
PL
proposition can be expressed with connectives from {-7, v}.
K1 is thus adequate. We can also prove that K2 is adequate by: AVB
_= - ( - ~ A A - B ) , A~B
Example
A~B
=
and
= --,AVB,
I1 1.6.4
(A-+B) A(B-+A)
1.6.5: Apart from connectives -7, A, V,--~ and +-~, we can create other
connectives such as I and : defined by the following truth tables:
A
B
A[B
A
B
A:B
t
t
y
t
t
y
t
f
t
t
f
f
f
t
t
f
t
f
f
f
t
f
f
t
C o r o l l a r y 1.6.6: Proofi
(El):
m 1.6.5
The sets E1 = { [ } and E2 = { : } are adequate.
Intuitively A [ B
A ] B = ~(A A B).
means that A and B cannot be both true. Then
(This equivalence can easily be verified with truth tables).
Then: = -~(AAA)
A[A
=
-,A
thus negation is expressed by I 9 In the same way, for conjunction AAB
(E2):
-
--,~(AAB)
-
--,(A I S ) -- (A I S ) [ ( A
IS )
The negation then becomes: -~A _= -~(A v A) _-- A : A, and tile conjunc-
tion: A A B
-
-,(-,AV-,B)
-
(A:A):
(B:B)
m 1.6.6
Semantic Tableaux
31
Note that {L} and {: } are the only singletons of logical connectives which are adequate (see Exercise 20). Furthermore, every set of logical connectives with two elements, one of them being -7 and the other being one of A, V and --+ is adequate, [Schm60, Smu168, Mend64]. R e m a r k 1.6.7: The conversion of a proposition cr to another proposition ~o, which
is logically equivalent but containing connectives different from those in a, is very useful.
For example, in the Resolution Proof Method, which will be developed
in the following sections, we express all implications, i.e., propositions of the kind a --+ ~a, using the connectives --, A and V. For this conversion we use tile equivalence A-+
B
-
--,AvB
which can be verified by truth tables,
m 1.6.7
In the following sections we will describe the proof methods used in PL. Their presentation will simplify the introduction to Logic Programming.
1.7 Semantic Tableaux Proof mctho(ts are algorithmic procedures which we can follow to find whether a proposition is or is not a tautology and whether a set of propositions can or cannot be satisfied.
These methods are included in the development of Auto-
matic Theorem Proving, a theory which constitutes a basic application of Logic Programming. The first method of algorithmic proofs which will be described uses semantic tableaux.
Gentzen (German logician, 1909-1945) was tile first to prove in 1934
that all tautologies are produced by applications of certain rules, that for every tautology ~0 there is a certain procedure resulting in ~, [Klee52, Raut79]. Proof theory was used in 1955 by Beth and Hintikka to create an algorithm determining whether a proposition is a tautology or not. With tile help of Beth semantic tableaux, or simply semantic tableaux, we can examine what the possibilities are that a given proposition takes truth value t or truth value f. The semantic tableau of a compound proposition K is constructed inductively, based on the semantic tableaux of the propositions appearing in K. We thus start by defining atomic semantic tableaux [Smu168, Fitt90]:
32
Propositional Logic
D e f i n i t i o n 1.7.1" (1) Let a be a proposition, f a denotes tile assertion "a is false" and ta denotes the assertion that "a is true", ta and f a are called s i g n e d f o r m u l a e . (2) According to the inductive definition of propositions, tile atomic semantic tableaux for propositions A, al, a2 and for the propositions formed by A, al and a 2 are as in the following table.
la
2b
2a
Ib
t(orl A a2)
f(ol ^ ~
tA
fA
tal
forl
for2
ta2 3a
3b
4b
4a
f(orl V 0"2)
t(-~a)
f(-~or)
t(~l V 0"2)
fcrl r
ta
tal
ta2
f~,2 5b
5a
6a f(orl ~ 0"2)
6b t(orl +-~ 0"2)
f(orl +-'>0"2)
t(ol ~ o2)
fal
tal
t~l
f~l
tal
fG1
f~2
ta2
]ae
f G2
ta2
ta2
Intuitively, the signed formula tA or f A is regarded as tile assertion that A is respectively t r u e or false. In the atomic semantic tableau (4a), the assertion that a l V a2 is t r u e t h u s r e q u i r e s a l to b e t r u e or a2 to b e t r u e ( b r a n c h i n g ) , whereas in the tableaux (5b), the assertion that al +
a2 is false requires
Semantic Tableaux
33
al to be true and a2 to be false (sequence). T h a t means that in an atomic semantic tableaux, a branching denotes a disjunction whereas a sequence stands for a conjunction.
II 1.7.1
To build the semantic tableau of a compound proposition K, we start by writing the signed formula t K or f K , at tile origin of the semantic tableau. We then unfold the semantic tableau of K, according to Definition 1.7.1. We now give an example which will be followed by the formal definition of the general rule for construction of semantic tableaux. E x a m p l e 1.7.2: Let K 9 (A A ~A) V (B V (C A D)) be a proposition. The atoms of K are A, B, C and D. We start the semantic tableau with origin t K .
    t((A ∧ ¬A) ∨ (B ∨ (C ∧ D)))
          /              \
    t(A ∧ ¬A)        t(B ∨ (C ∧ D))
       tA              /        \
     t(¬A)           tB       t(C ∧ D)
       fA                        tC
       ⊗                         tD
      κ1             κ2          κ3
This is a complete semantic tableau with three branches, namely three sequences κ1, κ2 and κ3. The branches start from the origin. The left branch, κ1, is contradictory since it contains the mutually contradictory signed formulae tA and fA. The contradiction of a branch is denoted by the symbol ⊗ at the bottom of the branch. The other two branches are not contradictory. By means of the semantic tableau of K, we see that the tK hypothesis is correct under certain conditions, such as for instance tB in the branch κ2 or tD in the branch κ3. It is however sometimes false, as in branch κ1.
It is easy to set apart the constituent atomic semantic tableaux from the above semantic tableau. For example,

    t(A ∧ ¬A)
       tA
     t(¬A)

is the atomic semantic tableau 2a.  ■ 1.7.2
We now define the concepts needed for the construction of semantic tableaux.

Definition 1.7.3: (1) The nodes of a semantic tableau are all the signed formulae which occur in the tableau. (2) A node of a semantic tableau is said to be used if it occurs as the origin of an atomic semantic tableau. Otherwise it is said to be unused. (3) A branch of a semantic tableau is said to be contradictory if, for a certain proposition σ, tσ and fσ are nodes of the branch. (4) A semantic tableau is said to be complete if none of its non-contradictory branches has unused nodes. Otherwise it is said to be incomplete. (5) A semantic tableau is contradictory if all its branches are contradictory.  ■ 1.7.3
Definition 1.7.4: Inductive construction of semantic tableaux: We construct a semantic tableau for proposition K as follows: We start with the signed formula tK (or fK) as the origin and we proceed inductively.

Step n: We have a semantic tableau Tn.
Step n + 1: The tableau Tn will be extended to the tableau Tn+1 by using certain nodes of Tn which will not be used again. From the unused nodes of Tn nearest to the origin, we select the one furthest to the left. Let X be that node. We now extend every non-contradictory branch passing through X by concatenating an atomic semantic tableau with origin X at the end of each of these branches. What results is the semantic tableau Tn+1 (in practice, one avoids rewriting the node X, since it is already a node of the non-contradictory branch). The construction finishes when every non-contradictory branch has no unused nodes.  ■ 1.7.4
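The construction just described can be read as a decision procedure. The following Python sketch is our own illustration of Definitions 1.7.1-1.7.4 (the names closes and beth_provable are invented here, and propositions are encoded as nested tuples over string atoms): it expands signed formulae with the atomic tableaux and reports whether every branch becomes contradictory.

    # Propositions: atoms are strings; compound propositions are tuples such as
    # ('and', p, q), ('or', p, q), ('not', p), ('->', p, q), ('<->', p, q).

    def closes(branch):
        # branch is a list of signed formulae (sign, proposition), sign in {'t', 'f'}
        literals = {(s, p) for s, p in branch if isinstance(p, str)}
        if any(('t', a) in literals and ('f', a) in literals
               for a in {p for _, p in literals}):
            return True                                  # contradictory branch
        for i, (sign, prop) in enumerate(branch):
            if isinstance(prop, str):
                continue                                 # atoms cannot be expanded
            rest = branch[:i] + branch[i + 1:]           # the node is now used
            op = prop[0]
            if op == 'not':
                return closes(rest + [('f' if sign == 't' else 't', prop[1])])
            a, b = prop[1], prop[2]
            if (op, sign) in (('and', 't'), ('or', 'f')):           # sequence
                return closes(rest + [(sign, a), (sign, b)])
            if (op, sign) in (('and', 'f'), ('or', 't')):           # branching
                return closes(rest + [(sign, a)]) and closes(rest + [(sign, b)])
            if op == '->':
                if sign == 't':
                    return closes(rest + [('f', a)]) and closes(rest + [('t', b)])
                return closes(rest + [('t', a), ('f', b)])
            if op == '<->':
                return (closes(rest + [('t', a), ('t' if sign == 't' else 'f', b)]) and
                        closes(rest + [('f', a), ('f' if sign == 't' else 't', b)]))
        return False                     # complete, non-contradictory branch found

    def beth_provable(prop):
        # Definition 1.7.6: K is Beth-provable if the tableau with origin fK closes.
        return closes([('f', prop)])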
The following example, a contradictory semantic tableau, clarifies the above construction.

Example 1.7.5: Peirce's Law

    ((A → B) → A) → A

(1) We start with the assertion that ((A → B) → A) → A is false:

    f(((A → B) → A) → A)
      t((A → B) → A)
             fA
         /        \
    f(A → B)      tA
       tA          ⊗
       fB
       ⊗

The claim that Peirce's Law is false has led us to a contradictory semantic tableau; therefore the corresponding formula is true.
(2) If we start with the assertion that the proposition is true, the conclusion still remains the same:

    t(((A → B) → A) → A)
         /           \
    f((A → B) → A)    tA
       t(A → B)
          fA
        /    \
      fA      tB

We observe that there is no contradictory branch, and in every branch the origin can indeed be made true: with fA, or with tB and fA, or with tA, the proposition ((A → B) → A) → A becomes true (even if fB, then tA or fA holds). Hence Peirce's Law is logically true.  ■ 1.7.5
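Assuming the closes/beth_provable sketch given after Definition 1.7.4, this example can be replayed mechanically (the encoding and names are ours, not the book's):

    # ((A -> B) -> A) -> A in the tuple convention used above.
    peirce = ('->', ('->', ('->', 'A', 'B'), 'A'), 'A')
    print(beth_provable(peirce))   # True: the tableau with origin f(peirce) closes
    print(beth_provable('A'))      # False: a lone atom is not a tautology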
Intuitively:
If a complete semantic tableau with an fK origin is found to be contradictory, that means that we have tried all possible ways to make proposition K false and we have failed. Then K is a tautology.

This central idea is expressed by the following definition:

Definition 1.7.6: A Beth-proof of a proposition K is a complete contradictory semantic tableau with an fK origin. A complete contradictory tableau with a tK origin is called a Beth-refutation of K. Proposition K is said to be Beth-provable if it has a Beth-proof. K is called Beth-refutable if there is a Beth-refutation of K. The fact that K is Beth-provable is denoted by ⊢B K.  ■ 1.7.6
As we will prove in Theorem 1.10.7 and Theorem 1.10.9, every tautology is Beth-provable (completeness of Beth-proofs) and conversely every Beth-provable proposition is a tautology (soundness of Beth-proofs).
Example 1.7.7: Let us assume that the following propositions are true:
(1) George loves Maria or George loves Catherine.
(2) If George loves Maria, then he loves Catherine.
Who does George finally love? Let us denote "George loves Maria" by M and "George loves Catherine" by K. Then

    (1) ≡ M ∨ K    and    (2) ≡ M → K ≡ ¬M ∨ K.

We form the proposition (1) ∧ (2), i.e., (M ∨ K) ∧ (¬M ∨ K), which is by hypothesis true. We wish to know whether George loves Catherine, or equivalently, whether tK. Let us suppose that he does not love her, that is, fK. We then construct a semantic tableau with an fK origin.

    fK
    t((M ∨ K) ∧ (¬M ∨ K))
    t(M ∨ K)
    t(¬M ∨ K)
       /        \
     tM          tK
    /    \        ⊗
  t(¬M)   tK
    fM     ⊗
    ⊗

In step 2 we added t((M ∨ K) ∧ (¬M ∨ K)), since (1) and (2) are given to be true. Starting with fK, we finish with a contradictory tableau, which means that proposition K is always true; in other words, George loves Catherine!!! If we construct a semantic tableau with an fM origin, we will end up with a non-contradictory tableau, and thus we will not be able to conclude whether George loves Maria or not!!  ■ 1.7.7
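Using the same sketch introduced after Definition 1.7.4, this reasoning amounts to asking whether the tableau that starts from fK together with the assumptions (1) and (2) closes (a hedged illustration under our tuple encoding, not the book's notation):

    assumptions = ('and', ('or', 'M', 'K'), ('or', ('not', 'M'), 'K'))
    print(closes([('f', 'K'), ('t', assumptions)]))   # True: George loves Catherine
    print(closes([('f', 'M'), ('t', assumptions)]))   # False: nothing follows about Maria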
1.8 Axiomatic Proofs

Propositional Logic, just like other mathematical systems, can also be presented as an axiomatic system with logical axioms and derivation rules instead of the semantic tableaux. The axioms are some of the tautologies, and a derivation rule R derives a proposition σ from a sequence of propositions σ1, σ2, ..., σn. We will now give a short description of such an axiomatic presentation of PL [Schm60, RaSi70, Mend64].

Definition 1.8.1: The Axioms: We regard as an axiom any proposition of the following form:

    (1)  φ → (τ → φ)
    (2)  (φ → (τ → σ)) → ((φ → τ) → (φ → σ))
    (3)  (¬τ → ¬φ) → (φ → τ)

Notice that the propositions φ, τ and σ can be replaced by just any propositions. We thus have axiomatic schemes (or schemes of axioms), leading to an unlimited number of axioms. One can easily verify that all these axioms are well-formed PL formulae and, of course, tautologies.  ■ 1.8.1
Definition 1.8.2: The Modus Ponens rule: We use only one derivation rule, the rule of Modus Ponens, which states that proposition τ can be derived from propositions φ and φ → τ. Modus Ponens (a mode, according to Diogenes Laertius [Zeno76]) is denoted by:

    φ    φ → τ
    ----------        (1)
        τ

or even by

    φ, φ → τ  ⊢  τ    (2)

■ 1.8.2
In (1), which is the characteristic form of a logical rule, the horizontal line separates the hypotheses from the conclusion. In (2), the symbol "⊢" denotes derivation through the axiomatic system. We consider the three axioms of Definition 1.8.1 as formulae derived in our axiomatic system. New propositions are then produced by means of these three axioms and the Modus Ponens rule. The following example shows how axioms and Modus Ponens can be applied for the derivation of the PL formula A → A.

Example 1.8.3: Prove that ⊢ A → A.
Proof:

    ⊢ A → ((B → A) → A)                                       (1)

by the first axiom. By the second axiom we have

    ⊢ (A → ((B → A) → A)) → ((A → (B → A)) → (A → A))         (2)

By (1) and (2) and Modus Ponens, we have

    ⊢ (A → (B → A)) → (A → A)                                 (3)

But ⊢ A → (B → A) by the first axiom, and then, with the Modus Ponens rule, (3) leads to ⊢ A → A. Thus proposition A → A is derived in our axiomatic system.  ■ 1.8.3
The next theorem allows us to replace one of the subformulae of a PL proposition by a logically equivalent one. The proof can be found in Exercise 22.

Theorem 1.8.4: Substitution of equivalences:
If proposition σ ↔ σ1 is derived in PL and σ is a subformula of proposition φ, then proposition φ ↔ φ1 can also be derived in PL, where φ1 is the proposition resulting from φ by replacing none, one, or more than one occurrence of σ by its equivalent σ1. Formally:

    ⊢ σ ↔ σ1   ⟹   ⊢ φ ↔ φ1

■ 1.8.4
We will now give the formal definition of the proof of a proposition within the axiomatic method.
Definition 1.8.5: Let S be a set of propositions. (1) A proof from S is a finite sequence of propositions σ1, σ2, ..., σn such that, for every 1 ≤ i ≤ n: (i) σi belongs to S, or (ii) σi is an axiom, or (iii) σi follows from σj, σk, 1 ≤ j, k < i, by application of Modus Ponens. (2) A proposition σ is S-provable from a set S of propositions if there is a proof σ1, ..., σn from S such that σn coincides with σ. Formally, we write S ⊢ σ. (3) Proposition σ is provable if ⊢ σ, that is to say, if σ is derived in the axiomatic system of Definition 1.8.1 with the use of Modus Ponens. Obviously, the concept of the S-provable proposition coincides with the concept of the provable proposition for S = ∅.  ■ 1.8.5
Example 1.8.6: We give the proof of ¬B → (C → A) from S = {A}:

    (1)  A                                   A ∈ S
    (2)  A → (C → A)                         axiom 1
    (3)  C → A                               Modus Ponens on (1) and (2)
    (4)  (C → A) → (¬B → (C → A))            axiom 1
    (5)  ¬B → (C → A)                        Modus Ponens on (3) and (4)

■ 1.8.6
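The notion of a proof from S can itself be checked mechanically. The sketch below is our own illustration (invented names, the same tuple encoding as in the earlier sketches, and axiom instances taken on trust rather than pattern-matched): it verifies the Modus Ponens steps of a justified proof and then replays Example 1.8.6.

    # Each proof line carries a justification: 'S' (assumption), 'ax' (axiom
    # instance, assumed already verified) or ('MP', j, k), meaning "by Modus
    # Ponens from lines j and k".

    def check_proof(lines, S):
        props = []
        for i, (prop, just) in enumerate(lines, start=1):
            if just == 'S':
                assert prop in S, f"line {i} is not an assumption"
            elif just == 'ax':
                pass                                  # trusted axiom instance
            else:
                _, j, k = just
                assert props[k - 1] == ('->', props[j - 1], prop), \
                    f"line {i}: Modus Ponens does not apply"
            props.append(prop)
        return True

    # Example 1.8.6 in this notation: a proof of not-B -> (C -> A) from {A}.
    A, B, C = 'A', 'B', 'C'
    proof = [
        (A, 'S'),                                                       # (1)
        (('->', A, ('->', C, A)), 'ax'),                                # (2)
        (('->', C, A), ('MP', 1, 2)),                                   # (3)
        (('->', ('->', C, A), ('->', ('not', B), ('->', C, A))), 'ax'), # (4)
        (('->', ('not', B), ('->', C, A)), ('MP', 3, 4)),               # (5)
    ]
    print(check_proof(proof, {A}))   # True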
Note that if proposition σ is provable from S and S is an infinite set, then σ is provable from a finite subset of S, since proofs are always finite. The theorem below is fundamental in proof theory.

Theorem 1.8.7: Theorem of Deduction: Let S be a set of propositions and let K, L be two PL propositions. Then:

    S ∪ {K} ⊢ L   ⟺   S ⊢ K → L

Proof: (⇐) If S ⊢ K → L, there is a proof of K → L denoted by σ1, σ2, ..., σn, where σn is K → L and, for every i ∈ {1, ..., n}, σi is an axiom, or σi ∈ S, or σi is derived from two preceding propositions by an application of Modus Ponens. Thus the sequence σ1, σ2, ..., σn, σn+1, σn+2, where σn+1 is K and σn+2 is L, is a proof of L from S ∪ {K}: for every i ∈ {1, 2, ..., n, n+1}, σi is an axiom or σi ∈ S ∪ {K}, and σn+2 is derived from σn and σn+1 by Modus Ponens.

(⇒) By the assumption, we know that there is a proof of L from S ∪ {K}, denoted by L1, L2, ..., Ln, where Ln is L. Let us consider the following sequence of propositions:

    K → L1
    K → L2
    ...
    K → Ln

This sequence is not a proof. However, it becomes a proof if we "add" some propositions between its successive terms, using the following inductive method. For L1 we have the following cases: L1 is an axiom, or L1 belongs to S, or L1 is K. By axiom (1) we have L1 → (K → L1). In the first two cases we thus add L1 and L1 → (K → L1) above K → L1. In the last case, the theorem holds by Example 1.8.3.

Let us assume that we have completed the sequence up to K → Lm, so that the first m terms of the sequence constitute a proof of K → Lm. We will now add ν propositions to the sequence, between the propositions K → Lm and K → Lm+1, so that the first m + 1 terms of the sequence, completed by these ν propositions, constitute a proof of K → Lm+1. Lm+1 is an axiom, or it belongs to S, or it is K, or it is derived from Lj, Lk (1 ≤ j, k ≤ m) by Modus Ponens; in the last case there is an Lk of the form Lj → Lm+1. For the first three cases, we work just as we did above, by adding between K → Lm and K → Lm+1 the propositions Lm+1 and Lm+1 → (K → Lm+1). In the fourth case, we know by the inductive assumption that the sequence has been completed so that the first m terms contain proofs of K → Lj and K → (Lj → Lm+1). We thus "add" the proposition

    (K → (Lj → Lm+1)) → ((K → Lj) → (K → Lm+1))

which is legitimate since this proposition is an instance of axiom (2). K → Lm+1 is thus derived using two successive applications of Modus Ponens. Note that we have used only two of the three axioms of the axiomatic system to prove the deduction theorem.  ■ 1.8.7
Propositional Logic
The axioms of PL arc often called logical axioms. We usually define a theory extending the axiomatization of PL by a set S of additional axioms which characterize tile theory.
The theorems of the theory S are the elements of the set
{qolS I- qo} (see Remark 1.8.8). R e m a r k 1.8.8:
(1) The axiomatic system: The axioms of PL and tile Modus Ponens rule constitute the axiomatic system of Frege-Lukasiewicz [Heij67, Boyc68, Schm60].
Frege (German, 1898-1925) was the first philosopher and lo-
gician to define a formal language suitable for logic. Lukasiewicz (Polish, 1878-1926) dealt with the axiomatization of eL. (2) Modus Ponens: If cr and a ~
r are Beth-provable, then a and a --+ r
are logically true, thus r is also true (why?).
There is an algorithmic
method which constructs a Beth-proof of r from the Beth-proofs of a and cr --+ T. This method is an application of the theorein of Gentzen (Gentzcn Hauptsatz), however its proof is beyond the scope of this book. (3) Theorems: A theorem is any proposition appearing in a proof. This means that the theorems of the theory S are exactly the elements of the set of propositions {a [ S t- a}. We usually have the conclusion occurring as the last proposition of a proof, but actually every initial part of a proof is also a proof. (4) The selection of axioms: The axiomatic proof method is sound and con> plete, as we will show in the following chapters.
The axiomatic system
chosen is thus complete, meaning that every tautology can be proven from the axioms by successive applications of Modus Ponens. (5) Beth-provability: Since the axioms themselves are logically truc propositions, they are also Beth-provable, and Modus Poncns leads from logically true propositions to propositions which are also logically true. Thus, each ttmorem is Beth-provable. (6) Axioms and rules: We could replace one or even more than one of tile axioms of the PL axiomatic system by rules. For example, the third axiom
( - ~ - ~ -~r) -+ (r -~ ~)
Resolution
43
could be replaced by the rule ~
~ 97-
The selection between axioms and rules is usually a m a t t e r of personal valuation of the specific theoretical requirements. (7)
Derivationsfrom axioms: To prove a proposition from the axioms, we have to try various combinations in order to determine the adequate combination of propositions for the application of Modus Ponens and the axioms. A single derivation, for instance I-- (A ~ B ) - + ((C ~ A) --+ (C -+ B)) thus becomes difficult and time consuming, even with the hint that the first and the second axiom shall both be used twice. On the contrary, Beth-proofs, as defined in Definition 1.7.6, provide a systematic algorithmic method with a certain result. For this reason we prefer to work with Beth-proofs.
m 1.8.8
1.9 Resolution Terminology and Notation in Logic Programming The resolution method is the most efficient
PL
algorithmic proof method and,
as we will see in the second chapter, for Predicate Logic as well. It constitutes the proof method on which the Logic Programming language PROLOG is based. The resolution method is a proof method by refutation, just like Beth-proofs. It generally has a lot of similarities with the Beth-proof method, but it is more suitable to the writing of logic programs, where the programming language is almost the
PL
language.
In order to introduce the resolution, we need to define several essential concepts of contemporary Logic Programming.
44
Propositional Logic
D e f i n i t i o n 1.9.1: A l i t e r a l is any a t o m or its negation.
II 1.9.1
For example, -~A, B , -~C are literals. We know that we are able to develop any PL proposition into a Conjunctive Normal Form, CNF, which is equivalent to the initial proposition. A CNF is in fact a di.:junction of literals, such that in every disjunction no literal occurs more than once. We now present an algorithm for the construction of a CNF for a given proposition, which operates a lot faster than constructing the proposition's truth table, then selecting the columns, etc.. This algorithm comes as an application of (i) the laws of De Morgan: --(AAB)~AV~B
--,(A v B) +->,--,A A ~ B
and
(ii) the associative properties of A and V:
( A A B ) AC <-~ A A ( B A C )
(A v B) v C ~ A v (B v C)
and
(iii) the commutative properties of A and V:
AABe+BAA
AVBe+BVA
and
(iv) the distributive properties of A over V and of V over A
AA(BVC)~(AAB)V(AAC)
and
A V(BAC)~(AVB)A(AVC)
(v) the propositions
A v A ++ A,
AAA
++ A,
A V (B A -~B) e+ A
A A (B v --B) +~ A, and
-.--A ~ A
as well as the theorem of substitution of equivalences. (As an exercise, prove that the above propositions are tautologies). This method will be presented by means of an example.
Resolution
E x a m p l e 1.9.2: Develop proposition S into S :
45 a CNF,
where
-~((A v B) A (-~A V-~B)) A C
Step 1 : We move the negations forward into tile parentheses, using the Laws
of De Morgan: a :
S ~
b:
S ~ (-~A A ~B) V (-,~A A -~-~B)] A C
[~(A V B) V-~(~A V ~B)] A C
Step 2: We use commutative and associative properties in order to bring
together literals of the same atom.
We can then simplify double
negations, double terIns of tile kind A V A or A A A, and superfluous terms of the kind B A -~B or B V ~B, by using tile theorem of substitution of equivalences: S ~ [(-~A A ~B) V (A A B)] A C
Step 3 : By the distributive properties we have:
S e+ [((--A A -~B)
V
A) A ((--A A -,B)
We then continue by repeating tile 2 nd and
3 rd
V
B)] A C
steps until the final
CNF
is
determined. Step 1':
S ~ ( ( ~ A A --,B) V A) A ((--,A A --,B) V B) A C
Step 3 t :
e+ (~A v A) A (-~B V A) A (-~A V B) A (~B V B) A C
Step 2 ' :
e+ (=B v A) A (=A V B) A C
which is the
CNF of
S we are seeking.
II 1.9.2
The last h)rm of S is a conjunction of literals' disjunctions, and is equivalent to the initial formula. This algorithm generally finishes when the following form of S is determined: (A~VA~V...V
A~) A
...
A
(A~'VA~V...VA~.)
where the elements of {A~,... A k , , . . . , A~',... A~} are literals.
(,)
46
P r o p o s i t i o n a l Logic
In the context of the resolution proof method, formulating a proposition as a set of literals proves to be very practical. For instance, the proposition in the first parenthesis in (.) becomes:
{A1,A2,... ,Ak,} We consider t h a t such a set denotes a disjunction of literals, namely a
PL
propo-
sition. We now give the formal definition of the set-theoretical form of a proposition. D e f i n i t i o n 1.9.3: The disjunction of a finite set of literals can be set-theoretically represented as a set, the elements of which are the considered literals. This set is called a c l a u s e . A clause is thus equivalent to a
PL
disjunctive proposition.
For technical reasons, we also introduce the concept of the e m p t y c l a u s e , a clause which contains no literals and is always non-verifiable. An empty clause is denoted by i-l. Definition 1.9.4:
II 1.9.3 The conjunction of a finite set of clauses carl be set-theoretically
represented as a set, the elements of which are these clauses. This set is called a set of clauses. namely a Example
PL
A set of clauses thus constitutes a conjunction of disjunctions,
conjunctive proposition,
m 1.9.4
1.9.5: The set of clauses { { A , B } , {~B,--C}, {D}} 1
2
3
represents the proposition: ((AVB) 1
Remark
A (~BV-~C) A D) 2
3
m 1.9.5
1.9.6:
(1) A t r u t h valuation obviously verifies a set of clauses if it verifies every clause in the set. For example, let S = {{A, B } , {--C}} be a set of clauses and let V be a t r u t h valuation such that:
V(A) = V ( B ) =
V(C)= t
Resolution
47
Then V does not verify S, since it does not verify one of its elements, namely {-~C}. (2) Naturally, we can also consider the e m p t y
s e t of c l a u s e s {0}, which
is not to be confused with the empty clause VI. Formally, every truth valuation verifies the empty set of clauses, since it validates every one of its elements (there is no proposition contained in the clauses of {0}, see the Proof of Corollary 1.5.4). On the contrary, each set of clauses which contains the empty clause can be verified by no truth valuation, because [-7 is not verifiable. Intuitively, the empty set of clauses denotes that there is no "claim" (proposition) for the "world" (the set of propositions), whereas tile empty clause denotes that we at least have one proposition for our "world", the clause l-], which always creates contradictions by making our "world" inconsistent, namely non-verifiable,
ll 1.9.6
In Logic Programming, as well as in most of the PROLOG versions, the above symbolism has prevailed: Let S be the proposition
A1 V . . . V Ak V (-~B1) V . . . V (-~B1) where A 1 , . . . , A k , S ~
B1...,B1
are atoms. Then we have:
A1 v . . . v Ak V---,(B1 A . . . A B1)
by De Morgan
(B1A...AB1)-+(A1
by (--B V A) e+ (B -+ A)
V...V
Ak)
and finally S e+ ((A1 V . . . V Ak) +-- (B1 A . . . A Be)
(1)
For the use of +- as a logic connective see also R e m a r k 1.2.9. If now, instead of the logical connectives A, V and +-- we wish to use the corresponding symbols ", " (comma), " ; "
(semicolon), and " ' - "
(neck symbol), then S can be equivalently
denoted by: A1; . . . ; Ak "- B1, . . . , Be
If in proposition (2), k -
(2)
1 holds, then we have: A I " - B1, . . . , B e
(3)
48
Propositional Logic
D e f i n i t i o n 1.9.7: Each clause of form (3) is a H o r n c l a u s e . Tile atom A is tile h e a d or thc g o a l of S, and tile conjunctive components B 1 , . . . , Be, arc tile t a i l or the b o d y or the s u b g o a l s of S.
II 1.9.7
The intuitive interpretation of a Horn clause is that, for a goal A to be valid, subgoals B 1 , . . . , Be also need to be valid. D e f i n i t i o n 1.9.8: If k - 0 in a (2) form clause, then the clause 9- B t , . . . , B e is called a p r o g r a m
(4)
g o a l or d e f i n i t e goal.
If g - 0, the clause:
(5)
AI'is called a u n i t c l a u s e or fact. Remark
II 1.9.8
1.9.9:
(1) A given set S of clauses can informally be considered as a database, see also section 3.1.1, since the clauses in S represent information about the relationship of the atoms they contain. (2) Tile verification of goal A in tile clause A ' - B1, . . . , B1 is inferred from the validation of subgoals B 1 , . . . , B1. In such a case, goal A is said to s u c c e e d , or t h a t there is a proof of A. Otherwise, goal A is said to fail. (3) Tile form (4) Horn clause, denoting the absence of a goal, states that at least one of the Bi, 1 <_ i < g, fails. The form (5) Horn clause means that A1 always succeeds.
In t h a t case, A1 constitutes a claim, a fact of our
database,
m 1.9.9
Generally speaking, r e s o l u t i o n is a deductive rllle through which we can derive a proposition in a clause from two other propositions. Before describing formally the method, we shall give. an example. Example
1.9.10: Consider the following clauses {-~A, B}
{A, c} Using resolution, we can deduce
{ B , C}
Resolution
49
Tile intuition in the use of such a rule becomes very clear when we reformulate the previous clauses according to tile classical
formulation.
PL
given propositions
(-,A V B) (ARC)
conclusion
( B V C)
This rule is an application of the tautology (=A V B) A (A V/3) --+ (B V C) From the Completeness Theorem 1.10.9, we know that tautologies are derivable by means of the axioms and Modus Ponens. Thus F- ( = A V B ) A (A V B) --+ (B V C) Then from the Theorem of Deduction, Theorem 1.8.7, we obtain {(--A V B) A (A V B)} !- (B V C) The rule of resolution is thus derivable in
m 1.9.10
PL.
As a generalization of the previous example, let us consider as given the following clauses"
where
A1,
. . . , Ak,
Ct
=
{A1,A2,...
C2
=
{ D1,
,
B1,
. . . , Bt~
D2, ,
,Ak,,--,B1,... . . . , Dk2 D1,
,--,Be,}
, -,F1,
. . . , Dk2
,
. . . ,--,Ft.2} F1,
. . . , Ft 2
are atoms. Let us
also assume t h a t A1 coincides with D1. We can then rewrite the two clauses as follows: Cx
=
{A,}UC i
where
Ci
=
{A2,...,Ak,,--B,,...,~Be,}
62
=
{-~dl}UC~
where
C~
=
{91,...,Dk2,~F2,...,~Ft2}
Then the resolution rule that we wish to develop will have to enable us to produce the following clause as a deduction: c
-
ci u c;
In other words, given"
conclusion"
C,
=
{A,}UC~
C2
=
{-~A,} U C i
C~ U C~
-
(C, - {A, }) U (C2 - { ~ A , })
(.)
50
Propositional Logic
We can consider that the two clauses C1 and C2 "collide" because C1 contains literal A1 and C2 literal ~A1. The removal of the cause of the collision leads to clause (,), which resolves the clash. The method owes its name to this resolution. We can now define formally the resolution method. D e f i n i t i o n 1.9.11: Resolution, a formal description" Let C1 and C2 be two clauses and let L be a literal such that L c C1 and (~L) C C2. We can then deduce the r e s o l v e n t D of C1 and C2" D -
(CI-{/})U(C2-{~L})
m 1.9.11
E x a m p l e 1.9.12: given-
C1
=
C2conclusion:
D
{P,Q}
{-,P,--,Q} -
{Q,-'Q}
m 1.9.12
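Definition 1.9.11 translates almost directly into code. In the sketch below (our own illustration: clauses are Python frozensets of literals, a negative literal is written ('not', A), and the function names are invented), resolvents(C1, C2) returns every clause obtainable by resolving the two given clauses; run on Example 1.9.12 it also illustrates Remark 1.9.15(1), since resolving on P and on Q gives two different resolvents.

    def negate(lit):
        return lit[1] if isinstance(lit, tuple) and lit[0] == 'not' else ('not', lit)

    def resolvents(c1, c2):
        # all D = (C1 - {L}) U (C2 - {~L}) for a literal L with L in C1 and ~L in C2
        return {(c1 - {lit}) | (c2 - {negate(lit)})
                for lit in c1 if negate(lit) in c2}

    C1 = frozenset({'P', 'Q'})
    C2 = frozenset({('not', 'P'), ('not', 'Q')})
    print(resolvents(C1, C2))
    # {frozenset({'Q', ('not', 'Q')}), frozenset({'P', ('not', 'P')})}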
If we have a set containing more than two clauses, we can introduce the concept of the resolvent set: D e f i n i t i o n 1.9.13:
Let S - {C1, C 2 , . . . C , } be a set of clauses. Then the set
R(S) - S U { D I D is the resolvent of the clauses C~,Cj c S, i ~ j ,1 <_i,j <_n} is the r e s o l v e n t of S.
m 1.9.13
E x a m p l e 1.9.14: Let S be a set of clauses: S
__
{{A,--,B,~C}, { B , D } , { ~ A , - - D } } 1
2
3
By applying the resolution rule on the pairs of clauses of S, we have:
1 {A, ~B,C}
2
{B, D}
3
{--,A,--,D}
2
{B,D}
3
{~A,~D}
1
{A,~B,-~C}
4
{A,D,~C}
5
{B,~A}
6
{~B,~C,-~D}
And finally,
R(S) -
{{A,~B,~C}, {B,D}, {~A,-~D}, {A,D,~C}, {B,~A}, {~B,~C,~D}} 1
2
3
4
5
6
Resolution
51
We can of course continue with the application of the method, by taking successively the following sets:. R~
= S,
~'(S)= R(S), R*(S)= R ( R ( S ) )
,... , R'(S)
= I~(R'-'(S))
And finally: oo
n*(S)-
UR"(s) =
{ C i l C i e R j(S) and j C N}
n--1
where Ci are the clauses contained in the jth resolvent of S. Note that R*(S) is a finite set if and only if S is finite. Remark
m 1.9.14
1.9.15:
(1) In Example 1.9.12, we chose to apply resolution through the literal P. We could have done this through Q, since Q is obviously also a cause of collision. (2) Intuitively for every resolution application, if a truth valuation verifies C1 and (72 then it also verifies their resolvent D. Likewise, whenever a truth valuation verifies S, it also verifies R(S). (3) Note that the resolvent D of C1 captures less information than C1 and C2. This becomes clear with the following example,
m 1.9.15
E x a m p l e 1.9.16: Let S = {{A, B } , {-~B}} be a set of clauses. Then, by resolution, we have given
resolution
C1 =
{A,B}
C1 :
{-nB}
D =
{A}
By applying resolution on S, we produce D = {A} which contains no information about literal B.
m 1.9.16
We will now give the formal definition of proofs by means of the resolution method.
52
Propositional Logic
D e f i n i t i o n 1.9.17:
Let S a set of clauses. A r e s o l u t i o n p r o o f from S is a finite
sequence of clauses C 1 , . . . , Cn, such that, for every (7/, i C~ c S
C~ c R( { Cj, Ck } ),
or
1 , . . . , n, we have:
1 <_j, k <_ i <_ n
A clause C is p r o v a b l e by r e s o l u t i o n f r o m a set of c l a u s e s S, S t- C
formally
if there is a resolution proof from S, the last clause of which is C
Obviously C c R*(S).
m 1.9.17
E x a m p l e 1.9.18: Find all tile resolvents of the set of clauses S -
{ { A , B } , {--A,~B}}
Let us number all the clauses in S:
1.
{A, B}
2.
{--,A,~B}
3.
{B,-B}
from 1 and 2.
4.
{A,-~B}
from 1 and 3.
Then
RI(S) -
{ { A , B } , {-,A,--,B}, {B,--,B}, {A,--A}}
R*(S) -
R~
Finally
u R (S)
= { { A , B } , {--,A,--,B}, {B,--,B}, {A,~A}} Clauses of the kind {A,--A}, namely A V ~A, are tautologies.
E x a m p l e 1.9.19: The following proposition is given:
S" ((A ~ (B ~ C)) A (A ~ B) A (A ~ --,C)) Prove that S is not verifiable.
m 1.9.18
Resolution
53
Proofi Step 1 9 Determine the
CNF
of S
S ++ ((A -+ (B --+ C)) A ((B -+ C) ~ A) A (A -+ B) A (B -+ A) A (A -+--C) A (--,C -+ A) (-~AV--BVC) A (BVA) A (-,CVA)A (~AVB) 9
J
~
J
1
~
J
2
~
3
J
4
A (~B V A ) A (-,A V -"B)A (C V -"A) -1
~
5
J
~
6
J
7
Step 2" Form the corresponding set of clauses"
S-
{ {-,A,
-~B,C}, {B,A}, {-,C,A},-,A,B}, {-,B,A}, {-~A,-~C}, 1
2
3
Step 3" Determine tile various resolvents"
(see also Definition 1.9.8.
4
5
6
{C, A}} 7
8.
{A}
by 2 and 5.
9.
{--A,--B}
by 2 and 6.
10.
{-~A}
by 4 and 9.
11.
[--1
by 8 and 10.
The literal -"A is eliminated and clause 11
contains no literals). Since tile empty clause belongs to resolvent by 11, the set of clauses S is not verifiable. Thus, proposition S is not verifiable, m 1.9.19
Example 1.9.20: Prove that the proposition --/3 is provable by resolution from the set Q -
Proof:
{ { A , ~ B } } , { - - A , - , B , - , C } , {-"A, -,B, C} }
1. 2. 3. 4. 5.
{A, -"B} -"B, -"C}
{--,A,--,B,C} {-"A, --,B}
by 2 and 3. by 4 and 1.
Tile resolution proof we are seeking is the sequence of clauses 1, 2, 3, 4 and 5. m 1.9.20
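Assuming the R_star sketch given after Example 1.9.14, the refutation of Example 1.9.19 can be reproduced mechanically: the empty clause (the empty frozenset in our encoding) appears in the saturation of the clause set obtained there in Step 2, which by Theorem 1.13.1 means that S is non-verifiable.

    S = {frozenset(c) for c in [
        [('not', 'A'), ('not', 'B'), 'C'], ['B', 'A'], [('not', 'C'), 'A'],
        [('not', 'A'), 'B'], [('not', 'B'), 'A'], [('not', 'A'), ('not', 'C')],
        ['C', 'A'],
    ]}
    print(frozenset() in R_star(S))   # True: S is non-verifiable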
54
Propositional Logic
R e m a r k 1.9.21: The proof in Example 1.9.20 could also have been conducted as follows: We apply resolution on $1 =
{{A,-~B}, y
~
{~,A,~B,~C}, {~A,~B,C}, {B}} ~
1
y
J
~
y
2
= StO{B}
J
3
4
to give 5.
{A}
by 1 and 4.
6.
{~A,-~B}
by 2 and 3.
7.
{-7, A}
by 4 and 6.
8.
Wl
by 5 and 7.
Namely we have S t2 {B} kV1. Since the rule of resolution is a derivable rule of R PL
as we saw in Example 1.9.10, we have also S tO {B} k- [-7. But in that case, we
have, by the Deduction Theorem 1.8.6, that S k- B --+ [-7. By the tautology (B --+ I-1) ~ -~B, we can conclude S t- -~B; in other words ~ B is provable from S.
m 1.9.21
1.10 Soundness and Completeness of Tableaux

In the next sections we will give the basic theorems on soundness and completeness of the proof methods we have presented. We will start with soundness and completeness of Beth-proofs. For each of the following definitions or theorems which contain the notions "Beth-proof" and "logically true", there are dual definitions and theorems corresponding to the dual notions "Beth-refutation" and "logically false", respectively. The formulation of those definitions and theorems is left as an exercise to the reader. We will prove that all Beth-provable propositions are true (soundness) and, conversely, that every logically true proposition is Beth-provable (completeness) [Smul68]. The proofs which we will present are inductive on the length of a proposition, or on the length of a semantic tableau. We have already described the induction scheme for propositions. We will now describe the general induction scheme for semantic tableaux.
Soundness and Completeness of Tableaux
55
D e f i n i t i o n 1.10.1: Induction Scheme for Semantic Tableaux: Let P be a property of some semantic tableaux T, symbolically P(T).
If we
prove that: (a) every atomic semantic tableau has property P (b) if P is a property of a semantic tableau T, and if T' is a new semantic tableau formed by the concatenation of an atomic semantic tableau at the end of one of T's branches, then P is also a property of T' we can then deduce that P is a property of all semantic tableaux, namely that P(T) holds for every semantic tableau T.
m 1.10.1
For a semantic tableau T, the induction is carried out on the l e n g t h of T, namely the number of atomic semantic tableaux in T.
The analogy between the induction scheme for propositions, Definition 1.2.2, and the induction scheme for semantic tableaux is obvious. Example
1.10.2: Let P be the property "The number of nodes of a semantic
tableau is greater than or equal to the number of branches". Proof." (a) In all atomic semantic tableaux, the number of nodes is greater than or equal to the number of branches.
(b)
Let us assume that in the semantic tableau T, the number of nodes is greater than or equal to the number of branches.
We proceed by con-
catenating at the end of one branch of T an atomic semantic tableau. If the concatenation node is t(-,a) or f(-,a), then we have one more node while the number of branches remains the same. For any other type of concatenation node, there will be no more than two new branches and there will be at last two new nodes, according to Definition 1.7.1. Thus, in the new semantic tableau T ~, the number of nodes is once again greater than or equal to the number of branches,
m 1.10.2
We now give several auxiliary definitions and l e m m a t a concerning soundness and completeness theorems for Beth proofs.
56
Propositional Logic
Definition
1.10.3:
Let n be a branch of a semantic tableau T, and let
P = { P 1 , . . . , Pn} be the set of nodes of n, where for each proposition a, either or Pi = f a .
Pi = ta
T h e n the t r u t h
valuation
V agrees with branch
n if
for every Pi E P: Pi = t a ~
Lemma
1.10.4:
V(a) = t
and
Pi=fa
=~ V ( a ) = f
m 1.10.3
Let V be a truth valuation which agrees with the origin o f
a s e m a n t i c tableau, which m e a n s that if the origin is ta, then V ( a ) = t, and if the origin is f a, then V ( a ) = f .
T h e n V agrees with a branch o f the semantic
tableau.
Proof.- By induction: (a) T h e l e m m a obviously holds for atomic semantic tableaux. (b) Let us assume t h a t the l e m m a holds for a semantic tableau T. Let T ~ be the semantic tableau built by c o n c a t e n a t i n g an atomic semantic tableau with an origin X at the end of a branch ~ o f T .
We wish to prove now
t h a t the l e m m a holds for the semantic tableau T ~. Case 1 :
V agrees with all nodes in branch n. T h e n V agrees with the
node X and thus agrees with one of the branches of the atomic semantic tableau with X at the origin. Let nl be the agreement branch of V and the atomic semantic tableau. The branch nnl resulting from the concatenation of ~ and ~1 is the branch of T we are looking for and which V agrees with. Case 2:
V does not agree with ~ but it agrees with the origin of T,
otherwise we need not prove anything. Then, there is another branch nt of T which V agrees with. But nt is also a branch of T. Hence V agrees with n~. Lemma
1.10.5:
m 1.10.4 Hintikka's Lemma:
Let ~; be a non-contradictory branch o f a c o m p l e t e s e m a n t i c tableau. We define a truth assignment, and thus the corresponding truth valuation, as follows: V(A) = t
if t A is a node o f
V(A) = f
if t A i s n o t a n o d e o f
T h e n V agrees with branch ~.
Soundness and Completeness of Tableaux
57
Proof." By induction on the length of propositions: (a) If A is an a t o m and tA is a node of n, then V ( A ) = t and V agrees with n. If f A is a node, since n is not contradictory, t A is not a node and hence
Y(A)=
f.
(b) If t ( a l A a2) is a node of ~, then, since the semantic tableau is complete, this node was used at some point and
tal
tff2 has been concatenated at the end of branch a. Hence, tal and tff2 are nodes of b r a n c h ~. According to the hypothesis, Y ( a l ) = t and Y(a2) = t and so Y ( a l A a 2 ) = t. If f ( a l Aa2) is a node of ~, then, since the semantic t a b l e a u is complete, the node f ( a l A a2) was used at some point and
f~1
I~2
has been concatenated at the end of branch g. But in t h a t case, either
f ( a l ) or f ( a 2 ) are nodes of ~. And therefore, according to the hypothesis, V ( a l ) = f or V ( a 2 ) = f , and hence V ( a l A a 2 ) = f . T h e remaining possible cases analysed in the same way are left as exercises for the reader,
m 1.10.5
Hintikka's lemma provides us in practice with an algorithm for the construction of a counterexample to the claim that a proposition is logically true. Let us assume we are given a proposition a. We then build a complete semantic tableau with an f a origin. If the semantic tableau is contradictory, then the proposition is indeed logically true. If the semantic tableau is not contradictory, then it has at least one non_contradictory branch ~. Hintikka's lemma indicates to us how to construct a truth valuation such that V ( a ) = f based on ~. Let us e x a m i n e this procedure with an example.
58
Propositional Logic
E x a m p l e 1.10.6: Find a truth valuation V such that V((A -+ t3) ++ (AVB)) - f
f ( ( A -+ B) ~ (A V B))
t(A--+ B)
I ( A - ~ B)
f(AvB)
t(AvB)
IA
tB
tA
fA
fA
fB
IB
IB
tA
tB
~3
|
|
~2
~;4
Branches ~2 and ~4 are contradictory. We can use any of the non-contradictory branches ~l and ~3 to apply Hintikka's leinma.
We have, for example, truth
valuations V1 and V3, with: VI(A)-f,
VI(B)-f
and
V3(A)-t,
V3(B)-f
such that-
VI((A--+B) e + ( A V B ) ) -
V3((A--+B) e + ( A V B ) )
-
f
This means that we have found two truth valuations giving the proposition (A ~ B) ~ (A V B) truth value f. T h e o r e m 1.10.7:
II 1.10.6
Soundness Theorem:
If a proposition a is Beth-provable, then it is also logically true. Formally: B
Soundness and Completeness of Tableaux Proof:
59
If proposition a is not logically true, then there is a t r u t h valuation V
such that V ( a ) = f . According to L e m m a 1.10.4, every semantic tableau w i t h / a at the origin has at least one branch n which agrees with V, and consequently is not contradictory (why?). Thus a is not Beth-provable. Remark
1.10.8:
II 1.10.7
In the proof of Theorem 1.10.7 we used in our metalanguage
the third Axiom
Instead of proving ?2 ~ T'I, i.e., if cr is Beth-provable then a is logically true, we proved that ~T'l =~ ~72, in other words if a is not logically true then a is not Beth-provable. This proof method is used quite often and is called an i n d i r e c t or c o n t r a p o s i t i v e proof. (The direct proof m e t h o d would be to prove ?2 ==~ T'I directly.)
Theorem
9
1.10.9:
1.10.8
Completeness Theorem-
I f a proposition a is logically true, then it is also Beth-provable. Formally: ~cr
Proof: V(a)
:=~
I-(r B
If the proposition a is logically true, then for every t r u t h valuation V, = t holds. Let us assume there is no Beth-proof for a. We construct a
complete semantic tableau with an f a origin. This semantic tableau must have a non-contradictory branch. According to L e m m a 1.10.4, there is an adequately defined truth valuation V which agrees with this branch and, hence, with the f a origin. But then V ( a ) = f , a contradiction. Hence, there is a Beth-proof for a. I
1.10.9
It is obvious from the completeness proof t h a t if we tried to construct a Bethproof for a proposition a, (namely a complete semantic tableau with / a at its origin) and failed, t h a t is, when the constructed complete semantic tableau has at least one non-contradictory branch, then, exactly as in L e m m a 1.10.5, a t r u t h valuation can be defined which is a counterexample to the claim t h a t a is logically true. In other words by constructing a semantic tableau o/ a we are guaranteed either to obtain a proof o / a
or a counterexample to the claim that a is true.
60
Propositional Logic
1.11 Deductions from Assumptions The Compactness Theorem A u t o m a t i c t h e o r e m proving from a s s u m p t i o n s and d a t a is an i m p o r t a n t objective in Logic P r o g r a m m i n g . In section 1.5, we discussed the consequences of a set of a s s u m p t i o n s S. A proposition a was called a consequence of S, S ~ a, if every t r u t h valuation which makes all the propositions in S valid also assigns value t to a. We can now define w h a t derivation of a proposition from a set of propositions and assumptions means. D e f i n i t i o n 1.11.1:
Let a , 991, 992, . . . ,
99n,---
be a finite or infinite sequence of propositions, a is said to be a B e t h - d e d u c t i o n of 991,992, . . . , 99,~, . . . if there is a contradictory semantic tableau constructed as follows:
Step O:
we s t a r t with
Step S2n :
we place
f a as the origin.
t99n at tile end of every non-contradictory branch.
Step $2n+1 : we apply the development rules on the semantic tableau T2n of the previous step. m 1.11.1
If the sequence of propositions is infinite, this construction may never finish, a is a Beth-deduction only if the construction finishes and if a contradictory semantic tableau is built. T h e n intuitively, there is no t r u t h valuation which assigns t r u t h value t to all propositions 99,~ used in the construction of the semantic tableau, and which also assigns t r u t h value f to a.
If the initial sequence is finite, then tile
construction will certainly finish, yielding a complete semantic tableau. Let us illustrate the above construction by means of an example. Example
1 . 1 1 . 2 : We wish to prove t h a t proposition A is a Beth-deduction of
propositions -~B and (A V B). We s t a r t w i t h / A the assertions
and we c o n c a t e n a t e successively
t(-~B) and t(A V B), as in the tableau on the facing page.
Deductions from Assumptions
61
fA t(~B)
t(AVB)
tA
tB
|
fB
|
We examine t(AVB). T h e left branch is contradictory and need not be e x a m i n e d further. In the right branch we e x a m i n e t(~B). This branch is also contradictory. Thus A is a Beth-deduction of -~B, (A V B).
m 1.11.2
The two following theorems refer to finite sets of assumptions. T h e y correspond to the soundness and completeness theorems of the previous section. Theorem
1.11.3:
Soundness of Deductions:
If proposition a is a Beth-deduction of of r
r
r
,r
t h e n a is a
consequence
. . . , 9~n. Formally: {Vl
Proof:
r
v:,...,~}
~B ~
~
{v~,v:,... v~} ~
Indirectly:
Suppose a is not a consequence of ~01, . . . , ~on. T h e n there is a t r u t h valuation V with the property v(~)
while V(a) = f.
.....
v(~)
-
t
From L e m m a 1.10.4, each semantic t a b l e a u with an f a origin
has at least one branch which agrees with V, and so is non-contradictory.
Thus
all semantic tableaux with an f a origin are non-contradictory. T h e n a c a n n o t be Beth-provable from {~al,... , qan }.
m 1.11.3
62
Propositional Logic
Theorem
1.11.4:
C o m p l e t e n e s s of Deductions:
I f a p r o p o s i t i o n a is a c o n s e q u e n c e o f qot, 9~2, . . . , ~,~ then a is a B e t h - d e d u c t i o n o f 9~t, ~ 2 , . . . ,
9~n. Formally: {
Proof:
.
}
.
{
. . .
}
o
Let us assume t h a t a is not a B e t h - d e d u c t i o n of ~1, 9~2 , . . . , 9~n- T h e n wc
can c o n s t r u c t a c o m p l e t e n o n - c o n t r a d i c t o r y semantic t a b l e a u with an f a origin, every b r a n c h of which contains the nodes t ~ l , t ~ 2 , . . .
,t~n.
T h e r e thus exists
a b r a n c h of this semantic t a b l e a u which is non-contradictory, and according to L e m m a 1.10.5 there is a t r u t h valuation V which agrees with this branch. Hence we have
Y ( ~ l ) = V(~2) . . . . .
Y(~)
= t
and
Y(a) = f.
However, this is
a contradiction, since 0" is a consequence of ~1, 9~2, .-. , 9~n. Remark
m 1.11.4
1 . 1 1 . 5 : T h e Soundness and C o m p l e t e n e s s T h e o r e m s 1.11.3 and 1.11.4,
respectively, hold even if the sequence {(pk, k C N} has an infinite number of terms,
m 1.11.5
We can now formulate, prove and apply the P L c o m p a c t n e s s theorem. First we will give a basic definition.
Definition 1.11.6: A sequence a l , t r u t h valuation Y such t h a t
0"2,
...
, an
is called s a t i s f i a b l e if there is a
Y ( a t ) = V(0.2) . . . . .
Y ( a n ) = t.
Y is then said to
s a t i s f y the sequence a l , a2, . . . , 0.,,. Example
1.11.7:
9
1.11.6
If A1, A2, . . . is a sequence of atoms, t h e n the infinite sequence
A t , A2, A1 A A2, A3, A1 A A3, A2 A A3, . . .
is satisfiable. (A t r u t h valuation
U such t h a t t = V ( A 1 ) = V ( A 2 ) = V(A3) . . . .
satisfies the sequence). On the
contrary, A1, A2, (A1 --+ A3), (--A3) is a finite sequence which is not satisfiable. Indeed, if we assume t h a t it is satisfiable, then there is a t r u t h valuation V such that: V(A1) In o t h e r words,
=
V(A2)-
V(A3) -- f
while
V(A1 ~ A3)=
V(At) ~
V(-'A3)=
V(A3) = t,
V(At) - - / , and t h a t is a contradiction.
t which means t h a t II 1.11.7
Before we formulate the c o m p a c t n e s s theorem, we will give a definition and a l e m m a which are necessary for tile c o r r e s p o n d i n g proof.
D e d u c t i o n s from A s s u m p t i o n s
63
D e f i n i t i o n 1.11.8:
(1) Let X, Y be two nodes of a semantic tableau. Y is a d e s c e n d a n t
of X if
there is a branch passing through X and Y, and X is closer to the origin than Y. Y is a i m m e d i a t e
descendant
of X if Y is a descendant of
X and there is no other node between X and Y in the branch passing through X and Y. X is said to be s u i t a b l e if it has an infinite number of descendants. (2) A semantic tableau is said to be of a f i n i t e d e g r e e if each of its nodes has only finitely many immediate descendants,
m 1.11.8
Koenig's Lemma:
L e m m a 1.11.9:
A semantic tableau o f finite degree with an infinite n u m b e r o f nodes has at least one infinite branch.
Proof:
Note that tile origin of a semantic tableau with infinitely m a n y nodes
is a suitable node.
Furthermore, if X is a suitable node, then at least one of
its immediate descendants is also a suitable node, since the semantic tableau is of finite degree.
Thus, if X0 is tile node of the origin, if X1 is an immediate
descendant of X0 and a suitable node as well, if X2 is an i m m e d i a t e descendant of X1 and a suitable node as well, i f . . . , and so on, then the branch X o X 1 . . . has an infinite number of nodes.
Theorem
1.11.10:
9
1.11.9
Coinpactness Theorem:
Let al, a2, . . . be an infinite sequence of propositions. I f for every n, the finite sequence al, O'2, . . . ,
Proof:
O'n
is satisfiable, then the sequence al, or2,.., is satisfiable.
We will describe inductively the construction of a semantic tableau which
may be infinite. Step 1 :
Start with f ( - ~ a l ) at the origin.
Step 2n :
Place t(an) at tile end of every non-contradictory branch of the T2,-1 tableau constructing thus tableau T2,.
Step 2n-t- 1 : Select the non-used node of T2n which is furthest left and
apply the development rules of Definition 1.7.1.
64
Propositional Logic Let us s u p p o s e t h a t t h e c o n s t r u c t i o n finishes if, at an o d d step 2n + 1, all
b r a n c h e s are c o n t r a d i c t o r y .
(If t h e y are not c o n t r a d i c t o r y , wc have to continue tile
c o n s t r u c t i o n with the next s t e p 2n + 1).
If t h a t h a p p e n s , we have a c o n t r a d i c t o r y
s e m a n t i c t a b l e a u with origin f ( - ~ a l ) a n d w i t h a l , a2, . . . ,
a n as hypotheses. By
T h e o r e m 1.11.3, we know t h a t -~al is a c o n s e q u e n c e of a2, a3, . . . , an, and hence or1, . . . , an is not satisfiable, which c o n t r a d i c t s the a s s u m p t i o n s . T h e c o n s t r u c t i o n thus never finishes, a n d so results in a s e m a n t i c t a b l e a u w i t h an infinite n u m b e r of nodes. T h i s s e m a n t i c t a b l e a u is of finite degree, since each one of its n o d e s has finitely m a n y direct d e s c e n d a n t s .
By a p p l y i n g K o e n i g ' s
L e m m a , we t h u s have a s e m a n t i c t a b l e a u w i t h one infinite b r a n c h a, where every tai is a node of a for every i = 1, 2, . . . . We now define a t r u t h v a l u a t i o n V such t h a t : V(A) = t
,,
A is a p r o p o s i t i o n a l s y m b o l a n d t A is a n o d e of a.
T h e n from H i n t i k k a ' s l e m m a we can d e d u c e t h a t V ( a i )
-
t for every i, a n d
a l , a2, . . . , a,~ is satisfiable,
m 1.11.10
1.12 Soundness and Completeness of Axiomatic Proofs We now present the s o u n d n e s s , c o m p l e t e n e s s a n d c o m p a c t n e s s t h e o r e m s of t h e a x i o m a t i c m e t h o d . T h e proofs of these t h e o r e m s are not given here, but the r e a d e r can find t h e m in m o s t classical logic books, such as [Klee52, Rasi74, RaSi70]. Theorem
1.12.1:
a consequence o f S.
is provable from a set S o f propositions if and only if a is Formally: S F- a ~:~ S ~ a
Corollary
1.12.2:
a is provabh; from S = 0 r
Theorem
1.12.3:
Compactness:
II 1.12.1
a is logically true.
II 1.12.2
S is a satisfiable set o f propositions if and only if every finite subset of S is satisfiable.
II 1.12.3
Soundness and Completeness of Resolution
65
1.13 Soundness and Completeness of Resolution In the following, we will deal with the soundness and completeness theorems of the resolution method. Theorem
1.13.1:
Soundness and C o m p l e t e n e s s of Resolution:
S is a non-verifiable set of clauses if and only if R * ( S ) contains the e m p t y clause. Formally: D E R*(S)
** S is non-verifiable,
m 1.13.1
For the proof of the soundness t h e o r e m we first need to prove an auxiliary lemma. Lemma
1.13.2:
/f
{C1,C2} is a
verifiable set of clauses and if C is the resolvent
of C1 and (72, then C is verifiable. Proof." For some l i t e r a l p , we have C
-
C~t0C[
or V(--,p) = t.
.
C1 -
{pIUC[
, C2 -
If V is a t r u t h valuation verifying {C1,C2},
Let us suppose V(p) = t.
V(C~) = t, and so V ( C ) = t.
{-,p} U C ;
, and
then V(p) -
Since V(C2) = t and V(-,p) = f ,
t
then
If V(-,p) = t, we simply replace (72 with C1. m 1.13.2
Theorem
1.13.3:
If [-1 c R * ( S ) ,
Soundness of Resolution:
then S is non-verifiable. Formally: [3 C R * ( S ) =:> S non-verifiable.
Proof:
Let C 1 , . . . , Ck be a proof from S by resolution. T h e n with the inductive
use of the above lemma, we prove t h a t any t r u t h valuation verifying S also verifies C1. If the conclusion is the e m p t y clause, then Ck coincides with [3. Since the e m p t y clause [3 is non-verifiable, S is non-verifiable,
m 1.13.3
To prove the completeness of resolution, we will use the following auxiliary
lemma. Lemma
1.13.4:
Let S be a non-verifiable set of clauses in which only the
literals A1, A2, . . . , Ak occur.
Let S k-1 be the finite set of clauses which are
provable by resolution, and in which the only atomic propositions occurring are A1, A2, . . . , A k - 1 .
Then S k-1 is non-verifiable.
66
Propositional Logic
Proofi
Let us suppose that S k-1 is verifiable. Then there is a t r u t h valuation V
on { A 1 , . . . , Ak-1} which verifies S k-1.
Let 1/1 and 1/2 be the two extensions of
V on {A1,... ,Ak} such that: VI(Ak)-
t
and
V2(Ak)- f
Since S is non-verifiable, there is a clause C1 c S which is not verifiable by V1. However, then -~Ak E C1, since otherwise there are two possibilities: (a) Ak ~ C1.
But then, 1/1 does not contain Ak, C1 c S k-1 and 1/1 verifies C1, contradiction.
(b) Ak C C1.
Then V1 verifies S k- 1 and V1 (Ak) = t.
1/1 therefore
verifies C1, contradiction. We can also conclude, in a similar way, that there is a clause (72 C S, which is non-verifiable by V2, such that Ak C (72. Let us consider D = (61 - {--Ak}) t2 (62 - {Ak}). D is the resolvent of C1, C2 and D E S k-1 . Hence, V verifies D. In other words, one of the following hohts:
(a')
V ve.rifies C1 - {-~Ak}. But then 1/1 verifies C1, contradiction.
(b')
V verifies C 2 - {Ak}. But then V2 verifies C2, contradiction.
Then S k-1 is non-verifiable. Theorem
1.13.5:
I1 1.13.4
Completeness of Resolution:
If S is a non-verifiable set of clauses, then the, e m p t y clause is provable by resolution from S.
Formally: S non-verifiable
Proofi
~
E] C R* (S).
By Definition 1.9.3, S contains a finite number of literals. Let A 1 , . . . , Ak
be these literals. By applying the previous le~nma once, we conclude that S k-1 is non-verifiable. Applying the same lemma k times we conclude that S is nonverifiable, since no atomic propositions occur in its clauses and S O c_ R* (S). Hence, [3 C S O c_ R*(S).
Remark
I1 1.13.5
1.13.6: If a is a proposition provable by resolution from tile set S of
propositions, then σ is provable from S (Definition 1.8.5). We then have, by Theorem 1.12.1, that σ is also a consequence of S.
1.14.2 Determine which of the following expressions are propositions and which are not:

    (a)  (A ∧ B) ∨ ¬
    (b)  ((A ∧ B) ∨ (¬C)) → D
    (c)  (A ∨ B) ∨ ¬C
    (d)  A ↔ B) → (A ∨ B)
    (e)  (A ∧ B) → A
    (f)  (A1 ∧ A2) ↔ ¬A3
Solution:
(a)
It is not a proposition. V is followed by a logical connective without any propositional symbol.
(b) (c)
It is a proposition according to Definition 1.2.1 (proof: see exercise 1). It is not a proposition. V is followed by a connective and not by a proposition.
(d)
It is not a proposition. There is a defective use of parentheses.
(e)
It is a proposition according to Definition 1.2.1 (proof: see exercise 1).
(f)
It is a proposition according to Definition 1.2.1 (proof: see exercise 1).
1.14.3
Represent the following propositions by atomic symbols and use those symbols to create compound propositions. (a)
"2 divides 12" "3 divides 9" "2 divides 11"
(b)
"a square is a parallelogram" "a rhombus is a parallelogram" "the diagonals of a parallelogram bisect"
(c)
"George is a father" "George has a child" "Mary is a father" "Mary has a child"
Exercises
69
Solution" (a)
Consider
A 9
"2 divides 12"
B"
"3 divides 9"
C 9
"2 divides 11"
A V B , A A B --+ C , (A V B) A (--,C), and A --+ (B A --,C) are examples of compound propositions using the symbols A, B and C.
(A A B ~ C) is a
proposition; however, intuitively, the s t a t e m e n t of t h a t proposition does not seem correct. (b) and (c) (:an be solved similarly. 1.14.4 The following atomic propositions are given: A1 "
"3 is a prime number"
A2 "
"3 divides 15"
A3 9
"3 divides 2"
A4 9
"3 divides 13"
(a)
Determine a valuation F for the above propositions.
(b)
Let WF be the t r u t h valuation extending F .
Calculate
WF((A1 A A2) -+ (A3 V A4)) Solution:
(a)
(b)
Let F be a valuation such that:
F(A1)- f
F(A2)-t
F(A3)- f
F(A
) -
t
WF[(A1 A A2) --+ (A3 V A4)] = WE(A1 A A2) ~'* WF(A3 V A4) = [WE(A1) n WE(A2)] -"* [WE(A3)U WE(A4)] =
[F(A1)[-1F(A2)] ~ IF(A3) U F(A4)]
=
(fRt)~
=f~t-t
(fUt)
70
Propositional Logic
1.14.5 Prove that the following propositions are tautologies: (a)
(A/x--A)-+ A
(b)
(A-+ B) V (A ~
(c)
A ~--,-,A
(d)
[(A A B) ~ C] ++ [A --+ (B -+ C)I
,B)
Solution:
(a)
Consider the valuation W ( A ) - t, and the truth valuation W t extending W. Then:
W'[(A A-,A)--+ A ] -
W ' ( A A-,A) ~ W'(A)
=
[W'(A)FI W'(-A)] ~ W'(A)
=
[W'(A)FI (~ W'(A))]--~ W'(A)
= [ W ( A ) n (,-~ W(A))] ~ W(A) =
[tn(,~t)l~t
=
[t I q
f]
.,,.-,
= f-,-~t -
t t
Consider the valuation W ( A ) - f , and tile truth valuation W' extending W. Using the same method we can determine that"
W'[(A A - . A ) - + A]
-
t
Then for every truth valuation, proposition (a) is true, hence (a) is a tautology. (b), (c) and (d) ca,, be solved similarly. 1.14.6 Prove that the following propositions are contradictions (a)
A A -A
(b)
(~A
(c)
(A -+ B) A (B --+ C) A (A A -~C)
(d)
-,(A A B) A (A -+ B) A A
V
(B/x--B)) ~ A
i
9 0~
o
Then a ∈ Con(S1 ∪ S2). In other words:

    Con(S1) ∪ Con(S2) ⊆ Con(S1 ∪ S2)

However, Con(S1 ∪ S2) ⊆ Con(S1) ∪ Con(S2) does not hold true:
Consider S1 = {A}, S2 = {A → B}. Then for all valuations W validating both A and A → B, we have by definition that W also validates B. Then

    B ∈ Con(S1 ∪ S2)    (1)

However, B ∉ Con(S1); for example, consider a valuation W1 such that W1(A) = t and W1(B) = f. Moreover, B ∉ Con(S2); for example, consider a valuation W2 such that W2(A → B) = t, while W2(A) = f and W2(B) = f. Then B ∉ Con(S1) ∪ Con(S2), and by (1)

    Con(S1 ∪ S2) ⊄ Con(S1) ∪ Con(S2)

Hence claim (a) is not true.
(b) We will give a counterexample. Let us assume that S1 ∩ S2 = ∅. By Corollary 1.5.4, we have Con(S1 ∩ S2) = Con(∅), which is the set of all tautologies. The other way round, let us assume that S1 = {A} and S2 = {B}. Then A ∨ B belongs to Con(S1) as well as to Con(S2), since every truth valuation validating all the elements of S1, or all the elements of S2, also validates A ∨ B. However, A ∨ B is not a tautology (why?). Hence claim (b) is not true.

1.14.14 Prove that if S ∪ {A} ⊨ B, then S ⊨ A → B.

Solution: Suppose S ∪ {A} ⊨ B. Then every truth valuation W validating all the propositions of S ∪ {A} also validates B. Let W be a truth valuation for which W(C) = t holds for every C ∈ S. If W(A) = f, then W(A → B) = W(A) ⇝ W(B) = f ⇝ W(B) = t. If W(A) = t, then W validates all the propositions of S ∪ {A}, hence W(B) = t and W(A → B) = t ⇝ t = t. In both cases W(A → B) = t. Hence S ⊨ A → B.
1.14.15 Given that S is a consistent set of propositions, prove that

    S ∪ {φ} inconsistent  ⟺  S ⊨ ¬φ

Solution:
(i) S ∪ {φ} inconsistent ⟹ S ⊨ ¬φ. Let us assume S ∪ {φ} is inconsistent. Since S is consistent, there is at least one truth valuation W such that, for every σ ∈ S, W(σ) = t holds. If W(φ) = t, then S ∪ {φ} is consistent, which contradicts our assumption. Hence W(φ) = f, which means that W(¬φ) = t for every truth valuation W such that, for every σ ∈ S, W(σ) = t holds. Hence S ⊨ ¬φ.
(ii) S ⊨ ¬φ ⟹ S ∪ {φ} is inconsistent. Let us assume S ⊨ ¬φ. Then for every truth valuation W such that, for every σ ∈ S, W(σ) = t holds, we also have W(¬φ) = t, and therefore W(φ) = f. This is to say that, if the elements of S take value t, then φ takes value f. But then S ∪ {φ} is inconsistent.
1.14.16 Write in PL the propositions of the following sets of propositions and determine which of the sets are consistent:

S1 : The witness was scared or, if John committed suicide, a note was found. If the witness was scared, then John killed himself.
S2 : Love is blind and happiness is within reach, or love is blind and women are more intelligent than men. If happiness is within reach, then love is not blind. Women are not more intelligent than men.

Solution: Formalise the propositions occurring in S1 and S2 and construct the truth tables of S1 and S2. Use Definition 1.5.5.
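The check can also be carried out mechanically. The sketch below is not part of the original solution; it assumes the formalisation A = "the witness was scared", B = "John committed suicide", C = "a note was found", so that S1 = {A ∨ (B → C), A → B}, and simply enumerates all valuations.

    from itertools import product

    def implies(p, q):
        return (not p) or q

    # Hypothetical formalisation of S1 (an assumption, not the book's own symbols):
    #   S1 = { A or (B -> C),  A -> B }
    def s1_holds(a, b, c):
        return (a or implies(b, c)) and implies(a, b)

    # S1 is consistent iff some valuation makes every proposition of S1 true.
    models = [v for v in product([True, False], repeat=3) if s1_holds(*v)]
    print("S1 consistent:", bool(models))   # True, e.g. for A = B = C = False

The same brute-force enumeration, applied to a formalisation of S2, settles its consistency as well.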
1.14.17 If S1 ⊆ Con(S2), then Con(S1 ∪ S2) = Con(S2).
1.14.18 Let Q be a set of propositions and S a consistent subset of Q. Then S is said to be a maximal consistent set if, for every S′ ⊆ Q with S ⊂ S′, S′ is inconsistent. Prove that a consistent set S is a maximal consistent set if and only if, for every proposition a ∈ Q, only one of the following two propositions holds:

(i) a ∈ S
(ii) ¬a ∈ S

Solution: Let us assume that S is a maximal consistent set. Consider a ∈ Q such that a ∉ S. Then S is a proper subset of S′ = S ∪ {a}, which is inconsistent by definition of S. So if a truth valuation W validates all the propositions of S, it must refute a. Since W(a) = f, we have W(¬a) = t. However, ¬a belongs to S; otherwise S would be a proper subset of S″ = S ∪ {¬a}, and we would have W(φ) = t for every φ ∈ S, as well as W(¬a) = t. In other words, S″ would be consistent, which contradicts S being maximal consistent. In the same way, we can prove that if ¬a ∉ S then a ∈ S.
Let us assume that S is consistent, and that for every a ∈ Q, only one of (a ∈ S) and (¬a ∈ S) holds. We will prove that S is a maximal consistent set. Let S′ be a set such that S ⊂ S′. Consider φ ∈ (S′ − S). We then have φ ∈ S′, φ ∉ S, and ¬φ ∈ S (why?). However, S is consistent. Hence, for every truth valuation W validating all the propositions of S, we have W(¬φ) = t, or equivalently W(φ) = f. But in that case, there is no truth valuation validating all the propositions of S′, since ¬φ ∈ S ⊂ S′ and φ ∈ S′. Hence, since every S′ such that S ⊂ S′ is inconsistent, S is maximal consistent.
1.14.19 The following propositions are given:

    A : "KLMN is a parallelogram"
    B : "the diagonals of KLMN bisect"
    C : "the opposite angles of KLMN are equal"
    D : "the opposite sides of KLMN are equal"

Determine whether S = {A, A → B, A ↔ C, A → (B ∧ D)} is a consistent set.
1.14.20 There is no logical connective o connecting two PL propositions, which is different from | and ↓ (Example 1.6.5) and such that {o} is an adequate set.

Solution: Let o be a logical connective such that A o B is a well-formed PL formula. There are as many possibilities for the truth table of A o B as there are combinations of the two truth values in groups of four:
    A  B | o1  o2  o3  o4  o5  o6  o7  o8
    t  t |  t   f   t   f   t   f   t   f
    t  f |  t   t   f   f   t   t   f   f
    f  t |  t   t   t   t   f   f   f   f
    f  f |  t   t   t   t   t   t   t   t

    A  B | o9  o10  o11  o12  o13  o14  o15  o16
    t  t |  t    f    t    f    t    f    t    f
    t  f |  t    t    f    t    t    f    f    f
    f  t |  t    t    t    f    f    t    f    f
    f  f |  f    f    f    f    f    f    f    f
The meaning of oi, 1 ≤ i ≤ 16, will appear more clearly in:

    A o1 B  ≡ A ∨ ¬A        (truth)
    A o2 B  ≡ ¬(A ∧ B)      (A | B in Example 1.6.5)
    A o3 B  ≡ A → B
    A o4 B  ≡ ¬A
    A o5 B  ≡ B → A
    A o6 B  ≡ ¬B
    A o7 B  ≡ A ↔ B
    A o8 B  ≡ ¬(A ∨ B)      (A ↓ B in Example 1.6.5)
    A o9 B  ≡ A ∨ B
    A o10 B ≡ ¬(A ↔ B)
    A o11 B ≡ B
    A o12 B ≡ ¬(A → B)
    A o13 B ≡ A
    A o14 B ≡ ¬(B → A)
    A o15 B ≡ A ∧ B
    A o16 B ≡ A ∧ ¬A        (contradiction)
The above truth table also contains logical connectives which refer to only one of the PL propositions, such as o4 and o6. The exercise thus provides us with a more general conclusion involving all the PL connectives. For every o ∈ {o1, ..., o16} we have:
If A o B takes value t when A and B take value t, then {o} is not a sufficient set, since the negation ¬ cannot be expressed by means of o: whatever the number of occurrences of o in the proposition expressing the negation, we will always obtain truth value t and never f, namely ∼t, the value of ¬A, ¬B. Thus o ≠ o1, o3, o5, o7, o9, o11, o13, o15.
If A o B takes value f when A and B take value f, then {o} is not sufficient, since it can only provide truth value f and never t, which is the value of ¬A, ¬B. Thus o ≠ o9, o10, o11, o12, o13, o14, o15, o16.
If o = o4, then {o} is not sufficient since ¬¬A ≡ A. We will always have A or ¬A, no matter how many times the negation recurs, and we will thus not be able to express, for instance, ∧ and ∨. Hence o ≠ o4, and for the same reason, o ≠ o6.
If o = o2 or o = o8, then {o} is sufficient, as we saw in Example 1.6.5.
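This case analysis can also be verified by brute force. The sketch below is not from the book; it represents each truth function of (A, B) by its column of four values and computes everything expressible from a single connective by composition, so that {o} is adequate exactly when all 16 binary truth functions are reachable.

    ROWS = [(True, True), (True, False), (False, True), (False, False)]

    # The 16 binary connectives of Exercise 1.14.20, by their defining formulas.
    CONNECTIVES = {
        "o1":  lambda a, b: a or (not a),     "o2":  lambda a, b: not (a and b),
        "o3":  lambda a, b: (not a) or b,     "o4":  lambda a, b: not a,
        "o5":  lambda a, b: (not b) or a,     "o6":  lambda a, b: not b,
        "o7":  lambda a, b: a == b,           "o8":  lambda a, b: not (a or b),
        "o9":  lambda a, b: a or b,           "o10": lambda a, b: a != b,
        "o11": lambda a, b: b,                "o12": lambda a, b: not ((not a) or b),
        "o13": lambda a, b: a,                "o14": lambda a, b: not ((not b) or a),
        "o15": lambda a, b: a and b,          "o16": lambda a, b: a and (not a),
    }

    def expressible(op):
        """All truth functions of (A, B), as 4-tuples of values over ROWS,
        built from the projections A, B using only the connective op."""
        funcs = {tuple(a for a, b in ROWS), tuple(b for a, b in ROWS)}
        while True:
            new = {tuple(op(f[i], g[i]) for i in range(4)) for f in funcs for g in funcs}
            if new <= funcs:
                return funcs
            funcs |= new

    adequate = [name for name, op in CONNECTIVES.items() if len(expressible(op)) == 16]
    print(adequate)   # ['o2', 'o8']  -- only the stroke and the dagger

Running it reports that only o2 and o8 give an adequate singleton, in agreement with the argument above.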
Next assume that a differs from a′, a occurs in φ, and φ differs from φ′.
(1) The number of logical connectives occurring in φ is 0. Then φ is an atomic proposition, and φ and a must be identical; φ′ is thus a or a′. Then we have ⊢ φ ↔ φ′.
(2) Let us assume the theorem holds true if φ has n logical connectives.
(3) We need to prove that if (2) holds true, then the theorem holds if φ has n + 1 connectives. The following cases must be considered separately:
(i) φ is the proposition ¬φ1. We know that ⊢ φ1 ↔ φ1′, where φ1′ is derived from φ1 by substituting none, one, or more than one occurrence of a with a′. By the completeness theorem, we know that (C ↔ D) → (¬C ↔ ¬D) is a tautology derivable in our axiomatic system.
qa is the proposition ~x o qo2, where o C {A , V , -+ , +~}. ~1 e+ qo~ and I- ~2 e+ ~ ,
Then
and by the tautology C -+ (D -+ (C A D)),
completeness, and the theorem of substitution of equivalences we obtain !
!
The result follows by the tautology (C1 ++ Da) A ((5'2 ++ D2) -+ (C1 0 C2 ++ Dx 0 De)
1.14.23 Assuming that k (A A B) ~ - ( A -+ -~B) and -~(A v B) +-~ ( - A -+ B), derive: (a)
k ( - B -+-~A) ~ (A -+ B)
(b)
k (-,A --+ B) e+ (~B --+ A) and k (A --+ B) ++ (B --+ - A )
(c)
t- (A A B)--+ A
(d)
k (A A - ~ A ) - ~ B
(e)
k A V -~A
Exercises
83
Solution: (a) and (b). We use Axiom (3), tile Law of Negation, and the Theorem of substitution of equivalences, to prove -4 and then +--. (c) ~--~A -4 (B -4 ~A)
(1)
by Axiom (1).
-~A -+ (A -4 -~B)
(2)
by (1), (b), and substitution of equivalences.
(A -4 -~B) -4 A
(3)
by (2), (b), and substitution of equivalences.
But according to tile assumptions, this is what we are seeking, namely
( A A B ) -4 A. We usually need to proceed analytically in order to determine tile steps of the derivation: we first try to determine (3), then determine (2) from (3), and continue until a proposition known to be derivable, such as (1), is found. 1.14.24 Using resolution prove that tile following sets are not verifiable:
(a) S = {-~AvBvD, -~BvDvA, --,DvC, -~DvA, AvB, Bv-~C, -~Av-~B} (b)
S = {-~AvB, ~ B v C , ~ C v A , A v C , ~Av-~C}
(c) S - - { A v B v C ,
A v B v - ~ C , A v - ~ B , -~Av-~C, -~AvC}
Solution:
(c) Consider
(1)
AVBVC
(2)
AVBV-~C
(3)
A v-~B
(4)
-~AV-~C
(5)
--1AVe
(6)
AVB
by (1) and (2) with resolution.
(7)
A
by (3) and (6) with resolution.
(8)
--,A
by ( 4 ) a n d (5).
(9)
1--1
by ( 7 ) a n d (8).
Hence S is not verifiable.
~ > j
$
J
~
j
>
>
~
oo
9
~.,,~ 9
9
9
~ ~" o~
~
,
9
l--I
p__,.,
~
r
~ .
~" ,<
J
r
~
m" ,<
~
. ~
~
m" ,<
J
<
J
.-Y .-Y .-Y .-y
.
G' ,<
[-I
.
J
<
J
~
.
~
<
J
J
<
J
,--,
<
J
;~
--
~
<
~
~ >
$
"~
j
~ < a
>
< ~ <
~
~
~ o ~
:
-~
< C~ ~ $
J
J
"~
j ~ <
<
<
J
~
"--
< J ~ '~
.~
,--,
> ~ ~ $
J J ~
~
I~
~
;~
o ~
j
..
$
~,
c~
J.
J
.,
$
$
> ~
~
o ~ ~ >
~
~ o
o
~-
,~
o
~ o
~
8
9
T ~
-,1-
..
<
~
~ -.
"
C~
.~
.i~
,-,-
<
9
--.
~
= -
"~ ~.
"~
Exercises
85
Solution: (a)
We first need to determine the [~(AAB/X~C)]
~
CNF
of -~(A A B A -~C):
[-~AV-~BV~-~C] e+ [ ~ A v - ~ B v C ]
Set-theoretically: {{¬A, ¬B, C}}.

1.14.27 Which of the following sets of clauses is satisfiable and why? For every satisfiable set, determine a truth valuation which satisfies it. (a)
{ { A , B } , {-~A,~B}, {~A,B}}
(b)
{{-~A}, {A,-~B}, {B}}
(c)
{{A}, Vl}
(d)
{I--l}
Solution:
(b) The corresponding CNF
is -~A A (A V -~B) A B. This proposition is not satisfiable. If it had been satisfiable, there would be a truth valuation W such that W ( B ) = t (in order to satisfy {B}) and W(A)= f (in order to satisfy {-~A}). But then W would not satisfy {A, -~B}.
(d)
CNF
□ is not satisfiable by its definition.
1.14.28
Determine the resolvent R(S) of the set S in the following cases: (a)
S = {{A,-,B}, { A , B } , {-~A}}
(b)
S = {{A}, {B}, {A,B}}.
Solution:
(a)
S = {{A,-~B}, { A , B } , {~A}}, then R ( S ) = S U { { A } , 1
2
3
(b) R(S) = S. There are no resolvents.
1,2
{-~B}, {B}}. 1,3
2,3
~ .
II
<
"'1
~..,o
<
i,O
~"
Exercises f [ ( A --+ B ) ~
87
(A V B)]
t ( A --+ B)
f ( A --+ B)
f(AVB)
t(AVB)]
fA
tA
fB
fB
fA
tB
tA
tB
|
|
W [ ( A ~ B) e+ (A V B)] =
W ( A --+ B) ~.~ W ( A V B ) [W(A) ~ W(B)] ~-~ [W(A)[_J W ( B ) ]
---- [ f ~
f] ~
[fk_Jf]
--t~f--f
1.14.31
If A and A → B are Beth-provable then B is also Beth-provable.

Solution: A is Beth-provable and thus logically true. Then for every truth valuation W we have W(A) = t. Then A → B is logically true and therefore, for every truth valuation W, we have W(A → B) = t. Then t =
W(A~B)
-
W(A)~W(B)
-
t~W(B)
But then W ( B ) - t for every t r u t h valuation W. Hence B is Beth-provable.
88
Propositional Logic
1.14.32
{AvB,
Prove:
A-+C,
B-+D}k-CVD
(b) { A -+ ( B -+ C ), =D V A, B } k D -+ C Solution:
(a)
Consider:
S = {A V B , A - + C ,
f(CVD)
B -+ D}
By tile soundness and completeness timorem we have:
~CVD Hence
we
~ just
fC
Sk-CVD need
to
prove
S ~ C v D, or equivalently, that
fD
every truth valuation validating all tile propositioi~s of S, also validates
t(AVB)
CvD.
tA
tB
t(A -+ C)
t(A -+ C)
fA
tC
IA
tC
|
|
t(B -+ D)
|
fB
tO
|
|
We thus construct a semantic tableau with f ( C v D) at tile origin by concatenating successively the nodes t(A V B) , t(A -+ C) and t ( B -+ D). If C v D is verified by all truth valuations which validate the propositions of S, we will have constructed a contradictory semantic tableau.
m~
t _L
_L q
0
>
~
~
~
t
~m~ ~
,-~
..
,.~
~
-~
~
~
~
9~,
~
F
<
~J X
~
~
|
|
~f
X
90
Propositional Logic
1.14.34
Prove that if S U { A } ~ - B
then S t - - A - - + B .
Solution:
From the soundness and completeness theorem, we have:
SU{A}~B
S U{A} ~ B .
r
Let us assume S U {A} ~ B. Then for every truth valuation W validating the propositions of S, we have W (B) = t if W (A ~ B) = W(A) ~ W (B) = t, since B ) can take truth value f only if W ( B )
W(A ~
thenS~A~B.
HenceSU{A} ~B
= f and W ( A ) = t. However,
=:v S ~ A ~ B .
By the completeness theorem we have S ~ A ~ B =:v S ~ A -~ B. 1.14.35
Let al, a 2 , . . ,
be an infinite sequence of propositions. If for every i, ai+l -+ ai
is Beth-provable, while ai -+ ai+l is not, prove that there is no proposition T such that (a) and (b) both hold true: (a)
For every i, r -+ ai is Beth-provable
(b)
{a~, a 2 , . . . }
~ r.
Solution:
(a)
We will prove that if for every i, r --+ ai is Beth-provable, then we have: {al, a 2 , . . . } ~ T,
or equivalently,
(--T, al, a 2 , . . . }
is consistent; in
other words, there is a truth valuation W such that: =
w(o,)=
. . . . .
t
Since ai -+ ai+l is not Beth-provable for every i, there is a truth valuation W such that, for every n E N, we have: W ( t 7 n --+ tTn+l)
=
f
or
=
f.
Then W(an)
= t
and
W(an+l)
= f
(1)
Exercises Since o'i+ 1 ---+ O'i,
91
for all i E N, is Beth-provable, then f o r all truth
valuations W ' and for all n E N, W'(crn+t ~ a,~) = t; which is equivalent
to w'(~,~+,)..~ w ' ( ~ , , ) :
t, h+nc~:
Wt(o-n+l)
f
=
or
W'(a,~)
= t
And in a similar way: W'(a,,)
or
= f
W'(a,~_,)
= t
(2) W'(a2) For W ' =
= f
or
W, we have by (1) that
W'(al)
W(o'n+l)
--
= t
f and W(crn) = t. Then
by (2) we have:
w(~)
= w(~_~)
.....
t
(3)
For every i, r --+ ai is Beth-provable. Then: t = W(T--~cr,,+,)=
i.e., W ( r ) = f .
W(7)~
W(an+l)=
W(v)-c-,f
Then: W(-~T)
= ..~ W ( T )
(4)
= t
Then there, exists a t r u t h valuation W su(:h t h a t (1), (3) and (4) hold true. So for all n c N, the set {--T, e l , a2, . . . , a,~} is consistent. (b)
Suppose
{al, a 2 , . . . }
~ T.
We will prove t h a t there exists a t r u t h
valuation W, and a k c 51, with the p r o p e r t y t h a t W ( 7 ~ ak) = f , or equivalently, W ( T ) Since {al, a 2 , . . . } the sequence
t and W ( a k ) - f .
~ 7, the set {~T, aI, a 2 , . . . }
-~T, a l , or2, . . .
is not satisfiable.
is inconsistent.
Then
From the compactness
theorem, there is an n r N, such t h a t the sequence -~T, a l , a2, . . . , an is not satisfiable, and the set {--17, a l , a2, . . . , a n } is inconsistent. However, there is a t r u t h valuation W with the p r o p e r t y that:
w(~
-~ ~n+~) = f
92
P r o p o s i t i o n a l Logic
Then W(cr,~)~ W(crn+l)W(a,~)
f,
-
t
i.e., and
W(crn+l ) -
(~)
f
Since an -+ a , , _ l is Beth-provable,
t -
w(+,--+
t h a t is, W(cr,~) - f
~,,_,)
or W ( a n - 1 ) -
-
w(~.)-~
w(~,,_~)
t.
From (5) we obtain: W(an_l)
-
t
(6)
In the same way, by tile fact t h a t ai ~ a i - ~ ,
1, are Beth-
1 < i < n-
provable, we have:
w(~_~)
.....
w(~,)-
t
(7)
By (5), (6) and (7), the t r u t h valuation W validates all tile ai, 1 < i < n. But {-~T, a l , a 2 , . . . , a n } words W ( T ) -
t.
W(T ~
Hence 7 --+ a n + l
is not consistent.
Then W(~T)
-- f , in ()tiler
T h e n by (5) we have: O'n+l)
--
W(T) ~
W(o-n+l)
is not Beth-provable.
-
t -r
f
-
f
II Predicate Logic
Things taken together are whole and not whole, something which is being brought together and brought apart, which is in tune and out of tune: out of all things can be made a unity, and out of a unity, all things. Heraklith
2.1 Introduction

In the first chapter, we gave an analytic description of the PL language, a formal language by means of which we can express simple as well as compound propositions. Furthermore, we examined methods of derivation of conclusions from sets of PL propositions. Even though the PL language is quite rich, it only allows a limited formulation of properties and relations. Let us take as an example the following proposition in the English language:

    S : "If George is human, then George is mortal".

If A denotes the proposition "George is human" and B denotes "George is mortal"
then, within tile P L context, S becomes:
S:
A--+B
S expresses certain qualities of a particular person, namely George. The following question arises: how can we express similar properties of other people, such as Socrates or Peter for example?
One solution would be to introduce as many
propositional symbols as there are different people! However, this is impossible in practice. In this chapter we will describe the language of Predicate Logic, which provides a solution to such problems. The new element of this language is the introduction of variables and quantifiers.
The Variables

If we consider the proposition S and if we assume that x is a variable which takes values from the set of the names of all the people, for instance x = George
or
x = John
or
x ....
and if "Human" and "Mortal" are symbols denoting properties, then we can represent the general relation between these properties by: P:
Human(x) --+ Mortal(x)
Such representations as "Human(x)" or "Mortal(x)" which express general relations as properties are called p r e d i c a t e s .
A f o r m u l a is a representation like P
consisting of predicates connected by logical connectives. The substitution of the variable x by the constant "George" converts P into the formula S′:
Human(George) --+ Mortal(George)
Furthermore, if the variable x takes the value "Socrates", the result will consist of a new formula representing the relation between Socrates and the property of being mortal. "John", "George" and "Peter" are constants in our new formal language.
Introduction
95
In general, the correspondence between the variables and the constants on the one hand and the symbols of the English language on the other can t)e intuitively represented by:
English language
Formal language
pronoun proper name
~ ~ ~ ~
variable constant
The special symbols "Hu~mn" and "Mortal" are called p r e d i c a t e s y m b o l s . Predicates can refer to more than one variable, thus expressing not only properties but also relations between many objects. For exainple, if the variables x and y take values from tim set of integers and if we introduce the predicate I, "greater", we can express one of the fimdamental relations of the integers:
I(x, y): grr
y)
which is i n t e r p r e t e d as "x is greater than y". If, in the above expression, we replace x by 5 an(t y by 3, we obviously have a particular version of I: I(5, 3 ) : greater(5, 3) which holds true for the integers.
The Quantifiers The introduction of variables results in the change of the validity of a fornmla. Consider for example the formula
Q(x, y ) : flight_XA(x, y) which is i n t e r p r e t e d
as
X Airlines flight connecting the cities x and y The
validity of
this formula is only
partial, since
there may not be for instance,
an X Airlines flight from New York to New Delhi. Conversely, the formula P(x): has a
universal validity, it
H u m a n ( x ) -+ Mortal(x)
holds true for every variable x.
96
Predicate Logic
In Predicate Logic, PrL for short, the general or partial validity is denoted by two special sy~nbols, the quantifiers; we thus use a u n i v e r s a l q u a n t i f i e r and an e x i s t e n t i a l one, denoted respectively by V and 3. Hence the initial fornmla P becomes: P(x):
(V x) (Human(x) --+ Mortal(x))
and Q becomes Q(x, y): (3(x, y)) fiight_XA(x, y)
In the following sections we will formally introduce tile language of Predicate Logic.
2.2 The Language of Predicate Logic We will now give the formal description of a language of PrL, [Chur56, Curt63, Dela87, Hami78, Klee52, Mend64, Meta85, Snml68]. D e f i n i t i o n 2.2.1: A PrL language consists of tile folh)wing fundamental symbols: (I) Logical symbols: (i)
(II)
Variables
x, y, z , . . . ,
(ii)
Logical connectives
(iii)
Comma, parentheses
(iv)
Quantifiers
xo, Y0, Z o , . . . , x i , . . .
A, V, --1, -+, ++ , ( )
V, 3
Specific synfl)ols: (i)
Predicate symbols
P, Q, R, . . . ,
Po, Q0, R 0 , . . . , /)1,...
(ii)
Symbols of constants
a, b,...ao, bo,... , a l , . . . , a2,...
(iii)
Symbols of functions
f, g, f0, go, f l , . . .
m 2.2.1
The number of different variat)lcs occurring in a predicate symbol is tile d e g r e e or a r i t y of tile predicate. For exanq)le, Q(x, y, z) is a predicate of degree 3 or a 3-ary predicate.
The Language of Predicate Logic
97
Each quantifier is d u a l to the other one: V is equivalent to the sequence of syinbols --3-- and 3 is equivalent to --V--. For the formula ( V x ) Q ( x ) we have for instance:
(v
(3x)
Each language of PrL, i.e., each set of specific symbols contains all logical symbols. Thus, in order to determine a language, a definition of its specific symbols suffices. Example
2.2.2:
= and
s
--
(=, 5, + , . , 0, 1) is a language for arithInetic.
_< are 2-ary predicate symbols: = (x, y) reads " x -
+ and
y", and < (x, y) denotes "x < y".
9 are 3-ary predicates: +(x, y, z) reads "x + y - z", and
9 (x, y, z) rea(ts "x 9 y - z".
0 and 1 are symbols of constants.
m 2.2.2
D e f i n i t i o n 2.2.3: A t e r m is inductively defined by"
(i)
A c o n s t a n t is a term.
(ii)
A v a r i a b l e is a term.
(iii)
If f is a n-ary f u n c t i o n and t l , . . . ,tn are terms then f ( t l , . . .
,t~,) is a
term.
II 2.2.3
D e f i n i t i o n 2.2.4: An a t o m i c f o r m u l a or a t o m is every sequence of symbols
P(tl,...,tn) i-
where P is an n-ary predicate symbol and ti is a term, for every
1,2,...,n.
m 2.2.4
D e f i n i t i o n 2.2.5: A f o r m u l a is inductively defined by: (i) (ii)
Every a t o m is a formula. If a l , a 2 are formulae then (O"1 A 02) , and (al ++ ae)
(0"1 V 02), (01-+ 02), (mO"l)
are also formulae.
(iii)
If v is a variable and ~ is a formula then ((3v)g)), ((V v)g)) are also formulae.
(iv)
Only the sequences of symbols formed according to (i), (ii) and (iii) are formulae.
II 2.2.5
98
Predicate Logic
Example
2.2.6: The following expressions are formulae:
(i) q01" (Vy)(Sx) [P(x, f(y))V Q(x)] (ii) ~2" (gx) (?y) [P(x) V Q(x, y) --+ ~(R(x))]. Remark
II 2.2.6
2.2.7: We observe t h a t the definition of the fornmlae of tile langlmge
allows trivial uses of quantifiers such as:
(3x) [v - a] which is equivalent to the formula: y-3 Tile trivial uses are formally accepted, however they are usually utilised only in technical proofs.
II 2.2.7
E x a m p l e 2.2.8: Here are several formulae of tile language s
which was define(t
in E x a m p l e 2.2.2. (1)
(Vx) (x - x)
= is reflexive
(2)
(V x ) ( g y ) ( x -
(3)
(V x)(V y)(V v ) [ ( x - y A y - v) -+ x - v]
-
(4)
(g z ) ( z _< z)
_< is reflexive
y --+ y - x)
= is symmetric is transitive
II 2.2.8 We will continue with some definitions that are necessary for tile complete description of the
PrL
context and Logic P r o g r a m m i n g .
Definition 2.2.9: (i)
A subsequence t l of symbols of a term t, such that t l is a term, is a subterm
(ii)
of t.
A subsequence W1 of symbols of a fornmla ~o, such that ~1 is a formula, is a s u b f o r m u l a of ~.
m 2.2.9
The Language of Predicate Logic Example (i) (ii)
If
99
2.2.10:
f(x, y)
is a term, then x, y and
p(x),
f(x, y)
v Q(x, v)
are s u b t e r m s of
f(x, y).
are s u b f o r m u l a e of the formula ~2 m 2.2.10
of E x a m p l e 2.2.6.
Remark
2.2.11:
In the P L chapter, we only e x a m i n e d the propositions according
to their c o n s t r u c t i o n based on simple, a t o m i c p r o p o s i t i o n s and logical connectives. We thus a s s u m e d t h a t atomic propositions did not require further analysis. Predicate Logic however deals with a more general concept, the a t o m i c formlllae. W h a t we assume here is t h a t the atomic formulae consist of predicates and t h a t every n-ary p r e d i c a t e
P(tl,...
,tn) expresses a relation b e t w e e n the t e r m s t l , . . . t n . II 2.2.11
Definition 2.2.12: (i)
B o u n d and Free O c c u r r e n c e s of Variables:
An occurrence of a variable v in a formula ~o is said to be b o u n d
if there
is a s u b f o r m u l a r of ~ which contains this occurrence and begins with
(My) or (3v). (ii)
An occurrence of a variable v in a formula is said to be f r e e if it is not bound,
Definition 2.2.13:
m 2.2.12
Free and B o u n d Variables:
A variable v occurring in a formula q) is said to be f r e e if it has at least one frcc occurrence in ~o. v is said to be b o u n d
Example
2.2.14:
if it is not free.
m 2.2.13
In E x a m p l e 2.2.6, the variable x has a free occurrence in the
subformula
: (3v) (P(x) v Q(x, v) of ~02 b u t it is b o u n d in the formula 9~2 itself. Hence x is free for ~ot (since it has at least one free occurrence), but it is b o u n d for ~2.
Definition 2.2.15:
A t e r m with no variables is called a g r o u n d
m 2.2.14
term. m 2.2.15
1O0 Example
Predicate Logic 2.2.16:
If a, b are constants and if f is a fllnction symbol, then a, b,
f ( a , b), f ( f ( a ) , b), . . . are ground terms,
Definition
2.2.17:
A sentence
m 2.2.16
or c l o s e d f o r m u l a
is a formula with no free
variables,
m 2.2.17
According to the previous definition, in order to form a closed formula from a given one, we have to bind all its free variables with quantifiers. Example
2.2.18:
From the formula ~(x, y ) "
(x + y - x 9 y)
we can form the
closed formula
~(~, , j )
(v x ) ( 3 y ) ( x + y - x 9 v)
m 2.2.18
A n o t h e r way to form propositions is to s u b s t i t u t e free occurrences of wtriat)h'~s by constants. In general, we have for tile s u b s t i t u t i o n [Fitt90]" Definition
2.2.19:
A substitution 0 --
{xl/tl,
s e t , or simply s u b s t i t u t i o n , x2/t2,
...,
is a set:
Xn/tn}
where x i and ti, 1 <_ i < n, are correspondingly variables and terms such that if x i - x j , then ti - t j , i < j <_ n.
If ~o is an expression (atom, term of formula) then ~0 denotes the expression resulting from tile substitution of occurrences of x 1 , . . . , x n
in ~o by tile corre-
sponding terms t 1 , . . . , tn. Tile e m p t y
substitution
is denoted by E,
in other words E -
{ }. m 2.2.19
Example
2.2.20:
If we use the substitution 0 - { x / 2 ,
u
y/2}
on the formula:
~(x, y)
of the previous example, we can forin the formula: KO
.
(2+2-2,2)
m 2.2.20
C
~
c
~
oo
~ ,
9
~ o
~ .
9
~
~
~'~
P
II
o ~.
i,a
i,~
9
J"'5 ~...., 9
,,o
~ ~
~
0
~-'
~
,'~
~-
~
9 .
~
,~.
0 ~-i
=
9
~...... 9
~ ,~
F
<
L~
c
~~
9
II
.~
0
~
o
o
o
II
'-"
~
-.
~
0
0.
0
0
0
~"
~-
i,o
l,o
I
J
>
0
0
,,'D
%
9
~
~-
~<
~
<
~
..
-.
F'
~
9
, ~ .
9
~
.o
~
0
o
0
o
~-, ,'~
~
q
I
a~
--
~
~
~
,~
~
::r"
~
o
~~
0
<
_~.
N
m
i,~
I
II
II
II
9
ca
II
ca
r
t,a
II
II
II
r
r
II
9
j...,..,.
,"-'
d~
i-i,
c2
i,i,
hi,.
9
C~
Axiomatic Foundation of Predicate Logic D e f i n i t i o n 2.2.25:
103
Let ~ be a formula with no quantifiers and let 0 be a sub-
stitution. Then ~0 is the fornmla resulting from the s u b s t i t u t i o n of every term t occurring in ~ by tO. Correspondingly, if S - { C 1 , . . . , Ca} is a set of PrL forinulae w i t h o u t quantitiers, then SO - { C 1 0 , . . . ,CkO} results from the substitution of Ci, 1 _< i <_ k, by the formulae CiO. Definition
m 2.2.25 Let $1 and $2 be two sets of formulae with no quantifiers.
2.2.26:
$1 and $2 are called v a r i a n t s if there are two substitutions 0 and r such that:
$1 -
Example
-
2.2.27:$1
$20
and
$2 -
P ( f ( x , y), h(z), b) and $2 -
Slg,
m 2.2.26
P(f(y,x),
g(u), b), where
b is a constant, are variants. Indeed, if:
o S10
$2r
Definition
-
{x/z, v/~, z / ~ } P(f(x,y),
~nd
h(z), b)O -
P ( f (x, y), h(u), b)g, -
2.2.28:
A renaming
r
P(f(y,x),
-
{ x / v , v/x, u / z } h(u), b) -
$2
P ( f (x, v), h(z), b) -
$1
substitution
then
9
2.2.27
is a s u b s t i t u t i o n of the form:
{v~/u~,..., vn/~,~} where vi an(t ui, 1 _< i _< n, are only variables.
m 2.2.28
2.3 Axiomatic Foundation of Predicate Logic In section 1.7 of the first chapter, we axiomatized Propositional Logic by mcans of an axiomatic system consisting of three axioms and a rule. P r e d i c a t e Logic can be similarly axiomatized [Dela87, Hami78, Klee52, Mend64, Rmsi74, RaSi70]. Let us first give an auxiliary definition" Definition
2.3.1:
A variable
x is f r e e for t h e t e r m
t in t h e f o r m u l a
a,
formally free(x, t , a ) , if none of the free variables of t becomes bound after the substitution of x by t in all free occurrences of x in a.
m 2.3.1
104
Predicate Logic
Example
2.3.2: Assume a : (V y) P ( x , y).
Then x is not flee for the term y
in a, since, after the substitution x/y in tile free occurrences of x, the variable y of the tern: y is bound. Conversely, x is free for the term z in a, where z is a different variable from y; since, after the substitution x/z in a, the variables of z, namely z, are not bound. Furthermore, y is free for y in a! (a does not contain free occurrences of y). 9
D e f i n i t i o n 2.3.3:
For all formulae ~o, r , a of PrL, tile axioms of
(2)
(~ -+ (~- -~ ~))
+
(3)
(~
(~ + ~)
(4)
If flee(x, t, ~o), then the fornmla
-+ ~ )
+
PrL
2.3.2
are:
((~ ~ ~-) -+ (~ -~ ~))
(Vx)~
~
r
is an axioln. (5)
If x is not free in the formula ~o, then the formula
(v x ) ( ~ -+ ~) + (~ -+ (v x)~)) is an axioIn. ,Just as in PL, the symbol i- denotes formulae derivation in tile
PrL
axiomati(:
system. This axiom system contains two rules: (1)
M o d u s P o n e n s : qa,
(2)
G e n e r a l i z a t i o n : ~o t- (Vx)~o
Remark
(1)
~o-+rt-r I
2.3.3
2.3.4:
According to the rule of generalization, if qo is a formula derived from tile axion~s and the rules of PrL, then (V x)qo is also derived in the axiomatic system.
Axiomatic Foundation of Predicate Logic
105
Assume for instance that the following formula is derived: "Human(x) +Mortal(x)" Then the fornmla: "(g x) ( H u m a n ( x ) + M o r t a l ( x ) ) " is also derived. In other words, we can always obtain tile validity of a generalized formula (Vx) ~o from tile validity of tile formula ~o. An erroneous application of the rule of generalization is often the cause of many mistakes in common discussions. For examt)le, we often claim that: "all politicians are swindlers" because we know that politicians a and b are swindlers. However, this claim is not logically valid: in order to generalize, t h a t is to characterise all the politicians and not only a and b, we must be certain that tile following formula is derived in our axiomatic system: "politician(x) + swindler(x)" T h a t is (hopefully!) not the case.
(2)
Comparing tile
PL
and the
ioms and the rule of
PL
PrL
axiomatic systems, we note t h a t tile ax-
are contained in tile axioms and tile rules of
PrL.
However Propositional Logic deals with propositions whereas Predicate Logic refers to a more compound concept, tile
(3)
PrL
formulae.
The quantifier =t is not included in the axiomatic system since we have already defined 3 as ~V~ in section 2.1.
(4)
By (2) and Corollary 1.12.2 we conclude that all the tautologies of derived in
PrL
by considering
PrL
formulae instead of
PL
of
PrL
PL we have
which is derivable in the axiomatic system
defined in Definition 2.3.3. We can thus apply all tile tautologies of fornmlae of
PrL
PL to
(completeness theorem 1.12.1) and have formulae of
which are derivable in our
PrL
are
propositions. For
example, for tile proposition A +-+ - ~ A which is derivable in the forinula ~o +-+ ~ o
PL
axiomatic system.
9
PrL
2.3.4
106
Predicate Logic
Tile theorem of substitution of equivalences is valid in
PL
as well as in
PrL.
Its proof is analogous to tile proof of tile corresponding theorem of PL. Theorem
2.3.5:
Theorem of Substitution of Equivalences for
PrL:
I f the formula A1 is derived from the formula A, after the substitution of the formula B by the formula B1 in none, one or more than one occurrences of B in A, if also {x 1 , . . . , xn } are the free variables of B and B1 which are also bound variables o f A, and if
(vx,)
.
.
.
( v ~ n ) ( B e B,)
then ~- A ~ A1. The formulae of
II 2.3.5 PrL
which are derived in the axiomatic system of
only "legitimate" formulae we carl work with within the
PrL
PrL
are the
context. The following
theorem provides us with a list of the formulae which are most often used. These formulae express associativity and distributivity of quantifiers on logical connectives. As shown in this Theorem, these properties of tile quantifiers are not fully valid. Theorem
2.3.6;
are derived in
I f 9~ and ~r are formulae of PrL, then the following formulae
PrL:
(v ~ ) ( ~ ~ o-) -~ ((v :,:)~ -~ (v x)o-) ((v:,:) ~ ~ (V~)o-) ~ ((3x) ~ ~ (3x)o-) ~ (3~) (,z e o-) ~
(3:,:)(~ -~ o-) (3~)(~ ~ o-)
((v ~) ,,,:, ~ (3:,,:) o-)
((v x) ,,,:,v (v ~) ~) ~
(v x) (~ v o-)
(v ~) (,z v o-) ~
((3~) ~ v (v ~) o-)
(3~) (~ v o-) e
((~:,:) ,z v (3~)o-)
(3:,:) (~ A o-) ~
((3~) ~ A (3x)o-)
(v ~) (,,,:,A o-) ~
((v ~) o- A (3:,:,) o-)
Axiomatic Foundation of Predicate Logic
(v ~) (~ A ~) ~
((v x) ~ A (v ~) ~)
(v~) (vy)~ ~
(vy)(vx)
( ~ ) (3y)~ ~ (Vx)~
~
(3x) ~
~
107
(3y)(3~) i f t h e r e is n o free o c c u r r e n c e o f x in
~
i f t h e r e is no free o c c u r r e n c e o f x in ~
II 2.3.6
Some of tile errors which often occur in formal proofs in all sciences are caused by an erroneous use of distributivity of quantifiers on logical connectives.
For
example, the formulae:
( v ~ ) ~ v ( v ~ ) o -+ ( v ~ ) ( ~ v ~ ) and
taken from the above list are derivable in PrL. But the formulae:
((v ~) ~ v (v ~) ~) e
(v ~) (~ v ~)
and
( ~ ) (~ A ~) ~
((3~) ~ A ( ~ ) ~ )
which express full distributivity of the quantifiers V and 3 on V and A respectively, are not valid: The formulae:
(v ~) (~ v ~) -~ ((v x) ~ v (v x)~) and
are N O T derivable in the axiomatic system of P r L and are NOT valid. For example, the formula: (V x ) [ ( x = 2x) v (x r 2x)] is true (see Definition 2.4.2).
108
Predicate Logic
However:
[(v
- 2 )1 v [(v
# 2 )1
is NOT a true formula. If it were true, then at least one of the formulae:
(v
(V x) (x - 2x), (Definition 2.5.5)
would have to be true.
r
T h a t is not the case: if x = 1,
x = 2x is not true and if x = 0, x # 2x is not true. We therefore need to be very careful when using cornmutativity and distributivity of quantifiers, for it is very easy to be misled to erroneous conclusions.
Remark
2.3.7:
If wc were to extend the language of PrL with a particular logical
symbol of arity 2, the equality symbol " - " , there would be properties of "=" which could not be expressed by means of the axiom system of Definition 2.3.3; this axiomatization expresses general properties of a predicate, and is not able to describe properties of particular predicates such ms " - " .
For example, equality
has to be a reflexive, symmetric and transitive relation. In order to express those properties axiomatically, we need to extend the axiom system of Definition 2.3.3 by two axioms [Dela87, Hami78, Mend64, SchwT1]" (6)
For all the terms x, the formula: X--X
is an axiom.
(7) If A1 is the formula derived from the formula A by the substitution of none, some, or all of the occurrences of the term x by the term y, then the formula: (x-y)
--+ ( A ~ A 1 )
is an axiom. We have thus axiomatized the reflexivity of equality and a rule of substitution of equal terms. The symmetric and transitive properties of equality are derivable by the axioms (1) to (7) and the rules of generalization and Modus Ponens. m 2.3.7
Axiomatic Foun(tation of Predicate Logic
109
Tile proof of Theorem 2.3.6 as well as theorems of soundness and completeness for the axiomatic system of Definition 2.3.3, with or without equality, are beyond the context of this book.
The reader can find them, for example, in [Klee52,
Mend64, RaSi70].
Definition (i)
2.3.8:
If S is a set of formulae, possibly empty, and if A is a formula of PrL, a p r o o f of A f r o m S, denoted by S i- A, is a finite sequence of formulae B 1 , . . . , Bk of erL, where every Bi, 1 < i <_ k, is either an axiom, or belongs to S, or follows from certain Bj, Be, 1 < j, g <_ i, using the rule of Modus Ponens or tile rule of generalization.
(ii) (iii)
A is p r o v a b l e f r o m S if there exists a proof of A from S. A is p r o v a b l e if there exists a proof of A from the empty set.
Ill PrL, tile theorem of dedu(:tion is of great interest.
II 2.3.8
The application of tile
corresponding PL theorem, that is Theorem 1.8.7, to PrL formulae may lead to unext)ected results: Let us assunm that we know t h a t there are rich pe.()ple: rich(x) There, by tile rule of generalization, we have: rich(x) F- (V x) rich(x) And by tile "corresponding" PrL deduction theorem we conclude: rich(x) + (V x)rich(x) In other words, the existence of rich people implies that all people are rich which, of course, does not correspond to reality!
Therefore, in order to
produce correct conclusions, the deduction theorem must include limitations on the use of the rule of generalization.
110
Predicate Logic
Theorem
2.3.9:
Deduction Theorem-
I f S is a set of PrL formulae, A, B PrL formulae such that S U { A } F B, and if the rule of generalization has not been used in the derivation of B from S U { A} on a free variable occurring in A, then S F A ~ B.
The proof of the sponding
PL
II 2.3.9
deduction ttmorem is similar to tile proof of tile corre-
PrL
theorem.
By the rule of Modus Ponens, the converse obviously holds. Then" SFA~B
~
Su{A}FB
2.4 Notation in Logic Programming In section 1.9, we defined several basic concepts of Logic Programming in We will now extended those definitions to Definition
PrL
PL.
[ChLe73, Dela87].
2.4.1:
(i)
A l i t e r a l is every atom (Definition 2.2.4) or its negation.
(ii)
A sequence of symbols of the form:
(v:,:) (v~2)... (v~k) (c, v c2 v . . . v c,,) where C i , i -
1 , . . . ,n are literals and x l , . . . ,xk are all tile variables
occurring in 6'/, 1 _< i _< n, is called a c l a u s e .
I f n - 0, we have the
empty clause, which is denoted by Wl.
II 2.4.1
The parentheses of the quantifiers will be omitted wherever the position of the variables and the quantifiers is explicit. A clause can equivalently be in one of the following forms.
(a)
Vx~...Vxk
(At v . . . V A,,, v --,B1 V . . . v --,B't)
(b)
Vxl...Vxk
( A 1 V ... V Am e-- B 1 A . . .
(c)
Vxl...Vxk
( A 1 , . . . , A m +--B1,... ,Be)
A Be)
Notation in Logic Programming (d)
{C1,C2,...,C,~},
111
set-theoretical form, where for all 1 <_ i _< n,
Ci i s A j , l < j <_ rn, o r C i i s ~ B j , l <_j <_ (e)
A1,
. . . , A,,~
e-- B1,
. . . ,Be
Each clause is thus a sentence, Definition 2.2.17. The reverse does not hold true, due to tile possible existence of tile quantifier 3. However Logic P r o g r a m m i n g deals with all
PrL
sentences. We will see how to deal with this problem in section 2.7,
when we examine tile Skolem Forms. E x a m p l e 2.4.2: The following sequences of symbols arc clauses: (i)
V x V y V z ( P ( x ) v---,Q(x,y) v R ( x , y , z ) )
(ii)
VxVy(---,P(f(x,y),a) V Q(x,y))
Definition
2.4.3:
m 2.4.2
A sentence resulting from the subtraction of the quantifiers
from a clause ~ and the substit~ltion of all variables by constants, is a g r o u n d i n s t a n c e of ~o.
m 2.4.3
For example, tile sentence
P(a) V Q(b) v ~R(a, b) is a ground instance of tile clause v9v
D e f i n i t i o n 2.4.4:
v Q(y) v
y))
A H o r n c l a u s e is a clause of the form" Vzl . . . V x k (A +- B 1 , . . . ,Be)
where A, B 1 , . . . ,Be are atoms and t~ > 1. The atoms Bi, i assumptions of tile Horn clause and A is the conclusion, D e f i n i t i o n 2.4.5:
1 , . . . ,t~, are tile m 2.4.4
A goal is a Horn clause with no conclusion, that is a clause of
the form V z ~ . . . V x k (+- B 1 , . . . ,B~) The atoms Bi are the s u b g o a l s of the goal.
m 2.4.5
112
Predicate Logic
The intuitive, interpretation of a goal becoInes clear if we rewrite it in the forreal PrL form, and if we use tile duality of V and 3 as well as Dc Morgan (scc Remark 2.3.4 (3)): Vx~
. . .Vzk
(~B~ v
. . . v
~Be) ~
Vx~
. . .Vxk
~(B~
A . . . /x B e )
-7 ( 3 x , . . . 3~k) (B~ A . . . Be) In other words, there are no x x , . . . ,xk such that all the assumptions B 1 , . . . , Be are true. D e f i n i t i o n 2.4.6:
A f a c t is a Horn clause with no assumptions, that is a clause
of the form: Vxl...Vxk (A+-)
Remark
2.4.7:
m 2.4.6
In the following sections, we will especially examine sentences
of PrL, that is formulae with no free variables, Definitions 2.2.12 and 2.2.16. There are two reasons which urge us to deal with sentences: (1)
Tile clauses, Definitions 2.4.1, 2.4.4, 2.4.5, and 2.4.6, which are used in Logic Programming are sentences of PrL in which all variables arc bound by universal quantifiers.
(2)
For every formula ~a of
{~1,...
PrL
which contains exactly the flee variables
, ~k }, we have:
(where ( 0 ) by the rule of generalization and ( ~ ) by axiom (4) and tile rule of Modus Ponens). We thus examine tile sentence (V Xl)...
(VXk) qP
instead of examining
the formula ~o.
D e f i n i t i o n 2.4.8- A p r o g r a m is a finite set of Horn clauses.
Let us now illustrate the above concepts with an example.
II 2.4.7
m 2.4.8
Notation in Logic Programming
Example
113
2.4.9: Tile following sentences of tile English language are given: $1 :
Peter is a thief
$2 :
Mary likes food
$3 :
Mary likes wine
$4 :
Peter likes money
$5 :
Peter likes x if x likes wine
$6 :
x can steal y if x is a thief and if x likes y
By introducing tile constants "Peter", "Mary", "wine", "food", and "money", the variables x and y, and the predicates "thief' (arity 1), "likes" (arity 2), and "can_.steal" (arity 2), we form the following program: CI:
thief(Peter)
+--
C2 :
likes(Mary, food) +--
C3 :
likes(Mary, wine) +--
C4 :
likes(Peter, money) +-
C5:
likes(Peter, x) +- likes(x, wine)
C6 :
can_steal(x,y) <-- thief(x), likes(x, y)
Horn clauses C1, C2, C3 and C4 are facts. Horn clauses C5 and C6 constitute tile procedural part of the program. Assume we wish to find out what Peter might steal (if there is something which Peter can steal); we have to form the following goal: G:
<--- can_steal(Peter, y)
We will find out the answer in Example 2.10.12.
m 2.4.9
We will later discuss systematically the ways to derive conclusions from sets of clauses similar to those of the above example. Tile theory of interpretations will be developed in the following sections; we will describe methods of assigning truth values to sentences of a PrL language, that is to formulae with no free variables, by means of the interpretations.
114
Predicate Logic
2.5
Interpretations of Predicate Logic
Within the context of PL, tile finding of the truth value of a compound proposition was based on the concept of v a l u a t i o n .
By assigning t r u t h values to the
atomic formulae of a given proposition A, we determined inductively the truth value of A. In the following, we will generalize the concept of i n t e r p r e t a t i o n
on which
we will ground tile development of methods of finding the t r u t h value of a PrL sentence.
Inte~retations: an Informal Description We wish to interpret, t h a t is to assign truth values to P r L sentences. The basic elements of these sentences are terms, namely variables and constants. Therefore we first need to interpret terms by forming a set of objects which constitutes an interpretation of the terms of the sentence. We then can define the truth values of the predicates and the sentences of the language. Hence, the semantics of P r L are clearly more complex than the semantics of PL. In order to understand fully tile concepts which will now be introduced, we will give a few examples.
Example 2.5.1: Assume the language
z: = {Q, where Q is a predicate and c~ is a constant, and the sentence of this language:
s:
(v y)
y)
Every interpretation of the above language has to determine a set of objects, the elements of which have to correspond to the symbols of L: and assign t r u t h values to the sentences of the languages. We will take N, the set of natural numbers, as the set of objects. We can now define the interpretation as a structure: .A =
(N, <_, 1)
Interpretations of Predicate Logic
115
where: N
.
. m .
the set of natural numbers the relation "less than or equal to" in INl the natural number 1
Based on tile interpretation A, we assign tile following correspondences to the symbols of s the symbol Q corresponds to the relation < in N. the symbol c, corresponds to the nattlral number 1.
Tile variables of S, x and y, take values fronl N. Then tile following is the obvious meaning of S in relation to A"
S
.
"There exists a natural number x such that, h)r every natural number y, x < y holds."
S thus declares that there exists a minimal element in A and it is obviously true in A,
1 being the minimal elenmnt of A.
Let us now define a different interpretation for the same language s A'-
(N, >, 1)
where Q is tile predicate corresponding to tile relation " > " , the "greater than" relation of tile natural numbers, and where c~ corresponds to 1. Based on this interpretation, the meaning of S t)econms:
S
.
"There exists a natural number x such that, for every natural number y, x > y holds."
S states that there exists a maximal element in A, and this is obviously not true in A', since A' does not have a maximal element. In other words, we observe that we can a t t r i b u t e different interpretations for the same language, thus assigning different t r u t h values,
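Truth in an interpretation can be tested mechanically when the universe is finite. The sketch below is not from the book and does not reproduce A′ exactly (that would require reasoning about all of N); it takes a small finite universe, purely for illustration, and interprets Q first as ≤ and then as <, to show that the same sentence S receives different truth values under different interpretations.

    def holds_S(universe, Q):
        """Truth of S: (∃x)(∀y) Q(x, y) in a finite interpretation with the given universe."""
        return any(all(Q(x, y) for y in universe) for x in universe)

    U = range(1, 21)                          # a finite universe, standing in for N
    print(holds_S(U, lambda x, y: x <= y))    # True:  x = 1 satisfies (∀y) 1 <= y
    print(holds_S(U, lambda x, y: x < y))     # False: no x satisfies x < x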
m 2.5.1
116
Predicate Logic
I n t e r p r e t a t i o n s a n d Truth" F o r m a l D e s c r i p t i o n We will continue with the formal description of the interpretation and tile truth of a sentence within the D e f i n i t i o n 2.5.2:
PrL
context, [ChLe73, Dela87, Hami78, Meta85, RaSi70].
Let
/: -- {R0, R1, ... ,f0, fl, . . . , c 0 , be a
PrL
c1,...}
language, where Ri are predicate symbols, fi are symbols of functions
and ci are constant symbols, i - 0, 1, . . . . .A -
(A, c(R0), r
An i n t e r p r e t a t i o n of s
c(f0), ~ ( f l ) , . . . ,
c(c0), e ( C l ) , . . . )
consists of: (i)
a set A :/= ~, the universe of the interpretation.
(ii)
an n-ary relation e(Ri) C_ A n for every predicate symbol Ri of arity n.
(iii)
a function r
(iv)
an element e(ci) E A for every constant symbol ci.
" A m.
> A for every function f/.
e(Ri), e(fi) and e(ci) are the i n t e r p r e t a t i o n s of Ri, fi and ci respectively in A. I
2.5.2
E x a m p l e 2.5.3: Let
/2be tile
PrL
{=,_<,+,,,0,
1}
language seen in Example 2.2.2. An interpretation of this language is: A-
(N, =,_<, + , , , . . . , 0 ,
1)
where N is the set of natural numbers, " - " the equality relation in N, "<" the relation "less or equal", and "+", "," the addition and the multiplication in N. At this point, note that the symbol "+" of/2 is just a 3-ary predicate symbol, for example +(2, 3, 5), 2 + 3 - 5. In the interpretation, this symbol is i n t e r p r e t e d as r
the symbol of tile natural numbers addition. We usually denote by c((@)
the object interpreted by the symbol "@" of the language in an interpretation ,4.
Interpretations of Predicate Logic
117
We then have tile following interpretations of symbols of s
e(=)
c_
N•
e(<)
c_
N•
e(+)
g
NxN•
e(,)
g
N•215
~(0)
c
N
c(1)
e
1N
m 2.5.3
A PrL language and its interpretation thus differ greatly.
Howe.ver, we will
often, for the sake of simplicity, treat a language and its interpretation identically. For example, < will be use.d both as a predicate of the language and as an interpretation, instead of ~(_<), which is a subset of 1N • IN. As seen in Example 2.5.1, a language can have many interpretations. Example
2.5.4:
A different interpretation of the language s examined in the
previous example is:
A' -
({2n In c No}, -', *', +', 0')
(even natural nunlbers)
where
c(=)
is
=',
the equality in { 2 n l n c No }
e(*)
is
,',
the multiplication in {2n I n E No }
e(+)
is
+',
tile addition in {2n I n C No}
e(0)
is
0' ,
the identity element of + '
As mentioned before, a sentence of a language call be "true" ill one of the interpretations of the language and "false" in soIne other. For example,
(3v) (v :~)(. 9 .v - *) denoting tile existence of an identity eleInent for ,, is true in A and false in A'. m 2.5.4 We will now define inductively tile truth of a scnten(:e in an interpretation A.
~
.~ ~ ~
9
m
7~" _.
~
-"
~.
~
~
%
"~
~"
~"
9
o
~-
~6
"
---
,~"
Interpretations of Predicate Logic
119
All tile axioms of Definition 2.3.3 and R e m a r k 2.3.7 are formulae which are true in every interpretation A.
Furthermore, the rules of generalization and
Modus Ponens lead from true formulae to true formulae, for all PrL interpretations (soun(tness of the PrL axiomatic system).
Example 2.5.7:
Let s A -
-
{Q, f } be a language of arithmetic, and let
(Q+, =, qo)
be two interpretations of s
and
A' -
(R+,-,g)
where
Q+
9
tile set of positive rational nulnbers
II~
9
the set of positive real numbers
.q
"
-
e'(Q) 9 =, e(Q) 9 =,
g(.q) e(g)
tile equality in the equality in Q+
9 g defined in II~ 9 g defined in Q+
(Vx) (3y) Q(x, g(y)), namely (Vx) (:ty) (x - y2), is true .A' but not in ,4 since tile equation x - y 2 where x is given and positive,
Then the sentence S " in
always has a solution in IR+, but not always in Q+.
m 2.5.7
Example 2.5.8: For the language s - {Q, f,a, b}, we can define the following interpretation: A -
(A, child, mother, John, Mary, Napoleon)
where: A e(Q)
.
the set of all h u m a n beings
-
the relation "child", i.e., e(Q)(Xl,X2)
c(f)
=
child(xl,x2)
=
'~X 1
is the child of x2 "
9 the relation "mother", i.e.,
e(f)(x)=
r
9
the person Mary
r
9
the person J o h n
9
tile person Napoleon
mother(x)=
"the m o t h e r of x"
120
P r e d i c a t e Logic
We observe that, with A, one symbol which does not occur as an element of the language, namely c, is never the less interpreted.
We will scc in the following
Theorem 2.5.14, t h a t such an interpretation of symbols which do not belong to the language is not problenmtic. Let S : Q(b, f(b)) v ( 3 x ) ( Q ( a , x ) ) .
According to ,4, the interpretation of S is:
"John is the child of the m o t h e r of John
S :
()r
there exists a person x such t h a t Mary is a chiht of x". John is obviously the child of the m o t h e r of John. interpretation A
S is therefore true in the
(case (e) of Definition 2.5.5).
m 2.5.8
D e f i n i t i o n 2.5.9: Let /2 be a language and a a sentence. Tile interpretation A of s is a m o d e l of cr if and only i f A ~ c f ,
m 2.5.9
D e f i n i t i o n 2.5.10: If s is a language and ~4 an interpretation of s
is the t h e o r y o f t h e i n t e r p r e t a t i o n .
then tile set
m 2.5.10
In other words, the the, ory of an interpretation is the set of all the sentences which are true in this interpretation. In the previo~s example we saw that it is possible to i~terpret symbols which do not OCCllr as elements of the langlmge. We will give a definition which is usefill when dealing with slmh sitlmtions. D e f i n i t i o n 2 . 5 . 1 1 : Let us assllnm we have a lang~mge L: and an ii~tcrpretation A of s
such that the elenmnts al, a 2 , . . , of the universe of A are not interpretati(ms
of constant syiifi)ols of s (1)
We f()rI11 a new language: s
=
Z:U{Cl,C2,...}
where syIllbols cl, c 2 , . . , are new constaIlt synlbols which do not belong to s
s
is said to t)e an e l e m e n t a r y
extension of s
Interpretations of Predicate Logic (2)
121
If we have an i n t e r p r e t a t i o n A =
(A, R1, R2, . . . , P1, P2, . . . , dl, d2, . . . )
then we can form a new interpretation: A* =
(A, R1, R2, . . . ,
of the language
s
=
P1, / ) 2 , - - - , dl, d2, . . . , a l , a2, . . . )
s
where
cl, c 2 , . . . ~ s
thus
assigning a suitable i n t e r p r e t a t i o n to el, c2, . . . , and actually imposing e(Cl) = al, r
= a2, . . . .
A* is then called an e l e m e n t a r y
extension
o f A.
E x a m p l e 2.5.12: L; =
m 2.5.11 Assume
{=,<,+,,,1,0}
and
r
=
(N, = , _ < , + , , , 0 , 1 )
We can then form the language: s
=
{=,_<,+,*,0,1,
Cl, C 2 , . . . }
(N, =, <, + , , , 0 ,
1, 2, 3 , . . . )
and its interpretation A* =
Remark
2.5.13:
II 2.5.12
We have already enriched the language /2 with constants;
s can however also be enriched with new symbols of flmctions and predicates. s
the extension of s with new symbols of functions and predicates, is a non-
elementary extension o f / : .
T h e corresponding extension of A, n a m e l y A*,
non-elelnentary extension of ,4.
is a
m 2.5.13
Here is a theorem a b o u t the verifiability of a sentence within the P r L context. Theorem
2.5.14:
Let ~. be a language, ~4 an interpretation of s and s
and A*
their respective elementary extensions such that all the elements of the universe of .A* are interpretations of symbols of f_.*. Let (~ be a sentence of L~. Then ~ is true in .A if and only if it is true in .A*.
Formally
122
Predicate Logic
Proofi
By induction on the length of a.
If a is a sentence P ( c l , . . . .4* ~ P ( c l , . . . , c a )
,ca),
then
ca ( a ( C l ) , . . . , e ( c a ) ) e c ( P ) and
e ( C l ) , . . . ,~(ca) E .A*,
c (A*) k
ca (C(Cl),... , c ( c a ) ) e c ( P ) and
c(cl),...
,e(ca) E .A,
~ ( P ) C_ A * ,
since a is a sentence of L; ca
.,4 ~ P ( C l , . . . , Ck )
T h e cases a : --,~o, a : ~o V ~o', a : ~o A ~a', and
a : qa -+ ~o', are t r e a t e d similarly.
Let us a s s u m e t h a t a is the sentence (3x) ~o(x), where x is the only free variable of a. Then: ,4* ~ (3x)~o(x)
r
for a c o n s t a n t s y m b o l c o f / : * , ,4* ~ ~o(c)
ca
A D ~a(c) duc to the i n d u c t i o n a s s u m p t i o n and to the fact t h a t ~o is a sentcnce of t;.
T h e case a :
(Vx)~a(x)
is e x a m i n e d similarly.
m 2.5.14
According to T h e o r e m 2.5.14, the t r u t h of a sentence in an i n t e r p r e t a t i o n A* does not d e p e n d on the selection of the new c o n s t a n t s y m b o l s and their intcrpr(~tations.
Definition 2.5.15: consistent
A s e n t e n c e a of a language s is v e r i f i a b l e or s a t i s f i a b l e or
if and only if there is an i n t e r p r e t a t i o n A of s in which it is true,
i.e., A ~ a.
Definition 2.5.16:
m 2.5.15
A s e t o f s e n t e n c e s S is v e r i f i a b l e or s a t i s f i a b l e or c o n s i s -
t e n t if there exists an i n t e r p r e t a t i o n in which all the sentences of S arc true. In the opposite case, S is a n o n - v e r i f i a b l e
or n o n - s a t i s f i a b l e ,
or i n c o n s i s t e n t
set.
m 2.5.16
II ~
~
~"
--X..,,
~
-.X...,,
IA -" 4-
~ ~ ~
.
IA " 4-
II
~ ]1
:~
-"
.. II
~
"-2 '-
"'ql
9
t,~
'u
o oq ,,_,o
g
""0
o
g
F,-
124
P r e d i c a t e Logic
Let cr be the sentence (actually the formal definition of a (lense order):
(v ~)(v,/) [-~(~ - y) -+ (3z)[-,(z - ~) A -~(z - y) A (~ < z) A (z < v)]] Then, A V= a but B I= or.
9 2.5.18
Just as in PL, there are sentences in every language of PrL which are true in all interpretations of the language.

Definition 2.5.19: Logically True Formula: If the formula α of the language L is true in every interpretation of L, then α is logically true.  ■ 2.5.19

Example 2.5.20: Let φ be a sentence of L and ψ a formula of L. Then the sentence

S : ((∀v)(φ → ψ)) → (φ → (∀v)ψ)

is true in every interpretation of L.

Proof: Assume that S is not true in all interpretations of L. There then exists an interpretation A of L such that A ⊭ S. Then by Definition 2.5.5, we have

not [A ⊭ (∀v)(φ → ψ)  or  A ⊨ φ → (∀v)ψ]

and by De Morgan:

not [A ⊭ (∀v)(φ → ψ)]  and  not [A ⊨ φ → (∀v)ψ]

From the Law of Double Negation we take:

A ⊨ (∀v)(φ → ψ)    (1)

and

A ⊭ φ → (∀v)ψ    (2)

From (2), by Definition 2.5.5 (b), we have:

A ⊨ φ  and  A ⊭ (∀v)ψ    (3)

By (3) and Definition 2.5.5 (h) we know that there exists a constant c of L such that

A ⊭ ψ(c)

and from A ⊨ φ and Definition 2.5.5 (d) we take

A ⊨ φ ∧ ¬ψ(c),  or equivalently  A ⊨ ¬(¬φ ∨ ψ(c)),  or equivalently

A ⊨ ¬(φ → ψ(c))    (4)

By (1) and Definition 2.5.5 (h), for every constant c of L,

A ⊨ (φ → ψ)(c)    (5)

holds. However, φ is a sentence and has no free variables; hence there is no possible substitution in φ. Then (5) takes the form

A ⊨ φ → ψ(c)  for all c of L

and that is a contradiction with (4). S is hence true in all the interpretations of L.  ■ 2.5.20

Definition 2.5.21: Let L be a PrL language and S a set of PrL sentences. The sentence α is a consequence of S, formally S ⊨ α, if and only if every interpretation A of L verifying all the sentences of S verifies α. This is denoted by:

α ∈ Con(S)  ⟺  (∀A)[A ⊨ S ⟹ A ⊨ α]    ■ 2.5.21
2.6 Normal Forms in Predicate Logic

In PL, we examined two equivalent forms of a proposition: the CNF (Conjunctive Normal Form) and the DNF (Disjunctive Normal Form). Sentences of analogous forms are also found in PrL. Within the context of PrL, however, there are two additional forms: the Prenex Normal Form (PNF) and the Skolem Normal Form (SNF), depending on the quantifiers occurring in the sentence [Chur56, ChLe73, Hami78, Klee52, Thay88]. By reducing two propositions into one of the above forms, we can easily compare them and determine whether they are equivalent, whether one of them is the negation of the other, or, simply, whether there is anything significant about them. The Skolem Normal Forms have an important role in Logic Programming. We will examine these forms analytically in the following subsections.
Definition 2.6.1: The formula φ is a prenex formula, PNF for short, if φ is of the form

φ : (Q1x1)(Q2x2)...(Qnxn)α

where every Qi, i = 1, ..., n, is one of the quantifiers ∀, ∃, and α is a formula without quantifiers. (Q1x1)...(Qnxn) is called the prefix of φ, and α is called the matrix of φ.  ■ 2.6.1

Example 2.6.2: The sentences below are in a PNF:

(i) (∀x)(∀y)[P(x, y) → Q(x)]
(ii) (∀x)(∃y)[Q(x, y) ∨ P(x, y)]    ■ 2.6.2
To reduce a formula or a sentence to a PNF, besides the formulae used in PL, we also use in PrL the following formulae:

(1) (Qx)P(x) ∨ G ⟷ (Qx)[P(x) ∨ G],   where x does not occur free in G
(2) (Qx)P(x) ∧ G ⟷ (Qx)[P(x) ∧ G],   where x does not occur free in G
(3) ¬(∀x)P(x) ⟷ (∃x)(¬P(x))
(4) ¬(∃x)P(x) ⟷ (∀x)(¬P(x))
(5) (∀x)P(x) ∧ (∀x)G(x) ⟷ (∀x)(P(x) ∧ G(x))
(6) (∃x)P(x) ∨ (∃x)G(x) ⟷ (∃x)(P(x) ∨ G(x))

where (Qx) is (∀x) or (∃x). The above formulae are derivable in the PrL axiomatic system (Theorem 2.3.6). Formulae (1) and (2) declare that the scope of the quantifiers can be extended over conjunctions and disjunctions, provided that the variables of the quantifiers introduced do not occur as free variables. Formulae (3) and (4) are the obvious cases of moving a negation through a quantifier.
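As a concrete illustration of rules (3) and (4), the following short sketch (not taken from the book; the tuple representation of formulae is an assumption made here only for brevity) pushes a negation inward through the quantifiers, which is the first step when computing a PNF mechanically:

```python
# Formulas as nested tuples, e.g. ('not', ('forall', 'x', ('P', 'x'))).
def push_negations(f):
    """Apply rules (3) and (4): move a negation through a quantifier."""
    if f[0] == 'not':
        g = f[1]
        if g[0] == 'forall':              # rule (3): not (forall x)P  <->  (exists x)(not P)
            return ('exists', g[1], push_negations(('not', g[2])))
        if g[0] == 'exists':              # rule (4): not (exists x)P  <->  (forall x)(not P)
            return ('forall', g[1], push_negations(('not', g[2])))
        return f                          # negation of a quantifier-free part: leave as is
    if f[0] in ('forall', 'exists'):
        return (f[0], f[1], push_negations(f[2]))
    return f

print(push_negations(('not', ('forall', 'x', ('exists', 'y', ('P', 'x', 'y'))))))
# -> ('exists', 'x', ('forall', 'y', ('not', ('P', 'x', 'y'))))
```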
[The rest of this discussion and the worked examples of reducing sentences to a Prenex Normal Form are not recoverable from this copy.]
The Skolem Normal Form

Theorem 2.6.6: Loewenheim, Skolem:
For every sentence φ of PrL, we can form a universal sentence φ* such that:

φ verifiable  ⟺  φ* verifiable    ■ 2.6.6

We will now describe how to form φ*, φ being an arbitrary PrL sentence.

Definition 2.6.7: Let φ be a sentence of a PrL language.

Step 1: We determine the PNF of φ.

Step 2: We gradually cross out every existential quantifier (∃y), replacing all the occurrences of y by a new, so far unused, function symbol f of all the variables bound by universal quantifiers which precede (∃y). f is called a Skolem function.

The sentence φ* obtained after the application of steps 1 and 2 is a Skolem Normal Form, SNF for short, of φ.  ■ 2.6.7

Example 2.6.8: Assume the sentence φ : (∀x)(∃y)(∀z)(∃v)P(x, y, z, v), which is in a PNF.

(1) We cross out (∃y) and replace y with the Skolem function f(x). We thus obtain the sentence:

φ1 : (∀x)(∀z)(∃v)P(x, f(x), z, v)

(2) We cross out (∃v) in φ1 and replace v with the Skolem function g(x, z), since (∀x), (∀z) precede (∃v). We thus obtain

φ* : (∀x)(∀z)P(x, f(x), z, g(x, z))    ■ 2.6.8
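The Skolemization step just described is easy to carry out mechanically. The following sketch is an illustrative outline only, not the book's procedure: it assumes a prenex sentence given as a list of quantifiers plus a matrix, using the tuple representation of terms and formulae from the earlier sketch.

```python
from itertools import count

def skolemize(prefix, matrix, fresh=count()):
    """prefix: list of ('forall'|'exists', var); matrix: quantifier-free formula.
    Returns the remaining universal variables and the matrix with Skolem terms."""
    universals, sub = [], {}
    for q, v in prefix:
        if q == 'forall':
            universals.append(v)
        else:                                   # (exists v): replace v by f(universals so far)
            sub[v] = ('f%d' % next(fresh),) + tuple(universals)
    def apply(t):
        if isinstance(t, tuple):                # compound term or atom
            return (t[0],) + tuple(apply(a) for a in t[1:])
        return sub.get(t, t)
    return universals, apply(matrix)

# Example 2.6.8: (forall x)(exists y)(forall z)(exists v) P(x, y, z, v)
print(skolemize([('forall', 'x'), ('exists', 'y'), ('forall', 'z'), ('exists', 'v')],
                ('P', 'x', 'y', 'z', 'v')))
# -> (['x', 'z'], ('P', 'x', ('f0', 'x'), 'z', ('f1', 'x', 'z')))
```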
[Material preceding Definition 2.6.10 (Clausal Form) is not recoverable from this copy; the definition, which concerns a universal sentence φ whose quantifier-free matrix is a conjunction of clauses C1 ∧ ... ∧ Ck, continues below.]
and every Ci, 1 ≤ i ≤ k, is a disjunction of atoms or negations of atoms Pi1, ..., Pip of PrL. Then φ is set-theoretically represented as a set

S = {C1, ..., Ck}

or

S = {{P11, ..., P1λ}, ..., {Pk1, ..., Pkμ}}

Every Ci is a clause, S being the set of clauses.  ■ 2.6.10

Example 2.6.11: The sentence

(∀x)(∀z)[P1(x, z) ∧ (P2(x) ∨ P3(z)) ∧ P4(z, x)]

has, as its set-theoretical form, the set

S = {{P1(x, z)}, {P2(x), P3(z)}, {P4(z, x)}}    ■ 2.6.11

In the following chapters, we will present Herbrand interpretations as well as proof methods of the verifiability of sentences or sets of clauses of PrL.
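Before moving on, note that a clause set such as the one in Example 2.6.11 can be written directly as data for the illustrative sketches used later in this chapter. The representation below (literals as (sign, predicate, arguments) triples) is a choice made here for illustration, not the book's notation:

```python
# Example 2.6.11: S = {{P1(x,z)}, {P2(x), P3(z)}, {P4(z,x)}}
S = [
    frozenset({(True, 'P1', ('x', 'z'))}),
    frozenset({(True, 'P2', ('x',)), (True, 'P3', ('z',))}),
    frozenset({(True, 'P4', ('z', 'x'))}),
]
# A negative literal such as "not P(a)" would be written (False, 'P', ('a',)).
```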
2.7 Herbrand Interpretations

In this section, we will describe a special kind of interpretation, the Herbrand interpretations [ChLe73, Dela87, Klee52, Thay88], which have a catalytic role in the theoretical foundation of Logic Programming. Herbrand interpretations were the subject of J. Herbrand's thesis in 1930. There is probably no exaggeration in saying that without Herbrand's contribution, Logic Programming might still be a far-off dream.

The basic problem in Automatic Theorem Proving is the determination of a general procedure by means of which we can prove whether a PrL sentence is true or not [ChLe73]. In 1936, Turing and Church, both working on their own, proved that there is no such general procedure. Herbrand had however already solved the problem indirectly, by giving an algorithm for the construction of an interpretation refuting a given formula φ. If φ is true, there is no interpretation refuting it and the algorithm stops after a finite number of steps. The first attempts to use Herbrand's ideas in Logic Programming go back to 1960 and must be credited to Gilmore as well as Davis and Putnam; these attempts were however not truly successful. Success came in 1965 with Robinson's application of Herbrand's method, due to the introduction and use of resolution.
Description of the Herbrand Universe

Given a PrL sentence φ, we want to be able to determine whether φ is verifiable or not. We thus decide about a possible verifiability of the sentence by examining properly the corresponding set of clauses S of φ. However, the proof of the verifiability or non-verifiability of all the basic terms occurring in a clause is almost impossible. We therefore create a set, a universe, in which the terms of φ take values. This set is called the Herbrand Universe. The construction of the Herbrand Universe is carried out inductively as follows:

Construction 2.7.1: Construction of the Herbrand Universe:
Let S be the set of clauses corresponding to the sentence φ.

Step 1:

H0 = {c | c constant occurring in S},  or
H0 = {c0} if S does not contain any constant.

c0 is a new constant (new, meaning that it does not occur in S), which we introduce arbitrarily.

Step i + 1:

Hi+1 = Hi ∪ {f(a1, ..., an) | aj, 1 ≤ j ≤ n, are terms of Hi, and f a function or a constant occurring in S}

Finally, we impose:

H = ⋃i∈ℕ Hi

Then H, the set of all terms formed with the constants of H0 and the functions occurring in S, is called the Herbrand Universe for S. The Hi, i = 0, 1, 2, ..., are called the Herbrand sets of S.  ■ 2.7.1

Example 2.7.2: Let a be a constant and S the set of clauses:

S = {{P(a)}, {¬P(a), P(f(x))}}

Then, by Construction 2.7.1:

H0 = {a}
H1 = {a, f(a)}
H2 = {a, f(a), f(f(a))}
...
H = {a, f(a), f(f(a)), f(f(f(a))), ... }    ■ 2.7.2
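Construction 2.7.1 can be sketched in code as follows. This is an illustration only, not the book's notation; terms are the nested tuples used in the earlier sketches.

```python
from itertools import product

def herbrand_sets(constants, functions, k):
    """functions: dict mapping each function symbol in S to its arity.
    Returns the list [H0, H1, ..., Hk] of Construction 2.7.1."""
    h = set(constants) or {'c0'}          # introduce a new constant if S has none
    sets = [frozenset(h)]
    for _ in range(k):
        h = h | {(f,) + args for f, n in functions.items()
                             for args in product(h, repeat=n)}
        sets.append(frozenset(h))
    return sets

# Example 2.7.2: S = {{P(a)}, {not P(a), P(f(x))}}
for i, hi in enumerate(herbrand_sets({'a'}, {'f': 1}, 2)):
    print('H%d =' % i, hi)
# H0 = {a}, H1 = {a, f(a)}, H2 = {a, f(a), f(f(a))}, printed as Python sets
```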
[Further examples and remarks on Herbrand universes that followed at this point in the original are not recoverable from this copy.]
We will now give the formal description of a Herbrand interpretation.

Definition 2.7.5: Herbrand Interpretation: Let S be a set of clauses and H the Herbrand universe corresponding to S. A Herbrand interpretation for S is defined as follows:

(i) H is the universe of the interpretation.
(ii) The interpretation of every constant symbol is the constant itself.
(iii) The interpretation of every term f(t1, ..., tn), where f is a function or a constant, is f(t'1, ..., t'n), where t'1, ..., t'n are the interpretations of t1, ..., tn respectively.
(iv) The interpretation of every n-ary predicate symbol P(t1, ..., tn) is an n-ary relation P(t'1, ..., t'n) in H, where t'1, ..., t'n are the interpretations of t1, ..., tn respectively.    ■ 2.7.5

Example 2.7.6: Let L = {≤, +, ·, s, 0} be a language of Arithmetic, where s is the successor function. We have:

H0 = {0}
H1 = {0, s(0)}
H2 = {0, s(0), s(s(0))}

and finally

H = {0, s(0), s(s(0)), s(s(s(0))), ... }

The interpretations of the elements of L different from 0 are the corresponding relations in H. For example, for s and ≤ we have, according to Definition 2.5.2:

c(s) : H → H,   a ↦ s(a) ∈ H
c(≤) ⊆ H²,   c(≤) = {(0, s(0)), (s(0), s²(0)), ..., (sⁿ(0), sⁿ⁺¹(0)), ... }    ■ 2.7.6
We next give a basic theorem of the Herbrand interpretations.

Theorem 2.7.7: Let φ be a universal sentence, S being its set of clauses. Then φ is verifiable (in some interpretation) if and only if it is verifiable in a Herbrand interpretation.

Proof:
(⇐): A Herbrand interpretation is an interpretation (trivial case).
(⇒): We wish to prove here that if φ is verifiable in an interpretation A, then we can define relations in the Herbrand universe which satisfy the clauses of S. Let us therefore assume that φ is verified in an interpretation A with A as its universe. In order to denote the interpretations of the various symbols of L, i.e., of the language to which φ belongs, we use:

c(c) ∈ A, for every constant symbol c
c(f) : Aⁿ → A, for every function f of n variables
c(f(t1, ..., tn)) = c(f)(c(t1), ..., c(tn)), for every term t = f(t1, ..., tn)
c(R) ⊆ Aⁿ, for every n-ary predicate R.

For every n-ary predicate symbol R we define a relation RH in H (Definition 2.7.5, Definition 2.5.2 (ii), Definition 2.5.5) as follows:

RH(t1, ..., tn)  ⟺  (c(t1), ..., c(tn)) ∈ c(R)

We thus have a Herbrand interpretation AH. Let us impose A' = {c(t) | t ∈ H}. The structure A', with A' as its universe and with the restrictions of the relations of A in A', is obviously an interpretation of S. Furthermore, A ⊨ φ, and whatever occurs in φ occurs by definition in A'. Hence A' ⊨ φ. Then, by the definition of AH, AH ⊨ φ. Hence φ is indeed verifiable in a Herbrand interpretation.  ■ 2.7.7
By Theorem 2.7.7, if φ is not verifiable in a Herbrand interpretation, then φ is not verifiable; there is no interpretation satisfying φ. In other words:

By means of the Herbrand interpretations, we reduce the non-verifiability of a set of clauses to the non-verifiability of a set of ground instances of those clauses in a Herbrand Universe.

Since there are no variables occurring in any ground instance of the clauses, verifiability can be proved by means of PL methods such as semantic tableaux and resolution. Beth-proofs, as well as resolution, are algorithmic proofs (as opposed to the usual method for verifiability in PrL, Example 2.5.17). This result is known as the Herbrand theorem, no matter what form it may take in the classical or modern bibliography. It will be analysed in a following section dealing with proofs by means of semantic trees.

2.8 Proofs with Systematic Tableaux

In PrL, just as in PL, we can determine whether or not a sentence or a set of sentences is satisfiable. The methods which we have already examined can be used within the PrL context. Let us start with the semantic tableaux [Fitt69, Fitt90, Meta85, Smul68]. These tableaux are used in the finding of the truth value of a compound sentence of a PrL language L.

Definition 2.8.1: Proofs with Complete Systematic Tableaux: Assume we have a language L and that c0, c1, ... are its constant symbols, constituting a list of constants. (The meaning of this list will appear clearly in Construction 2.8.5.) Let α, α1, α2 be sentences of L. The semantic tableaux are given in the table referred to below.  ■ 2.8.1
II 2.8.1
The PrL semantic tableaux are extensions of the corresponding PL tableaux with additional cases for the quantifiers. With the semantic tableau

    t(∀x)φ(x)
    tφ(c)   for every c

we represent the fact that "for (∀x)φ(x) to be true, φ(c) has to be true for every constant c". [The full table of PrL tableau rules given at this point in the original is not recoverable from this copy.] Correspondingly, with the semantic tableau

    t(∃x)φ(x)
    tφ(c)   for a new c

we represent the fact that "for (∃x)φ(x) to be true, there has to be a constant c, which has not yet appeared in the tableau, such that φ(c) is true".

The PrL semantic tableaux are called complete systematic tableaux, CST for short. The construction of a complete systematic tableau of a PrL sentence is analogous to the corresponding PL construction. Let us first see some examples.
Example 2.8.2: Let us assume we wish to prove that α : (∀x)φ(x) → (∃x)φ(x) is logically true, where φ is a PrL sentence. We start the tableau with fα as origin:

    f((∀x)φ(x) → (∃x)φ(x))        node 1
    t(∀x)φ(x)                      node 2
    f(∃x)φ(x)                      node 3
    fφ(c)   for all c              node 4   (from node 3)
    tφ(c)   for all c              node 5   (from node 2)
    ⊗   contradiction between 4, 5

For the last node of the semantic tableau we used the same constant c, in order to create the contradiction. We are allowed to do just so, since the tableau of (∀x)φ(x) allows the use of every constant. Intuitively, the above Beth-proof means that (∀x)φ(x) → (∃x)φ(x) is a logically true sentence, since each attempt to prove it false has resulted in a contradiction.  ■ 2.8.2
[Example 2.8.3, a further Beth-proof, is not recoverable from this copy.]
Example 2.8.4: Let (∃x)φ(x) → (∀x)φ(x) be a sentence. This sentence is not logically true, since the existence of an x such that φ(x) holds does not imply that φ(x) holds for every x (for example, the existence of an x > 3 does not imply that for every x, x > 3 holds). But:

    f((∃x)φ(x) → (∀x)φ(x))        node 1
    t(∃x)φ(x)                      node 2
    f(∀x)φ(x)                      node 3
    tφ(c)   for new c              node 4
    fφ(c)   for new c              node 5
    ⊗

In node 5 we did not have the right to use the same constant c as in the previous node 4. We have thus "proved" that (∃x)φ(x) → (∀x)φ(x) is a logically true sentence, whereas it is obviously not.  ■ 2.8.4
Due to the tableaux 11 and 14, a systematic tableau may continue infinitely if there is no contradiction occurring in one of its branches. (In Examples 2.8.2 and 2.8.3, there was no need to write down all the constants of the tableaux 4 and 13.) This fact will surely be better understood in the following formal construction of a complete systematic tableau of a sentence φ.

Construction 2.8.5: Construction of a Complete Systematic Tableau: The construction begins with the signed formula fφ or tφ as the origin of the tableau. We then proceed inductively.

Step n: We have already formed a tableau Tn. Tn will be extended to a new tableau Tn+1, by using some of the nodes of Tn.

Step n + 1: Let X be the unused and non-atomic node which is furthest left from those nodes equidistant from the origin. If there is no such node X, the systematic tableau is complete. If there is such a node, we construct tableau Tn+1 by extending every non-contradictory branch passing through X with the concatenation (at the end of the branch) of the tableau corresponding to X. We analyse the new cases:

Case 1: X is t((∀x)φ(x)).
Let cn be the first constant symbol of the list of all the constants of the language such that cn does not occur in any branch passing through X. We then add tφ(cn) at the end of every non-contradictory branch passing through X, according to the tableau:

    t((∀x)φ(x))
    tφ(cn)

Case 2: X is f((∀x)φ(x)).
Let ck be the first constant symbol of the list which does not occur in any node of the branch passing through X. Then we add fφ(ck) at the end of every branch passing through X, according to the tableau:

    f((∀x)φ(x))
    fφ(ck)

Cases 3, 4: X is f((∃x)φ(x)) and t((∃x)φ(x)) respectively. These cases are dual to 1 and 2.

Intuitively, in cases 1 and 3 (and dually in 2 and 4) we wish to avoid repetition, and we declare φ(c) true for continually new constants, thus using up the list of constants (not in a finite period of time, of course).  ■ 2.8.5

Definition 2.8.6: A Complete Systematic Tableau, CST for short, is the union of all the tableaux Tn of the previous construction, i.e.:

T = ⋃n∈ℕ Tn    ■ 2.8.6
A CST can have an infinite number of nodes, whereas the semantic tableaux of PL are always finite.

Definition 2.8.7:
(i) A CST is contradictory if all its branches are contradictory.
(ii) A sentence α is Beth-provable (Beth-refutable) if there exists a contradictory CST with an origin fα (tα). The fact that α is Beth-provable is denoted by ⊢B α.
(iii) A sentence α is Beth-provable from a set of PrL sentences S if there exists a contradictory CST with an origin fα and a next node tP, where P is the conjunction of the sentences of S. This is denoted by S ⊢B α.  ■ 2.8.7

Example 2.8.8:

    t[(∀x)A(x,x) ∧ (∃y)(¬A(y,y) ∨ B(y,y))]        node 1
    t[(∀x)A(x,x)]                                   node 2
    t[(∃y)(¬A(y,y) ∨ B(y,y))]                       node 3
    t[¬A(c0,c0) ∨ B(c0,c0)]   for new c0            node 4

The tableau now branches on node 4:

    left branch:  t[¬A(c0,c0)], i.e. f[A(c0,c0)];  from node 2: t[A(c0,c0)]  —  ⊗
    right branch: t[B(c0,c0)];  from node 2, for all c: t[A(c0,c0)], t[A(c1,c1)], ...

In this example, the left branch is contradictory whereas the right branch continues infinitely.  ■ 2.8.8
Proofs by means of semantic trees are quite similar to Beth-proofs. Let us look at an informal description of the problem and the method.

Semantic trees: Informal description

Let φ be a sentence and S the corresponding set of clauses of φ. If φ is satisfiable, then it is true in a Herbrand interpretation (Theorem 2.7.7), and all the ground instances of the clauses of S are true in this interpretation. Hence, if φ is non-satisfiable, then every attempt to verify all the ground instances of the clauses of S, by means of a valuation of the ground atoms R(t1, t2, ..., tn), where t1, t2, ..., tn belong to the Herbrand universe, has to fail. This failure is confirmed in a finite number of steps by the construction of a finite set of non-satisfiable ground instances in a Herbrand interpretation. So, by Theorem 2.7.7, these instances are non-satisfiable in all interpretations.

Accordingly, the question is how to construct a procedure which, starting with a sentence φ and the corresponding set of clauses S,

(i) if φ is non-satisfiable, ends after a finite number of steps in a finite set of non-satisfiable ground instances;
(ii) if φ is satisfiable, does not terminate in a finite period of time, but constitutes the construction of a Herbrand interpretation satisfying φ.

In other words, by means of this procedure we want to obtain the proof or a counterexample of the non-satisfiability of the sentence. The construction of such a procedure requires the use of semantic trees [ChLe73, Dela87].

Definition 2.8.9: A tree is a structure T = {X, r}, where X is the set of the nodes of T and r is a binary relation in X such that:

(1) If x, y ∈ X and x r y, then x is called the previous node of y and y the next node of x.
(2) There is exactly one node of T which does not have a previous node. This node is called the origin of T.
(3) Each node which differs from the origin of T has exactly one previous node.
F,.
9
~
e~
o
,~,
~
,.,
.-
~ ~
~-~
"-I
9
eo
~,
~"
~
~
~
~
~
~
g
~-
9
9
9
/
/
/ ~
9
@
9
/
9
~
1---,
9
~....
.
H
~
e
~ 9
N
~
gc~
b--.o
9
1--,
.-'I
~
Definition 2.8.12: Semantic Trees:
Let S be a set of clauses, S = {C1, ..., Cn}, P1, ..., Pλ the atoms occurring in the clauses of S, and {a1, ..., am} the Herbrand universe of S. A semantic tree for S is a tree T such that:

(1) The origin of T is an arbitrary point. The nodes differing from the origin are ground instances of P1, ..., Pλ in the universe {a1, ..., am}. Each node has exactly two next nodes, i.e. Pi(ai1, ..., aik) and ¬Pi(ai1, ..., aik).
(2) Every branch of T containing exactly the ground instances Pi1(ai1, ..., aik), ..., Pip(ap1, ..., apk) represents the conjunction Pi1(ai1, ..., aik) ∧ ... ∧ Pip(ap1, ..., apk).
(3) The disjunction of all the conjunctions of the branches of T is logically true.
(4) If a node of T is Pi(ai1, ..., aik), then the next node cannot be ¬Pi(ai1, ..., aik).
(5) If during the construction of T we are at a node k which contradicts a ground instance of one of C1, ..., Cn of S, then k is a final node and the corresponding branch is said to be contradictory.  ■ 2.8.12

In practice, in order to construct the semantic tree of S, we start with P1(a11, ..., a1k):

    P1(a11, ..., a1k)        ¬P1(a11, ..., a1k)

If one of the clauses of S contains P1(a11, ..., a1k), then the branch on the right is contradictory, ¬P1(a11, ..., a1k) is a final node, and the construction continues with the left branch. If one of the clauses of S contains ¬P1(a11, ..., a1k), then the branch on the left is contradictory, P1(a11, ..., a1k) is a final node, and the construction continues with the right branch.
Let us assume that P1(a11, ..., a1k) is not a final node. We continue with the node P2(a21, ..., a2λ). (The choice of the atom with which we continue the construction is ours.)

    P1(a11, ..., a1k)                          ¬P1(a11, ..., a1k)
    P2(a21, ..., a2λ)   ¬P2(a21, ..., a2λ)
         ℓ1                   ℓ2

Branch ℓ1 is the conjunction P1(a11, ..., a1k) ∧ P2(a21, ..., a2λ). Branch ℓ2 is the conjunction P1(a11, ..., a1k) ∧ ¬P2(a21, ..., a2λ). We check whether one of the clauses of S contains ¬P1(a11, ..., a1k) or ¬P2(a21, ..., a2λ). If that is the case, ℓ1 is a contradictory branch and we continue with ℓ2. We continue the inspection and the construction. Our goal is to reach final nodes, using up all the atoms of S.
Definition 2.8.13:
(1) A set of clauses S is said to be refutable by semantic tree if there exists a semantic tree for S, the branches of which are contradictory.
(2) A branch ℓ of a semantic tree of a set S of clauses is called complete if, for every ground instance P(a1, ..., an) of each atom P, either P(a1, ..., an) or ¬P(a1, ..., an) is contained in ℓ.  ■ 2.8.13

Example 2.8.14: Assume

φ : (∀x)[P(x) ∧ (¬P(x) ∨ Q(f(x))) ∧ ¬Q(f(x))]

Then

S = { {P(x)}, {¬P(x), Q(f(x))}, {¬Q(f(x))} }
         1            2               3

Herbrand universe:   H = {a, f(a), f(f(a)), f(f(f(a))), ... }
Ground instances:    {P(a), Q(a), P(f(a)), Q(f(a)), ... }

A semantic tree for S is T1:
[The semantic tree T1 for Example 2.8.14 and Example 2.8.15 are not recoverable from this copy.]
As said before, the order in which we use the atoms in the construction of the tree has an important effect on the number of steps required to reach final nodes and contradictory branches.

In Remark 2.8.11, we saw that the semantic tableaux and the complete systematic tableaux are trees. By Koenig's lemma, Lemma 1.11.9, every finitely branching tree with infinitely many nodes has at least one infinite branch. Then, if S is not satisfiable, the construction of its semantic tree will lead, through a finite number of steps, to a finite contradictory tree; there is no Herbrand interpretation AH such that AH ⊨ S. If S is satisfiable, then the corresponding construction will result in an infinite tree, every non-contradictory branch of which will determine a Herbrand interpretation satisfying S. The above conclusion constitutes the substance of Herbrand's theorem, which we will next prove by using a method for the construction of a semantic tree corresponding to a given set S of clauses.

Theorem 2.8.16: Theorem of Herbrand:
If S is a non-satisfiable set of clauses, then S is refutable by a semantic tree.

Proof: We will describe an algorithm for the construction of a semantic tree for S. We form the Herbrand Universe for S, as well as the set {a0, a1, ...} of the ground instances of the atoms of S. Then:

Step 0: We construct the tree with the two next nodes a0 and ¬a0.

Step n: We concatenate, at the final node of every non-contradictory branch ℓ, the extension consisting of the two next nodes an and ¬an.

Let us now assume that S is not refutable by a semantic tree. Then the construction described by the above algorithm will never end. However, in that case, Koenig's lemma guarantees that the tree will contain an infinite branch κ. For every ground instance an, κ contains either an or its negation ¬an as the name of a node. We now define a Herbrand interpretation as follows: for every n-ary predicate symbol P and terms t1, t2, ..., tn, with their interpretations belonging to the Herbrand universe of S, the interpretation of P is the relation

PH(c(t1), ..., c(tn))  ⟺  P(t1, ..., tn) is the name of some node of the infinite branch.

This interpretation obviously satisfies all the clauses of S. Then S is satisfiable.  ■ 2.8.16
In fact, the theorem of Herbrand provides an algorithm examining the satisfiability of a sentence or a set of clauses S with propositional logic methods, for example tableaux or resolution. If S is not satisfiable, then there is a set of ground instances of the clauses of S which is not satisfiable. This finite set consists of PL propositions, and its non-satisfiability can be evidenced with methods which are known to us. Thus, for every set of clauses S, we start enumerating all the ground instances of the clauses of S. While this procedure is under way, we systematically check the satisfiability of every finite subset of ground instances with propositional logic methods. If S is not satisfiable, then this inspection will show that one of the finite subsets is not satisfiable. If S is satisfiable, the inspection will continue infinitely.
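The enumerate-and-test procedure just described can be sketched as follows. This is an illustrative outline only, not the book's algorithm: clauses use the (sign, predicate, arguments) representation introduced earlier, and the propositional check is a brute-force truth-table search, chosen here for brevity.

```python
from itertools import product

def subst(term, sub):
    if isinstance(term, tuple):                      # compound term (f, t1, ..., tn)
        return (term[0],) + tuple(subst(t, sub) for t in term[1:])
    return sub.get(term, term)                       # variable or constant

def ground_instances(clauses, variables, terms):
    """All ground instances of the clauses over the given Herbrand terms."""
    insts = set()
    for clause in clauses:
        for values in product(terms, repeat=len(variables)):
            sub = dict(zip(variables, values))
            insts.add(frozenset((s, p, tuple(subst(a, sub) for a in args))
                                for (s, p, args) in clause))
    return insts

def pl_satisfiable(ground_clauses):
    """Brute-force propositional check over the ground atoms."""
    atoms = sorted({(p, args) for c in ground_clauses for (_, p, args) in c}, key=str)
    for bits in product([True, False], repeat=len(atoms)):
        val = dict(zip(atoms, bits))
        if all(any(val[(p, args)] == s for (s, p, args) in c) for c in ground_clauses):
            return True
    return False

# Example 2.8.14: S = {{P(x)}, {not P(x), Q(f(x))}, {not Q(f(x))}} over H1 = {a, f(a)}
S = [frozenset({(True, 'P', ('x',))}),
     frozenset({(False, 'P', ('x',)), (True, 'Q', (('f', 'x'),))}),
     frozenset({(False, 'Q', (('f', 'x'),))})]
print(pl_satisfiable(ground_instances(S, ['x'], {'a', ('f', 'a')})))   # False
```

Already at the first Herbrand set the ground instances are propositionally unsatisfiable, which is exactly the finite witness the theorem promises for a non-satisfiable S.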
Example 2.8.17: Assume the sentence of Example 2.8.15:

φ : (∀x)(∀z)[(¬P(x) ∨ Q(f(x), x)) ∧ P(g(b)) ∧ ¬Q(x, z)]

Determine whether φ is satisfiable. The corresponding set of clauses is:

S = {{¬P(x), Q(f(x), x)}, {P(g(b))}, {¬Q(y, z)}}
            1                  2          3

The Herbrand Universe for S is:

H = {b, g(b), f(b), f(g(b)), g(f(b)), ... }

The set of ground instances of the atoms of S is

{P(b), Q(f(b), b), P(g(b)), Q(f(g(b)), g(b)), P(f(b)), ... }

We construct the systematic semantic tree for S according to the method of Theorem 2.8.16.
[The systematic semantic tree is not reproduced here; it branches successively on P(b), Q(f(b), b), P(g(b)) and Q(f(g(b)), g(b)), and each of its branches is closed by a ground instance of clause 1, 2 or 3 of S.]
This semantic tree constitutes the proof that S is not satisfiable and gives us a certain subset of ground instances of clauses of S which is not satisfiable. These ground instances are those, and only those, which were used in order to proclaim a certain branch contradictory. Concretely:

{{¬Q(f(b), b)}, {¬Q(f(g(b)), g(b))}, {P(g(b))}, {¬P(b), Q(f(b), b)}, {¬P(g(b)), Q(f(g(b)), g(b))}}

The two first instances come from clause 3 of S, the third from 2 and the two last from 1. The non-satisfiability of this finite set is indeed proved by the method of resolution in the context of Propositional Logic. Let us name:

A : Q(f(b), b)
B : Q(f(g(b)), g(b))
C : P(g(b))
D : P(b)

We can then write this finite subset of the ground instances of the clauses of S as follows:

S' = {{¬A}, {¬B}, {C}, {¬D, A}, {¬C, B}}
        1      2     3      4        5

Using resolution we have:

(1) ¬A
(2) ¬B
(3) C
(4) ¬D, A
(5) ¬C, B
(6) B        by (3) and (5)
(7) □        by (2) and (6)

We must at this point note that the algorithm lying in the construction of a systematic semantic tree does not always yield the minimal non-satisfiable set of ground instances. Thus, in the previous example,

S'1 = {{¬B}, {C}, {¬C, B}}
          1     2      3

is already a non-satisfiable subset:

(1) ¬B
(2) C
(3) ¬C, B
(4) B        by (2) and (3)
(5) □        by (1) and (4)    ■ 2.8.17
Example 2.8.18: Let us look at a satisfiable set of clauses. Assume the sentence

φ : (∀x)[(∃y)P(x, y) → (∃y)P(a, y)]

The SNF of φ is (∀x)(∀y)[¬P(x, y) ∨ P(a, f(x, y))]. (Why?) The corresponding set of clauses of φ is:

S = {{¬P(x, y), P(a, f(x, y))}}

and the corresponding Herbrand universe:

H = {a, f(a, a), f(a, f(a, a)), f(f(a, a), a), f(f(a, a), f(a, a)), ... }

The ground instances of the atoms of S are:

{P(a, a), P(a, f(a, a)), P(f(a, a), a), P(f(a, a), f(a, a)), ... }

A semantic tree for S branches successively on P(a, a), P(a, f(a, a)), P(f(a, a), a), ..., each node having the two next nodes P(...) and ¬P(...). This infinite systematic tree contains branches which are not contradictory by their construction, since the conjunction of their atoms does not go against any clause of S. For example, the branch κ1 which does not contain negations of P is non-contradictory; κ1 gives us a Herbrand interpretation of φ satisfying S.  ■ 2.8.18
A:
Prcnex Form
B:
CNF of the subsentence containing no quantifiers
C:
Skolem Normal Form
D:
Clausal Form
we can reduce ~o to a set of clauses and determine its t r u t h value with tile usual PrL methods.
154
Predicate Logic
Example
2.9.1: Let ~ be a sentence which is already in a Prenex Form (A).
~:
((-~P(x, y) A Q(x, z)) v R(x, y, z)) (V x) (3y) (3z) ((-~P(x, y) v R(x, y, z)) A (Q(x, z) v R(x, y, z))
(V x) (=ly) (3z)
We now introduce the Skolem function symbols f, g, where y -
f(x) and z = g(x).
Then r
(Vx)[(-~P(x,
f(x))
V
R(x, f(x), g(x)))A (Q(x, g(x))
V
R(x, f(x), g(x)))]
and we finally determine the clausal form (D) off
S - {{~P(x,f(x)),R(x,f(x),g(x))}, {Q(x,g(x)), R(x,f(x),g(x)))}} m 2.9.1
In the clausal form of ~, we have to deal with instantiation of variables and with Skolem functions.
The semantic trees defined in the previous section help
us deal with such problems. Using semantic trees, we can select symbols for the substitution of variables from the corresponding Herbrand universe and apply the method of resolution in order to find contradictions among the ground instances of the occurring clauses. Such a procedure is, however, time consuming. W h a t ' s more, it cannot easily be used as a structured conclusion mechanism likely to be programmed into a computer. We therefore need a more algorithmic method which can be used with a computer. The unification procedure which will now be examined [ChLe73, Dela87, Fitt90, Lloy87, Thay88] offers a method such as the one required for the finding of contradictions.
Unification: Informal Description We have already defined, Definition 2.2.19, the concept of substitution. Assume we are given the following clauses
C1 :
{P(f(x),y), Q(a,b,x)}
and
(72 :
{-,P(f (g(c)), g(d)) }
We want to apply resolution to C1 and (72 substituting x by For this we have to (1)
check whether C1 and C2 can be resolved,
and
(2)
find the suitable substitution sets allowing resolution.
g(c) and y by g(d).
Unification and Resolution in PrL
155
These controls and substitutions are achieved with the unification algorithm. Let us now see how this algorithm unifies C1 and (72.
Step 1 : We start by comparing the clause's terms on the left side and proceed towards the right side until we meet the first terms with tile same function symbol which do not agree in the variables or in tile constants. ing those terms, tile d i s a g r e e m e n t
set.
We now create a set contain-
For C1 and C2, {x,g(c)} is tile first
disagreement set.
Step 2 : We check each variable of the disagreement set to see if it occurs in some other term of that same set.
Step 3 : If the previous check is positive, then the clauses do not unify and the algorithm en(ts with a failure. If it is negative, we proceed with the substitution of the variable by the other term of the disagreement set. For C1 and C2 we apply substitution
01 = {x/.q(c)}. C1 and C2 then become: C~ -
{P(f(g(c)),y), Q(a,b,x)}
-
Step 4 : We proceed to the right by following stcps 1 - 3. The new disagreement set is
{y,g(d))}. We apply substitution 02 - {y/g(d)}. We impose: C~ -
{P(f(.q(c)),g(d)), Q ( a , b , x ) }
Resolution carl obviously be applied to C 2 and C~. Finally the algorithm will finish by giving a set of substitutions by means of which C1 and C2 are unified and then resolved by the
PL
resolution rule. This set
of substitutions is called a g e n e r a l u n i f i e r , GU, of the resolved clauses. For C1 and C2, the general unifier is 0 = {x/g(c), y/g(d)}. resolved, the algorithm will finish in step 2.
If the clauses cannot be
156
Predicate Logic
Unification: F o r m a l D e s c r i p t i o n We will now continue with the formal description of the unification algorithm and the necessary definitions.
D e f i n i t i o n 2.9.2: Disagreement Set" Let S = {C1, C 2 , . . . , Cn} be a set of clauses. T h e set:
-- {ti, t j l , . . . , tj~ [ ti is the first t e r m on the left which occurs in some s u b t e r m ca of the clauses of S and terms tjl,... , tjx occur in s u b t e r m s cjl,... , cj~ of the clauses of S, so t h a t for s u b s t i t u t i o n 01 = {tz/tj,} or substitution 02 = {tj,/ti}, 1 <_ # <_ A, ckOt is identical to cj,01, or ck02 is identical to cjO~ }
DS(S)
is called the d i s a g r e e m e n t
s e t of S, DS for short,
m 2.9.2
T h e disagreement set is not uniquely defined, it depends on the order of inscription of the clauses of S. In E x a m p l e 2.9.1 we have S =
{C1,C2}
=
{{P(f(x),y),Q(a,b,x)},{-~P(f(g(c)),g(d))}}
g(c) for the s u b t e r m x of f(x) in P ( f ( x ) , y ) , in other words 01 = {x/g(c)}, then f(x)01 is identical to f(.q(c))01. The first disagreement set is thus DSl = {x, g(c)}. After the application of 01 the disagreement between x and g(c) is raised and the next disagreement located by the algorithm is given by the If we s u b s t i t u t e
set DS2 = {y,g(C)}.
D e f i n i t i o n 2.9.3: Let X = {al, a 2 . . . an} be a set of clauses or terms. A substitution 0 is called a u n i f i e r for C if a l 0 = a2 0 . . . . .
an 0.
m 2.9.3
D e f i n i t i o n 2.9.4: A unifier 0 is called a M o s t G e n e r a l U n i f i e r , MGU for short, if for every other unifier r there is a s u b s t i t u t i o n ~/such t h a t r = 0 ~/.
II 2.9.4
For a set S of terms or clauses, the Most General Unifier is uniquely determined [Lloy87]. Example
2 . 9 . 5 : Assume the set of a t o m s S =
{Q(g(x), w), Q(y, b)}, where b is
a constant symbol. A unifier of S is:
r = {y/.q(b),xlb, w/b}
Unification and Resolution in PrL
157
If r is applied to S, it creates the following g r o u n d instances:
Q(g(x), w ) r
= Q(g(b), b)
Q(y, b) r = Q(g'b), b) If we now impose 3" = {x/b} and 0 :
{ y / g ( ~ ) , w / b } we can easily prove t h a t
r F u r t h e r m o r e , 0 unifies the above atoms. 0 is thus an MGU, m o s t general unifier of S.
II 2.9.5
Example 2.9.6: Let C be the set of terms C -
{f(x,g(x)),f(h(y),g(h(y)))}.
r - { x / h ( g ( c ) ) , y / g ( c ) ) } is a unifier of C. 0 - { x / h ( y ) } is also a unifier of C. If we impose 3' - {y/c}, then r - 0 3" can easily be proved. Hence 0 is a general unifier of C.
II 2.9.6
We can now move to the formal description of the unification algorithm. Algorithm
2.9.7
Unification algorithm:
Let T - {P1, P 2 , . . 9 , Pn } be a set of a t o m i c formulae.
Step O" 00 - E (identical s u b s t i t u t i o n )
Step k" We already have s u b s t i t u t i o n s 00, 0 1 , . . . , Ok
Step k-t-1 9 (recursive step) 9 (i)
If PlOlO2...Ok
-- P20102...Ok
r i t h m finishes by providing 0 (ii)
If PiOlO2...Ok (a)
r
0 1 0 2 . . . Ok as a general unifier.
PjOIO2...Ok for some i , j ,
then"
We form the disagreement set Ds(P101...0k,
(b)
PnOlO2...Ok then t h e a l g o -
.....
. . . , PnOi...Ok)
We carry out tile O c c u r
Check
-- D s ( T 0 1 . . . 0 k )
-- DSk
( o c ) of the variables, in o t h e r
words we check w h e t h e r every variable v c DSk occurs in some o t h e r element of DSk.
158
Predicate Logic
If tile OC is affirmative, then we stop and conclude that T is not unifiable. If there is no variable which belongs to DSk then the o c is regarded as affirmative. If there are v , t C DSk and the OC is negative then we impose 0k+l = {v/t} to eliminate the disagreement between P 1 , . . . , Pn in terms v, t.
Step k+2: We resume s t e p s k + l , k + 2 f o r k = k + l . As we will see clearly with the next theorem, if T is unifiable, the algorithm always finishes by providing the most general unifier: MGU = 0 = 0 1 0 2 . . .
On
m 2.9.7
The proof of the following theorem is beyond the scope of this book, it can be found in [Robi65, ChLe73].
T h e o r e m 2.9.8:
(J. A. Robinson)
If we apply the unification algorithm to T = {P1, P 2 , . . . , Pn},
then:
If T is unifiable then the algorithm finishes by providing the MGU of T. If T is not unifiable then the algorithm finishes by declaring that there is no unifier,
m 2.9.8
The above theorem states that if T is unifiable, then the unification algorithm finishes by determining the MGU (and not only some unifier) of T.
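The unification algorithm can be implemented quite directly. The following sketch is an illustration only, not Algorithm 2.9.7 verbatim: terms are the nested tuples used in the earlier sketches, a single-character string among u, v, w, x, y, z is treated as a variable (an assumption made only for this sketch), and the occur check is kept.

```python
def is_var(t):
    return isinstance(t, str) and len(t) == 1 and t in 'uvwxyz'

def occurs(v, t):
    return t == v or (isinstance(t, tuple) and any(occurs(v, s) for s in t[1:]))

def apply_sub(t, sub):
    if isinstance(t, tuple):
        return (t[0],) + tuple(apply_sub(s, sub) for s in t[1:])
    return apply_sub(sub[t], sub) if t in sub else t

def unify(t1, t2, sub=None):
    """Return an MGU (as a dict) of two terms, or None if they do not unify."""
    sub = {} if sub is None else sub
    t1, t2 = apply_sub(t1, sub), apply_sub(t2, sub)
    if t1 == t2:
        return sub
    if is_var(t1):
        return None if occurs(t1, t2) else {**sub, t1: t2}     # occur check
    if is_var(t2):
        return unify(t2, t1, sub)
    if isinstance(t1, tuple) and isinstance(t2, tuple) and \
       t1[0] == t2[0] and len(t1) == len(t2):
        for a, b in zip(t1[1:], t2[1:]):
            sub = unify(a, b, sub)
            if sub is None:
                return None
        return sub
    return None

# Example 2.9.6: f(x, g(x)) and f(h(y), g(h(y))) unify with MGU {x / h(y)}
print(unify(('f', 'x', ('g', 'x')), ('f', ('h', 'y'), ('g', ('h', 'y')))))
# -> {'x': ('h', 'y')}
```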
E x a m p l e 2.9.9: Assume the set of formulae: s
=
Q(z,/(y), f(y))}
Is S unifiable? If it is, determine tile MGU.
Step O:
We impose 90 = E
Step 1 :
SOo = S Ds(S00)
=
DS1
=
{a,z}
OC negative We impose 01 =
{z/a}
II
.'7-"
~
--
~
,..-,
C.~
~.,
"
~
=
'-"
II
II
u
II
~ ~
o
>.
9
'~
~
C~
0
D,-,.
C}~
~,,~0
~
~
~ II
'~
~176
~176
II ,-.,,-,
r.~
~
•
II
%
~
~
II
.~
c.~
""
160
Predicate Logic
We must note at this point that in many applications, in the pursuit of higher efficiency, the PROLOG inference mechanism based on the unification algorithm ignores Occur Check. In other words, it substitutes the first term of a given DS for the first variable x. This can surely lead to erroneous conclusions; it is therefore for the programmer to create the suitable flow and security mechanisms in the program in order to avoid such errors. We are now in position to describe the PrL resolution method in simple terms.
Resolution in PrL

The PrL resolution method is actually a combination of PrL unification and PL resolution. Thus, just as in PL (Definition 1.9.17), if S is a set of PrL clauses, then a proof by resolution from S is a finite sequence of clauses C1, ..., Cn such that for every Ci (1 ≤ i ≤ n) we have Ci ∈ S or Ci ∈ R({Cj, Ck}) (1 ≤ j, k < i), where R({Cj, Ck}) is the resolvent of Cj and Ck.

Note here that, since the variables of all the clauses are considered bound by universal quantifiers, we can rename these variables to avoid confusions in each unification. This renaming procedure is called a normalization of variables. Let us take a look at an example.
Example 2.9.11: Consider the following clauses:

C1 = {¬P(x, y), ¬P(y, z), P(x, z)}
C2 = {¬P(u, v), P(v, u)}

We wish to conclude:

C3 = {¬P(x, y), ¬P(z, y), P(x, z)}

The respective notations for C1, C2 and C3 in the context of PrL are:

(∀x)(∀y)(∀z)[P(x, y) ∧ P(y, z) → P(x, z)]    for C1
(∀u)(∀v)[P(u, v) → P(v, u)]                   for C2
(∀x)(∀y)(∀z)[P(x, y) ∧ P(z, y) → P(x, z)]    for C3

Method I. We work directly within the context of PrL:
[The detailed derivations of C3 from C1 and C2 — Method I, working directly in PrL with unification and resolution, and a second method using a tree — are not recoverable from this copy.]
The above procedure with the tree, which we also used to prove clause C3, is also used by the PROLOG language.  ■ 2.9.11
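A single PrL resolution step — unify two complementary literals with an MGU and form the PL resolvent of the instantiated clauses — can be sketched as follows. This is an illustrative outline only, building on the hypothetical `unify` and `apply_sub` functions and the clause representation of the earlier sketches; it is not the book's algorithm and it omits the normalization of variables.

```python
def resolvents(c1, c2):
    """Binary resolvents of two clauses, each a frozenset of (sign, pred, args) literals.
    Uses unify and apply_sub from the unification sketch above."""
    out = []
    for lit1 in c1:
        for lit2 in c2:
            (s1, p1, a1), (s2, p2, a2) = lit1, lit2
            if p1 == p2 and s1 != s2:                       # complementary literals
                theta = unify(('args',) + a1, ('args',) + a2)
                if theta is not None:
                    inst = lambda l: (l[0], l[1], tuple(apply_sub(t, theta) for t in l[2]))
                    out.append(frozenset(inst(l) for l in c1 if l != lit1) |
                               frozenset(inst(l) for l in c2 if l != lit2))
    return out

# Resolving C2 = {not P(u,v), P(v,u)} with the unit clause {P(a,b)} gives {P(b,a)}:
print(resolvents(frozenset({(False, 'P', ('u', 'v')), (True, 'P', ('v', 'u'))}),
                 frozenset({(True, 'P', ('a', 'b'))})))
```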
PROLOG's dynamism and, more generally, Logic Programming's effectiveness in the field of symbolic programming, are obvious. The third chapter gives an analytical presentation of PROLOG.

Remark 2.9.12: Working with resolution is actually an application and simplification of the corresponding PrL working method. Hence, if the sentence α of PrL has a proof by resolution from the set S of PrL clauses, denoted by S ⊢R α, then α is provable by S, Definition 2.3.8. Formally:

S ⊢R α  ⟹  S ⊢ α

The reverse is however also valid for every set S of clauses:

S ⊢ α  ⟹  S ⊢R α    ■ 2.9.12
2.10 Soundness and Completeness of
PrL
Proofs
We will refer here, just as in PL, to conclusions concerning tile completeness and the soundness of PrL proofs. Tile proofs are quite similar to the PL proofs, and they can be found in [ChLe73, Meta85, Smul68].
Soundness
and Completeness
of Tableaux Proofs
ka denotes that a sentence a is Beth-provable, and ~ a denotes that a is B logically true. We will give the auxiliary l e m m a t a and theorems of the soundness and completeness of Beth-proofs. L e m m a 2.10.1:
Let R be a predicate symbol of arity n of a language s
a be a sentence of s
and let
A s s u m e there is a non-contradictory branch ~ in a systematic
tableau with Ca at the origin. We form an interpretation ,4, the universe of which is any set {a l, a 2 , . . . } -
A
which is in a one-to-one correspondence with the
constant symbols of the language s ci is the element as = ~(r
The interpretation of every constant symbol
164
Predicate Logic
We define a relation r r
C_ A '~"
(ail,ai2,... ,ai,)
r
tR(cil,ci2 . . . c i , )
is a node of branch
Then: (i)
if f a is a node o f ~ then a is false in .,4.
(ii)
if ta is a node o f n then a is true in .,4.
Theorem
2.10.2:
m 2.10.1
Completeness:
I f a is a consequence o f the set o f sentences S o f PrL, then it is also Bethprovable by S: S~a
Corollary
~
Definition
2.10.4:
m 2.10.2
I f a is logically true, then it is also Beth-provable:
2.10.3:
a
of s
S~-a s
~
~B a
m 2.10.3
Let g be a b r a n c h of a tableau, and let ,A be an i n t e r p r e t a t i o n
A is said to a g r e e w i t h ~ if: (i) (ii)
taisanodeof~
~
.A. ~ a
f aisanodeof,~
~
A ~a
Lemma
2.10.5:
m 2.10.4
Let T be a complete s y s t e m a t i c tableau with f a at the origin,
s a language and .,4 the restriction o f an interpretation o f s s y m b o l s occurring in a, such that .,4 ~ ~a.
Then there is at least one branch of
T which agrees with s o m e extension o f A. Theorem
2.10.6:
to the constant m 2.10.5
Soundness:
I f sentence a is Beth-provable by the set o f sentences S o f PrL, then a is a consequence o f S:
S~a s
Corollary
2.10.7:
~
S~a
m 2.10.6
I f sentence a is Beth-provable, then it is also logically true: t--a =v ~ a B
m 2.10.7
Soundness and C o m p l e t e n e s s of PrL Proofs
Theorem
2.10.8:
165
Compactness:
A set of sentences S is satisfiable if and only if every subset of S is satisfiable. m 2.10.8
Soundness Theorem
and Completeness of Resolution Proofs 2.10.9:
Soundness:
Let S be a set of clauses and R*(S) the set of resolvents of S. If the e m p t y clause belongs to R*(S), then S is not satisfiable: E] E R* ( S) =~ S non-satisfiable L e m m a 2.10.10:
m 2.10.9
If C~ and C~ are ground instances of clauses C1 and C2, and
if C t is the resolvent of C~ and C~, then there exists a resolvent C of C1 and C2, such that C ~ is a ground instance of C. Theorem
2.10.11:
m 2.10.10
Completeness:
Let S be a set of clauses and R*(S) the set of resolvents of S.
If S is not
satisfiable, then the empty clause belongs to R*(S): S non-satisfiable
=v [:3 c R* (S) m 2.10.11
The intuitive interpretation of the above theorem is: In order to prove satisfiability or non-satisfiability of a sentence by means of resolution, we just need to prove the empty clause by the corresponding set of clauses. Thus, if a consistent set of clauses S is given and if we wish to prove a sentence ~, we resolve the set of clauses S U S t where S t corresponds to ~ .
If we end up with the empty
clause, the non-consistency is due to the assumption -~p, therefore ~ is logically true. E x a m p l e 2 . 1 0 . 1 2 : Assume the set of Horn clauses of Example 2.4.9. Give analytical answers using the method of resolution to the queries: (a)
"What can John steal?"
(b)
"Can John steal Mary?"
166
Predicate Logic
A nswer:
We reformulate the. given Horn clauses and the queries in a set-theoretical form. We thus take the following set of Horn clauses. C1:
{thief(Peter)}
(72:
{likes(Mary, food)}
(73:
{likes(Mary, wine)}
C4:
{likes(Peter, money) }
C5:
{likes(Peter, x),--likes(x, wine)}
C6:
{can_steal(x,y), -~thief(x), -,likes(x,y)}
The queries take the form of claims:
Cr:
{~can_steal(Peter, y)}
Cs:
{-~can_steal(Peter, Mary)}
Then:
(a) We resolve C7 with C6, with substitution x/Peter:

{¬thief(Peter), ¬likes(Peter, y)}

Resolving with C1 gives {¬likes(Peter, y)}, and resolving with C4, with substitution y/money, gives the empty clause □.

In other words, starting from the negation of the query, we ended up with the empty clause; hence C7 is not a true claim. What's more, the above proof gives us the value of the variable y, so answering query (a): y = money, which means Peter can steal money.

(b) We resolve C8 with C6, with substitutions x/Peter and y/Mary:

{¬thief(Peter), ¬likes(Peter, Mary)}

Resolving with C1 gives {¬likes(Peter, Mary)}, resolving with C5, with substitution x/Mary, gives {¬likes(Mary, wine)}, and resolving with C3 gives □.

C8 thus leads to the proof of the empty clause; hence Peter can steal Mary!  ■ 2.10.12
2.10.13:
If during resolution the unification algorithm finishes without
yielding a Most General Unifier, in other words if we cannot prove VI from our data, then our goal is said to fail. In the opposite case, the goal is said to s u c c e e d , Remark 1.9.9. For instance, in example 2.10.12, the goal "can_steal(Peter, Mary)" succeeds,
m 2.10.13
C o m p l e t e n e s s o f the A x i o m a t i c Proofs The proof of the following completeness theorem of the axiomatic proofs method is beyond the context of this book. It can be found, e.g., in [Klee52, Mend64, Rasi74, RaSi70]. T h e o r e m 2.10.14:
Soundness and Completeness, GSdel, 1930:
A formula qo of PrL iS derivable by the set of PrL sentences S, if and only if ~o is a consequence of S. Formally: SF-qo v=> S ~ q o
m 2.10.14
A formula ~p of PrL is derivable by the axioms of PrL if and only if cp is logically true. Formally: C o r o l l a r y 2.10.15:
F- qo ~=> ~ qo
m 2.10.15
168
Predicate Logic
2.11 Decision Methods in Logic Many m a t h e m a t i c a l problems come under a general problem scheme and can thereh)re be solved by means of a general procedure which is applied to a specific problem. For instance, we can answer the query "does the polynomial f(x) divide the polynomial g(x)?" using the division algorithm; by dividing the specific f(x) by the specific g(x). If the remainder from the division is the zero polynomial, then the answer is "yes". If the remainder differs from the zero polynomial, then the answer is "no". D e f i n i t i o n 2.11.1: A method which allows us to answer "yes" or "no" to a specific case of a general query, is called a d e c i s i o n p r o c e d u r e .
The problem of
finding such a method for a general query is called a d e c i s i o n p r o b l e m for the query.
I
2.11.1
Many decision problems in m a t h e m a t i c s cannot be solved in their general form, or have only specific solutions, which means t h a t they are solved under certain conditions.
The decision problem for a logic L is such a problem.
A logic L is
determined by its language, which consists of its logical and special symbols, and the axioms and rules by means of which we can prove and analyse well-formed sentences of L. D e f i n i t i o n 2.11.2: The decision problem for a logic L consists of finding an algorithmic method by means of which we can decide whether a well-formed sentence of L is derivable in L or not, in other words whether it is provable in L or not.
m 2.11.2
The decision problem for a
PL
PL
is solved by the use of truth tables: if we are given
proposition A, we construct the truth table of A and check whether A is a
tautology. If A is a tautology then by Corollary 1.12.2, A is derivable in PL; while if A is not a tautology, it is not derivable in Theorem
2.11.3:
The decision problem for
Tile situation is not that simple for formula ~ of a
PrL
PL.
PrL.
Hence the following theorem holds: PL
is solved.
9
2.11.3
By Corollary 2.10.15 we know that a
l a n g u a g e / : is derivable if and only if ~ is logically true, i.e., if
Decision M e t h o d s in Logic
169
it is true in all interpretations of L;. However the interpretations of a language L; are countless and we are naturally not in position to check them all. Tile following theorem has been known since 1936 [Chur36, Turi37]. Theorem
2.11.4:
The decision problem for P r L cannot be solved.
In other
words, there is no algorithmic m e t h o d which allows us to decide whether a given PrL
mR 2.11.4
formula is derivable in P r L or not.
Although tile decision problem for PrL cannot be solved in general, there are specific solutions [Klee52, Chur56]: Theorem
2.11.5:
Tile decision problem for P r L can be solved for formulae
in a Prenex Normal Form in which there is no existential quantifier preceding a universal quantifier. There is hence an algorithmic procedure which allows us to decide whether a formula o f the form:
(VXl)...(VXk) (3yl)...(:::]yA) only v
only 3
is derivable in P r L or not.
Theorem
2.11.6:
m 2.11.5
The decision problem for P r L can be solved for all formulae
consisting exclusively of predicates of degree less or equal to 1, that is predicates
II 2.11.6
with no more than one variable.
In [Chur56] there is an analytical table presenting all known specific solutions of tile
PrL
decision problem.
While efforts to find specific solutions to the
PrL
decision problem were under
way, there were attempts towards solving tile decision problem for tile satisfiability of a
PrL
formula, and thus determine an algorithmic method allowing us to
determine whether or not a
PrL
formula is satisfiable. Theorems 2.6.6 (Loewen-
heim Skolem), 2.7.7 and 2.10.6 precisely state that the decision problem for tile satisfiability of a
PrL
sentence admits specific solutions.
170
Predicate Logic
2.12
Exercises
2.12.1 Determine whether the following expressions are terms, formulae or none of these: (a)
Nick
(f)
(V x)[number(x) A x = x + x]
(b)
Mathematician(x)
(g)
= [+(~+ v),z]
(c)
number(6)
(h)
(x + y) + j2
(d)
is_a_planet (x)
(i)
the best book
(e)
(3 + 1) + 10
(j)
hates(x, y) A loves(x, z)
Solution: (a)
term
(f)
formula
(b)
formula
(g)
formula
(c)
formula
(h)
term
(d)
formula
(i)
none of these
(e)
term
(j)
formula
2.12.2 Find the free occurrences of variables in the following formulae:
(~) (v ~) p(~, y) -~ (v z) O(z, ~) (b)
Q(z) ~ -~(Vx) (V y) P(x, y, a)
(~) (v ~) P(~) A (vv) q(~, y) Solution: (a)
y,x (x also has a bound occurrence).
(b) z.
(c) x.
2.12.3
Determine tile free occurrences of variables in the following formulae. Which of these formulae are sentences?
Exercises
171
(~) (v ~)(v y)(v z)[x > y A y > z] -+ (3~)[~ > ~] (b)
(3x) (is_red(x)) V (V y) [is_blue(y) V is_yellow(x)]
(c)
x-+-x
(d)
(3y)[x+x
(e)
(3x) (=]y)[is_teacher(x, y) A teaches(x, y, z)]
=
xWx
= x+x]
Solution:
(a)
No free occurrence. This formula is a sentence.
(b)
x has a free occurrence in the second disjunctive subformula.
(c)
z o c c u r s free.
(d)
x occurs free.
(e)
z o c c u r s free.
2.12.4 Using the arithmetical symbol "<" (less than) and the
PrL
language, formulate
the following sentences: (a)
There exists a number x less than 5 and greater than 3.
(b)
For every number x, there exists a number y smaller t h a n x.
(c)
For every number x, there exists a n u m b e r y greater t h a n x.
(d)
For all numbers x and y, sums x + y and y + x are equal.
(e)
For every number x, there exists a n u m b e r y such t h a t for every z, for which, if the subtraction z - 5 is less t h a n y, then the subtraction x - 7 is less t h a n 3.
Solution:
We introduce functions - ( x , y), + ( x , y) in order to express the subtraction and the sum of x, y, and we also introduce the predicate "number(x)" to express the
"
-
u
i,-,,,, ...,,
=
~"
m
9
~.~.
oo
9
e-,-
~
~
~
~
~. ~
..~
9
o
B
~
.
~
=
IA
~o
.
II
:
~
. . . .
"T"
..
~
9
9
~ -
"9
0
d~
~
*
m
~
~-, ~,
~'~
~"
"~
o
i~
<
~
~
:
~
~
~
~
=
o
~,~
o
~-~
m"
,-.
'-'"
Or'
--,1
I
A
$
A
L~
~-~
+
~
$
0"
<
~
A
0"
W
~
A
0"
L~J
~-~
~
o o
~
9
i~
o~
bO
LU
T
~ '~ ~
J
m ~,, ~~.
=
9.
~
,'-'o
9
i,-,~
l::r'
!
oo
d~
9
N
J
<
T .~.
...,,
~
~
~
=
r
,:~
N
~,.,~
~
0
O
j <~
~
~
;~
j
m
~
~
b--"
~ ~
~
~.<
9"
II
II
>
II II
$
T
--~
T
~
T
--~
T
--~
T
a
T
~
~
LI.J
"q"
T
~
..
l::r,
r
,-1
9
v
.--,0
l::r'
9
~
""
~
~
u
"u
~176
O
N
2.
< N
N
N
~
~
~
9
N
O
~
=
N
s
"u
,~
%
~
~.
N ,~
174
Predicate Logic (b)
By axiom (4), since free(x, a, A).
(d)
F - A - + ( 3 x ) A by ( c ) , a n d
~(Vx)--+A
Then by tautology:
by (b).
(B -+ C) --+ [(A -+ B) --+ (A -+ C)] we have: F (A -~ (=tx)A) --+ [((Vx)A --+ A) --+ ((Vx)A --+ (3x)A)] A double Modus Ponens application leads to what we are seeking.
(e)
Assume formula x = y is A. Then by (a) we have:
By axiom (6), Remark 2.3.7, and Modus Ponens we have ~ (3y)(x = y). Then by the rule of generalization, ~ ( V x ) ( 3 y ) ( x = y). (f)
By axiom (7) we have: F x=y
-+ ( x = x -+ y = x)
We apply tautology: [A --+ (B -+ C)] ++ [B --+ (A --+ C)] which gives: F- x = x - +
( x = y--+ y =
x)
By axiom (6) and Modus Ponens, we have F x = y ~ y = x . Modus Ponens application leads to what we are seeking.
(g)
By axiom (7), we have: F y=x
~
(y=
z-+ x=
z)
and by (f): F x = y -+ (y = z --+ x = z) We apply tautology: [A --+ (B --+ C)] e+ [(A A B) -~ C] which gives: F (x--yAy--
z) ~
We then apply Modus Ponens three times.
x--
z.
Adouble
Exercises
175
2.12.8 Determine the set-theoretical form of the following sentences: John loves food.
(b) (c) (d) (e) (r)
Apples are food. Chicken is food. Whatever can be eaten without killing somebody is food. Bill eats and is still alive. Mary eats whatever Bill eats.
Solution: We introduce the following predicates: Loves(x, y) : x loves y
Food(x) : x is one kind of food
Eats(x, y) : x eats y
Lives(x) : x is alive
Then (V x) [Food(x) -4 Loves(John, x)] ~
(V x) [~Food(x) V Loves(John, x)]
Hence Sa = { {-,Food(x), Loves(John, x)} }.
(b)
Food(Apples).
(d)
(V x)(V y)[Eats(x, y) A Lives(x) -4 Food(y)]
Hence Sb = {{Food(Apples)}}.
++ (V x)(V y)[-,Eats(x, y) V--Lives(x) V Food(y)] Hence Sd = { {-,Eats(x, y),--Lives(x), Food(y)} }. 2.12.9 Assume dragons really exist and we capture a big one. Formulate the following sentences of everyday speech using Horn clauses. (a)
Every dragon leaving in the zoo is not happy.
(b)
Every animal that meets polite people is happy.
(c)
People visiting the zoo are polite.
(d)
The animals living in the zoo meet the people that visit the zoo.
176
Predicate Logic
Solution: We introduce the following predicates: Dragon(x) : x is a dragon
Person(x) : x is a person
Happy(x) : x is happy
Lives(x, y) : x lives in y
Animal(x) : x is an animal
Nice(x) : x is polite
Visits(x, y) : x visits y
M e e t s ( x , y ) : x meets y
Then: (a)
+- Dragon(x), Happy(x), Lives(x, Zoo).
(b)
Happy(x)
(c)
Nice(x)+--
Person(x), Visits(x, Zoo).
(d)
Meets(x,y)
+-- Zoo(x), Person(y), Lives(x, Zoo), Visits(y, Zoo).
+- Zoo(x), Person(y), Nice(y), Meets(x,y).
2.12.10 Formulate the following sentences of everyday speech with Horn clauses: (a)
x is the mother of y, if x is a woman and a parent of y.
(b)
x is the father of y, if x is a man and a parent of y.
(c)
x is human, if his parent is a human.
(d)
x is a human, if his father is a human.
(e)
Nobody is his own parent.
2.12.11 We have two empty receptacles of 5 and 7 litres respectively. We wish to determine a sequence of actions which would result in leaving 4 litres of water in the 7 litres receptacle. Only two actions are allowed: (1)
Fill up a receptacle.
(2)
Transfer water from one receptacle to tile other until the corresponding receptacle is filled or empty.
Formulate the problem and the allowed actions with Horn clauses.
Solution: We introduce the predicate Contain(u, v): the 7-litre receptacle contains u litres and the 5-litre receptacle contains v litres. If we assume that the relations x + y = z and x ≤ y have already been defined, then the problem can be expressed with the following clauses:
C1  : Contain(0, 0) ←
C2  : ← Contain(4, y)
C3  : Contain(7, y) ← Contain(x, y)
C4  : Contain(x, 5) ← Contain(x, y)
C5  : Contain(0, y) ← Contain(x, y)
C6  : Contain(x, 0) ← Contain(x, y)
C7  : Contain(0, y) ← Contain(u, v), u + v = y, y ≤ 5
C8  : Contain(x, 0) ← Contain(u, v), u + v = x, x ≤ 7
C9  : Contain(7, y) ← Contain(u, v), u + v = w, 7 + y = w
C10 : Contain(x, 5) ← Contain(u, v), u + v = w, 5 + x = w
C1 expresses the initial situation and C2 the goal. C3 and C4 express the filling of the first and the second receptacles respectively. C5 and C6 express the emptying of the first and the second receptacles, C7 and C8 the transfer of water until the first receptacle is empty, and C9 and C10 until the second one is empty.
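The same clauses can also be run as a PROLOG state-space search. The sketch below is not the book's encoding: state/2, move/2 and solve/3 are names of ours, and member/2 is assumed to be available from the usual list library.

% state(X, Y): the 7-litre receptacle holds X litres, the 5-litre one holds Y litres.
move(state(_, Y), state(7, Y)).                        % fill the 7-litre receptacle (C3)
move(state(X, _), state(X, 5)).                        % fill the 5-litre receptacle (C4)
move(state(_, Y), state(0, Y)).                        % empty the 7-litre receptacle (C5)
move(state(X, _), state(X, 0)).                        % empty the 5-litre receptacle (C6)
move(state(X, Y), state(0, W)) :- W is X + Y, W =< 5.          % pour 7-litre into 5-litre (C7)
move(state(X, Y), state(W, 0)) :- W is X + Y, W =< 7.          % pour 5-litre into 7-litre (C8)
move(state(X, Y), state(7, W)) :- X + Y >= 7, W is X + Y - 7.  % pour until the 7-litre one is full (C9)
move(state(X, Y), state(W, 5)) :- X + Y >= 5, W is X + Y - 5.  % pour until the 5-litre one is full (C10)

% solve(State, Visited, Plan): Plan is a list of successor states reaching the goal C2.
solve(state(4, _), _, []).
solve(S, Visited, [S1 | Plan]) :-
    move(S, S1),
    \+ member(S1, Visited),
    solve(S1, [S1 | Visited], Plan).

% ?- solve(state(0, 0), [state(0, 0)], Plan).   % C1 is the initial state, C2 the goal.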
2.12.12 Let L = {a, b, P} be a PrL language and let A be an interpretation of L with a universe D = {a, b}, such that (a, a) and (b, b) belong to c(P), whereas (a, b) and (b, a) do not belong to c(P). Find out whether the following formulae are true in A.
(a) (∀x)(∃y) P(x, y)
(b) (∃x)(∀y) P(x, y)
(c) (∀x)(∃y) [P(x, y) → P(y, x)]
(d) (∀x)(∀y) P(x, y)
(e) (∃y) ¬P(a, y)
(f) (∀x) P(x, x)
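Because the interpretation is finite, such truth values can also be checked mechanically. The encoding below is only an illustrative sketch of ours, assuming a PROLOG system that provides forall/2 (most modern ones do); dom/1 and p/2 stand for the universe D and for c(P).

dom(a).  dom(b).
p(a, a). p(b, b).          % c(P) = {(a, a), (b, b)}

% (a)  (V x)(3 y) P(x, y): for every element X some Y makes p(X, Y) true.
formula_a :- forall(dom(X), (dom(Y), p(X, Y))).

% ?- formula_a.            % succeeds, in agreement with part (a) of the solution below.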
Solution: We declare the fact (a, b) ∈ c(P) ⊆ D² by allocating the value t, true, to P(a, b), and we declare (a, b) ∉ c(P) ⊆ D² by allocating the value f, false, to P(a, b). Then, for the given interpretation, we can use the following table of values of P in relation to the universe of the interpretation:

P(a, a)   P(a, b)   P(b, a)   P(b, b)
   t         f         f         t

(a) If x takes the value a or b, then by the above table we can see that there are values of y (a or b respectively) such that the corresponding P(x, y) is satisfied. Hence (∀x)(∃y)P(x, y) is true in A.

(b) False (why?).

(c) We create a table giving us the values of (c) for all the possible values of x, y:

(x, y)    (y, x)    P(x, y)   P(y, x)   P(x, y) → P(y, x)
(a, a)    (a, a)       t         t              t
(a, b)    (b, a)       f         f              t
(b, a)    (a, b)       f         f              t
(b, b)    (b, b)       t         t              t

Then (c) is true in A.

(d) When x is interpreted as a and y as b, P(x, y) is obviously not true in A.

(e) True, since ¬P(a, y) holds when y takes the value b.

2.12.13 Let σ : (∃x)P(x) → (∀x)P(x) be a sentence.
(a) Prove that σ is always true in an interpretation with a universe which is a singleton.
(b) Find an interpretation with a universe consisting of two elements in which σ is not true.
Solution:
(a) Let A = ({a}, c(P)) be any interpretation of the language L = {P} in which σ is expressed. Then by Definition 2.5.5 we have:

A ⊨ (∃x)P(x) → (∀x)P(x)
iff   A ⊭ (∃x)P(x)   or   A ⊨ (∀x)P(x)
iff   A ⊨ ¬P(a)      or   A ⊨ P(a)
iff   A ⊨ P(a) ∨ ¬P(a)

But the last disjunction is true because of the tautology A ∨ ¬A, Remark 2.3.4.

(b) Let {a, b} be the universe of the interpretation we are seeking, with a ≠ b. For σ to be false in A, we need A ⊭ (∀x)P(x) and A ⊨ (∃x)P(x). We thus construct the following table of values of P for a and b:

P(a)   P(b)
  t      f

In this interpretation σ is not true (why?).

2.12.14 Find an interpretation with a universe consisting of two elements {a, b} such that the sentence (∃x)(∃y)P(x, y) is true and the sentence (∀x)(∃y)P(x, y) is false.

2.12.15 Let L = {A, c1, c2, ..., c9} be a language, where A is a 2-ary predicate symbol and c1, ..., c9 are constant symbols, and let A = ({1, 2, ..., 9}, /) be an interpretation, where / is the divisibility relation, c(A) = / and c(ci) = i, i = 1, ..., 9. Determine which of the following sentences are true in this interpretation. The answer has to be justified.
(a) (∀y) A(c1, y)
(b) (∃x)(∀y) A(y, x)
(c) (∃x)(∀y) A(x, y)
(d) (∀x) [A(x, c5) → (x = c1) ∨ (x = c5)]
o
e-,-
~'~ >
~
IA ~
<
~
~,
<
J
~..
~.~
IA
~,
IA
~
~
IA ~
~
~,~
~
~ ~
J
~
~
~
~
II
~LLI
~ ~
<: ~
$
IA
>
IA
~
<
~ IA
~176
~
~x
O
i~
o~
~.,.
IA
II
~
~
~
~
"
o
>
II
o
II
~..
)
0
N
"
N
s
~<
o
i- I
~<
<
II
II
$
"
II
$
~
~
~
II
S
"
+
UJ
$
<:
-~-
..
ca
I,o
,~
o
r
l::r'
~
o~
0
r
N
T
i.-,~
In"
Oo
l::r"
t~
O~
+
,'o
t,O
i- I
"--0
In"
,---,
"-,0
9 ",O
"-'O
iJ~
l:r'
i- 1
~
l:r
In-'
--s -q-
i~
oo
<
r
i-,o
9
l::n
~176
o~
G~
E,."
g
OO
Solution: The first 3 axioms of E axiomatize the order, the fourth declares that the order is dense, and the fifth axiom states that the order is total. We know that the real, rational and natural numbers with the usual ordering are totally ordered and, furthermore, the order is dense for the real and rational numbers whereas it is not dense for the natural numbers (why?).

2.12.18 Let A1 = (Q, c1(P)) and A2 = (N, c2(P)) be two interpretations of the PrL language L = {P}, where c1(P) and c2(P) are the usual orders of the rational numbers Q and the natural numbers N, with θ(A1) and θ(A2) the corresponding theories (Definition 2.5.10).
(a) Determine two sentences σ1 and σ2 of L such that
A1 ⊨ σ1 ∧ σ2    and    A2 ⊭ σ1 ∨ σ2
(b) Determine which of the following are valid and justify the answer:
θ(A1) = θ(A2),    θ(A1) ⊂ θ(A2)

Solution:
(a) By Definition 2.5.5, both σ1 and σ2 have to be valid in A1, whereas in A2, σ1 and σ2 cannot be valid. We take for σ1 the sentence expressing dense orders:
σ1 : (∀x)(∀y) [¬(x = y) ∧ (x ≤ y) → (∃z) [(x ≤ z) ∧ (z ≤ y) ∧ ¬(x = z) ∧ ¬(y = z)]]
Another characteristic difference between Q and N is the existence of a least element in N, whereas Q has no least element. Let σ2 be the sentence stating that there is no least element:
σ2 : ¬(∃x)(∀y) P(x, y)
Then A1 ⊨ σ1 ∧ σ2 and A2 ⊭ σ1 ∨ σ2 (why?).
2.12.21 The following sentences are given:
F1 : Whoever saves up money earns the interest.
F2 : If there is no interest, then nobody saves up money.
We introduce the following predicates:
A(x, y) : x saves up y
T(x) : x is interest
g(x, y) : x earns y
X(x) : x is money
Formulate F1 and F2 in the language defined by the above predicates and determine the Skolem Forms of F1 and ¬F2.

2.12.22 Let (G, ∘) be a group. Describe with carefully selected predicates the properties of G, in other words the fact that G is closed for ∘, ∘ is associative and has an identity element and inverse elements. Determine the SNF of the formulae which express these properties.

Solution: We introduce the predicate P(x, y, z), which is interpreted as x ∘ y = z. Then, to express the fact that G is closed for ∘ we use the formula:
σ1 : (∀x)(∀y)(∃z) P(x, y, z)
Associativity is expressed by the formula:
σ2 : (∀x)(∀y)(∀z)(∀u)(∀v)(∀w) [P(x, y, u) ∧ P(y, z, v) ∧ P(u, z, w) → P(x, v, w)]
     ∧ (∀x)(∀y)(∀z)(∀u)(∀v)(∀w) [P(x, y, u) ∧ P(y, z, v) ∧ P(x, v, w) → P(u, z, w)]
The existence of an identity element:
σ3 : (∃y)(∀x) [P(x, y, x) ∧ P(y, x, x)]
The existence of an inverse element:
σ4 : (∃y)(∀x)(∃z) [P(x, y, x) ∧ P(y, x, x) → P(x, z, y) ∧ P(z, x, y)]
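As a small worked illustration of the requested step (for σ1 only; it is not the complete answer to the exercise), note that σ1 is already in prenex form, so its Skolem Normal Form is obtained by replacing the existentially quantified z with a new 2-ary function symbol f:

σ1 = (∀x)(∀y)(∃z) P(x, y, z)   becomes   σ1^S = (∀x)(∀y) P(x, y, f(x, y)),

with clause form {{P(x, y, f(x, y))}}.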
Solution: For S2: There are no variables occurring in S2. We thus introduce c, a constant symbol. Then H0 = {c}. However, there are no function symbols occurring in S2. Then H0 = H1 = ... = H10 = {c}.

2.12.24 Determine the Herbrand sets H0 and H1 for the set of clauses
S = {{P(f(x), a, g(f(x), b))}}
Determine all the ground instances (Definition 2.4.3) of S in H0 and H1.

Solution:
H0 = {a, b}
H1 = {a, b, f(a), f(b), g(f(a), b), g(f(b), b)}
Then the instances of S for H0 are:
S11 = {{P(f(a), a, g(f(a), b))}}
S12 = {{P(f(b), a, g(f(b), b))}}
The instances of S for H1 are:
S21 = {{P(f(a), a, g(f(a), b))}}
S22 = {{P(f(b), a, g(f(b), b))}}
S23 = {{P(f(g(f(a), b)), a, g(f(g(f(a), b)), b))}}
S24 = {{P(f(g(f(b), b)), a, g(f(g(f(b), b)), b))}}

2.12.25 Prove by means of Complete Systematic Tableaux the following sentences:
(a) [(∃x)P(x) → (∀x)Q(x)] → (∀x)[P(x) → Q(x)]
(b) ¬(∀x)P(x) ↔ (∃x)(¬P(x))
In order to determine an interpretation A0, notice, in the non-contradictory branches, the nodes tP and tQ marked with • (why?). Constants c1 and c2 constitute the universe of the interpretation. We impose c(P) = {c1} and c(Q) = {c2}. Then σ is not true in A0 = ({c1, c2}, c(P), c(Q)) (give a complete proof of this last claim according to Definition 2.5.5).

2.12.27 Determine a refutation of the following sets by means of semantic trees:
(a) S = {{P(x)}, {¬P(x), Q(x, a)}, {¬Q(y, a)}}
(b) S' = {{P(x), Q(f(x))}, {¬P(a), R(x, y)}, {¬R(a, x)}, {¬Q(f(a))}}

Solution:
(a) S = {{P(x)}, {¬P(x), Q(x, a)}, {¬Q(y, a)}}, with the clauses numbered 1, 2 and 3.
The Herbrand Universe for S is H = {a}. The corresponding semantic tree branches first on P(a) and then on Q(a, a): the branch with ¬P(a) is failed by clause 1, the branch with P(a) and ¬Q(a, a) is failed by clause 2, and the branch with P(a) and Q(a, a) is failed by clause 3. Hence every branch is closed and S is refuted.

2.12.28 Construct the semantic trees and determine the non-satisfiable ground instances corresponding to the following formulae:
(a) σ : (∃x)(∀y) P(x, y) ∧ (∀y)(∃x) ¬P(y, x)
(b) φ : (∃x)(∀y)(∀z)(∃w) [P(x, y) ∧ ¬P(y, x)]
(c) τ : (∃x)(∀y)(∀z)(∃w) [P(x, y) ∧ ¬P(z, w)]
2.12.29 Find out whether the sentence
σ : (∃x)[G(x) ∨ F(x)] → [(∃x)F(x) → (∃x)¬G(x)]
is logically true or not, and give the corresponding proof or a counterexample.

Solution: We construct a CST with origin fσ. If this tableau is contradictory, then we apply Definition 2.6.7 and Theorem 2.10.6. If the CST is not contradictory, then a non-contradictory branch will give us an interpretation in which σ will be false.

2.12.30 Determine a non-satisfiable ground instance for the following sets of clauses:
(a) S1 = {{P(x, a, g(x, b))}, {¬P(f(y), z, g(f(a), b))}}
(b) S2 = {{P(x)}, {¬P(x), Q(f(x))}, {¬Q(f(a))}}
(c) S3 = {{P(x)}, {Q(x, f(x)), ¬P(x)}, {¬Q(g(y), z)}}

Solution: Determine the corresponding Herbrand universe and then construct the corresponding semantic tree.

2.12.31 Let us create a PrL language L for the study of Euclidean geometry. The special symbols of L are:
P(x) : predicate symbol of arity 1, interpreted as "x is a point"
E(y) : predicate symbol of arity 1, interpreted as "y is a straight line"
I(x, y) : predicate symbol of arity 2, interpreted as "x belongs to y"
(a) What form does the interpretation of L take? Describe the universe of the interpretation as well as the interpretations of P, E and I; c(P), c(E) and c(I) respectively.
(b) If σ is a sentence of L,
(∀x) [P(x) → (∃y)(E(y) ∧ I(x, y))]
formulate its intuitive interpretation and examine the truth of σ.
(c) A predicate symbol of arity 2 is added to L, Π(x, y), with the intuitive interpretation "straight lines x and y are parallel". Describe an interpretation of the language L ∪ {Π}. Give a formal presentation of the axioms of L ∪ {Π}.

Solution: The basic axioms relative to points and straight lines of the plane are:
• There exists at least one point which does not belong to a given straight line.
• Two points belong to exactly one straight line.
• There exist at least two points on every straight line.
From these axioms we can prove, e.g., the theorem:
• There exist at least two straight lines passing through a point.
The parallelism of two straight lines is an introduced concept: two straight lines of the plane without any common point are called parallel. (Parallelism stricto sensu, as opposed to parallelism lato sensu, where two straight lines are called parallel if they coincide or have no common points.)
(a) An interpretation of L takes the form A = (A, c(P), c(E), c(I)), where
A = {x | x is a point of the plane} ∪ {x | x is a straight line of the plane},
c(P) = {x | x is a point},
c(E) = {x | x is a straight line},
and c(I) = {(x, y) | x is a point, y is a straight line, and x lies on y}
2.12.34 Assume a general unifier θ of a set of clauses S. Prove that:
θ is an MGU of S and θθ = θ   if and only if   for every unifier ω of S, ω = θω holds.

Solution: If θ is an MGU with θθ = θ, then for every unifier ω there is a substitution γ such that:
ω = θγ = (θθ)γ = θ(θγ) = θω
Conversely, if ω is a unifier and ω = θω, then for some substitution γ,
θ(θγ) = θθγ        (1)
will hold. Assume θ ≠ θθ. Then there exists a variable x and a term q such that x/y ∈ θ and x/q ∈ θθ, which means that (x/y)γ ∈ θγ and (x/y)γ ∉ θθγ, which gives a contradiction because of (1).
2.12.35 Determine a non-satisfiable ground instance of the set of clauses:
S = {{P(x, a, g(x, b))}, {¬P(f(y), z, g(f(a), b))}}

Solution: We note that we could apply resolution (and determine under which conditions S is non-satisfiable) if x were to be f(y), z were to be a, and x were to be f(a). We apply the substitution:
θ = {z/a, x/f(a), y/a}
S then becomes:
S' = {{P(f(a), a, g(f(a), b))}, {¬P(f(a), a, g(f(a), b))}}
S' is obviously the non-satisfiable ground instance we are seeking (why?).
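The same substitution can be obtained from PROLOG's built-in unification; the query below is only an illustrative check of ours, with the predicate written in lower case as PROLOG requires.

?- p(X, a, g(X, b)) = p(f(Y), Z, g(f(a), b)).
X = f(a)
Y = a
Z = a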
2.12.36 Let us suppose that:
(a) There exists a dragon.
(b) The dragon sleeps in his cave or hunts in the wood.
(c) If the dragon is hungry, then he cannot sleep.
(d) If the dragon is tired, then he cannot hunt.
Apply resolution in order to answer the following questions:
(i) What does the dragon do when he is hungry?
(ii) What does the dragon do when he is tired?
(iii) What does the dragon do when he is hungry and tired?

Solution: We introduce the predicates:
Dragon(x) : x is a dragon
Can(x, y, z) : y can x in z
Does(x, y, z) : y does x in z
Hungry(x) : x is hungry
Tired(x) : x is tired
We also assume:
Can(x, y, z) ← Does(x, y, z)        (*)
We convert assumptions (a), (b), (c) and (d) into clauses:
(a) Dragon(A) ←
(b) Does(sleeps, A, cave), Does(hunts, A, wood) ←
(c) ¬Can(sleeps, A, cave) ← Hungry(A)
(d) ¬Can(hunts, A, wood) ← Tired(A)
We then convert assumptions (a) to (d) and (*) into set-theoretical forms:
(1) {Dragon(A)}
(2) {Does(sleeps, A, cave), Does(hunts, A, wood)}
(3) {¬Hungry(A), ¬Can(sleeps, A, cave)}
(4) {¬Tired(A), ¬Can(hunts, A, wood)}
Note that the inconsistency of clauses (1) to (8) was proved without the use of clause (8), which was also our goal. According to the data, the situation of a both hungry and tired dragon is contradictory. We thus cannot decide what the dragon will do when it is both hungry and tired.
2.12.37 Assume the predicates:
T(x, y, u, v) : interpreted as "xyuv is a trapezium and x, y, u and v are the upper left, upper right, lower right and lower left vertices respectively"
P(x, y, v, u) : interpreted as "the straight line sections xy and vu are parallel"
E(x, y, z, u, v, w) : interpreted as "the angle xyz is equal to the angle uvw"
and the sentences:
A1 : (∀x)(∀y)(∀u)(∀v) [T(x, y, u, v) → P(x, y, v, u)]    (definition of a trapezium)
A2 : (∀x)(∀y)(∀u)(∀v) [P(x, y, v, u) → E(x, y, v, u, v, y)]    (if xy is parallel to vu, then the angle xyv equals the angle uvy)
A3 : T(a, b, c, d)    (abcd is a trapezium)
Prove with resolution that A1, A2 and A3 imply E(a, b, d, c, d, b).

Solution: We just need to prove that the sentence
A1 ∧ A2 ∧ A3 ∧ ¬E(a, b, d, c, d, b)        (*)
is not satisfiable (why?). The set-theoretical form of (*) is:
S = {{¬T(x, y, u, v), P(x, y, v, u)}, {¬P(x, y, v, u), E(x, y, v, u, v, y)}, {T(a, b, c, d)}, {¬E(a, b, d, c, d, b)}}
III Logic Programming: The PROLOG Paradigm

Τὸ ὑφεστηκὸς δεῖ τέλος ἐπιλογίζεσθαι καὶ πᾶσαν τὴν ἐνάργειαν, ἐφ' ἣν τὰ δοξαζόμενα ἀνάγομεν· εἰ δὲ μή, πάντα ἀκρισίας καὶ ταραχῆς ἔσται μεστά.

We must take into account the end sought and all clear evidence of sense to which we refer our opinions. For otherwise everything will be full of uncertainty and confusion.
Epicurus

3.1 PROLOG and Logic Programming

3.1.1 Introduction

In the previous chapters, within the context of Propositional Logic and Predicate Logic, we examined and analyzed the notions which are critical to the study of Logic Programming. Our goal now is to present Logic Programming as a direct application of all the elements of Mathematical Logic that we mentioned in the two previous chapters, emphasizing wherever possible the direct relationship between Mathematical Logic and the programming language PROLOG.
The term Programming Language does not exactly correspond to reality. PROLOG, like FORTRAN and PASCAL, is not a language; it is only a notation [DiSc90] by means of which we represent and formalize data and procedures. The so-called "programming languages" differ greatly from the natural languages in terms of their expressive power, their usage, and their interpretation. We will continue utilizing the term Programming Language, because it is an established term, but we will always be referring to the set of expressions which are present in every kind of programming.

The principal role of Propositional Logic and Predicate Logic is the determination of mathematical models, which make possible the proof of truth or falsehood of a proposition derived from a set of other propositions and assumptions. Within this context, we have already dealt with models which describe the rule of modus ponens, with proofs using tableaux, and with proofs by resolution. This last method is the strongest link between Mathematical Logic and Logic Programming. At the same time, it somehow bridges the gap between the abstract mathematical models and the operation of a programming language such as PROLOG.

In the first chapter we studied resolution as a proof procedure of Propositional Logic. Furthermore, in the second chapter, by studying the concepts of canonical forms, Skolem forms, and the notions of substitution and unification, we extended resolution into a proof procedure in Predicate Logic as well. Now we will define a programming language, by means of which we will represent and elaborate on the everyday practical experience and events of our world, using as many elements from the science of mathematics as we can. The dominant characteristics of this language are the formalization of the given information, the operation of the inference mechanism, as well as the relationship between this mechanism and Mathematical Logic. The representation is formally defined by Horn clauses, which not only describe in simple terms the events of our world, but can also be transformed easily into Skolem Normal Forms. The inference mechanism of the language, with substitution, unification and resolution, offers an algorithmic mathematical tool for data management and derivation of conclusions from a set of Horn clauses. We know that resolution examines the data of a problem and guarantees that if there exists a solution, this solution will be found. Having this in mind, we are in a position to make a first intuitive presentation of PROLOG by means of an example.
We assume the following data expressed as Horn clauses of Predicate Logic:

P1  : Person(Marcus) ←
P2  : Person(Aurelius) ←
P3  : Mortal(x1) ← Person(x1)
P4  : Born_Pompei(Aurelius) ←
P5  : Born_Pompei(Marcus) ←
P6  : Born(Aurelius, 45) ←
P7  : Born(Marcus, 40) ←
P8  : Died(x2, 79) ← Born_Pompei(x2)
P9  : Erupted(volcano, 79) ←
P10 : Dead(x3, t2) ← Mortal(x3), Born(x3, t1), t2 - t1 > 150
P11 : Not_Alive(x4, t3) ← Dead(x4, t3)
P12 : Dead(x5, t4) ← Not_Alive(x5, t4)
P13 : Dead(x6, t6) ← Died(x6, t5), t6 > t5
P14 : ← Not_Alive(x, 1987)

where, for uniformity with the PROLOG formalism which will follow, we write the indices beside the variables and terms; for example, x₁ becomes x1. The first thirteen Horn clauses represent the information which we give the program. P14 is the query we make: who is not alive in 1987? The corresponding set-theoretic form of the sentences P1, ..., P14 is:

(1)  {Person(Marcus)}
(2)  {Person(Aurelius)}
(3)  {Mortal(x1), ¬Person(x1)}
(4)  {Born_Pompei(Aurelius)}
(5)  {Born_Pompei(Marcus)}
(6)  {Born(Aurelius, 45)}
(7)  {Born(Marcus, 40)}
(8)  {Died(x2, 79), ¬Born_Pompei(x2)}
(9)  {Erupted(volcano, 79)}
(10) {Dead(x3, t2), ¬Mortal(x3), ¬Born(x3, t1), ¬(t2 - t1 > 150)}
(11) {Not_Alive(x4, t3), ¬Dead(x4, t3)}
(12) {Dead(x5, t4), ¬Not_Alive(x5, t4)}
(13) {Dead(x6, t6), ¬Died(x6, t5), ¬(t6 > t5)}
(14) {¬Not_Alive(x, 1987)}
The corresponding representation in PROLOG is:

(1)  person(marcus).
(2)  person(aurelius).
(3)  mortal(X1) :- person(X1).
(4)  born_Pompei(aurelius).
(5)  born_Pompei(marcus).
(6)  born(aurelius, 45).
(7)  born(marcus, 40).
(8)  died(X2, 79) :- born_Pompei(X2).
(9)  erupted(volcano, 79).
(10) dead(X3, T2) :- mortal(X3), born(X3, T1), gt(T2 - T1, 150).
(11) not_Alive(X4, T3) :- dead(X4, T3).
(12) dead(X5, T4) :- not_Alive(X5, T4).
(13) dead(X6, T6) :- died(X6, T5), gt(T6, T5).
(14) ? not_Alive(X, 1987).

where gt(X, Y) means X > Y. We use the PROLOG notation, writing constants with lower case letters while variables are written in upper case letters. The period symbol "." is used in most PROLOG versions to denote the end of a Horn clause input. PROLOG, similarly to Predicate Logic, tries to prove the goal. In its answer, if the goal is valid, PROLOG will instantiate; that is, it will give all possible values of the variables which validate the goal. Let us examine how this happens and how it relates to substitution, unification and resolution.
from (11)and (14).
PROLOG and Logic Programming
207
Step 2 :
Our goal is now: Dead(x, 1987) PROLOG, just like resolution does, now tries to unify the goal "Dead(x, 1987)" with a head of some clause from the data. This is achieved through the substitution 02 = { x 3 / x , t2/1987} in formula (10).
Step 3 :
The new goal for
PROLOG
Mortal(x),
is the triplet of assumptions from (10), i.e.:
Born(x, tl)
and
g t ( 1 9 8 7 - tl, 150)
which it tries to satisfy in tile order that they appear in rule (10). The resolution method respectively, would give us rule (16): (16)
{--Mortal(x), --Born(x, t l ) , ."gt(1987 - tl, 150)}
Step 4 :
PROLOG unifies "Mortal(x)" with the head of sentence (3) by using the substitution 03 = { x l / x } . Likewise, resolution would use the same substitution to unify "Mortal(x)" with "Mortal(xl)" of (3).
Step 5 :
Our new goal is now: Person(z) which in turn is unified through the substitution 04 = {x/Marcus} with the fact described by formula (1). Using the resolution method we would have: (17)
{-~Person(x),-~Born(x, t l ) , . " g t ( 1 9 8 7 - t l , 150)} from ( 3 ) a n d (16).
(18)
{."Born(Marcus, t l ) , . " g t ( 1 9 8 7 - t l , 150)}
from ( 1 ) a n d (17).
208
Logic Programming: The PROLOG Paradigm
Step 6 :
PROLOG continues its proof procedure and tries to satisfy the next goal from the triplet of goals presented in step (3), which now becomes: Born(Marcus, tl) The unification succeeds through the substitution 05 = {tl/40} with the head of rule (7). The resolution, correspondingly, would conclude the formula (19)
{-,gt(1987- 40,150)}
from (7) and (18).
Step 7:
The last goal to be satisfied from the triplet of goals from step (3) is gt(1987 - 40,150) which is satisfied by P R O L O G since 1 9 8 7 - 40 > 150.
Step 8 :
The goal of step (2), "Dead(x, 1987)", has been satisfied with the instantiation x = Marcus. The goal of the first step, "Not_Alive(x, 1987)", is therefore satisfied with the same instantiation. In this step, resolution deduces clause
(2o)
D
Step 9 . . .
:
PROLOG
does not stop here.
from (19).
It goes on, trying to unify the most recently
selected goal in a different manner, in order to find all possible solutions. Once the last goal has been satisfied in every possible way and all the valid solutions have been produced, the procedure is repeated for the immediately preceding
PROLOG and Logic Programming
209
goal. When this one is satisfied in some different fashion, tile program goes on by satisfying again the initial "last" goal using the substitutions which were made during the one before last goal, and continues along these lines: the final goal " g t ( 1 9 8 7 - 40,150)", cannot be unified in any other way, therefore PROLOG tries to unify, using a different substitution this time, the immediately preceding goal "Born(Marcus, 40)". With this method it eventually unifies "Mortal(x)" by means of the substitution 0 = {x/Aurelius} and with fact (2). It produces, just like in step (6), but for x = Aurelius this time, the new solution: Not_alive(Aurelius, 1987) This slick mechanism of P R O L O G which finds all the possible solutions is called b a c k t r a c k i n g and will be analyzed in detail in its respective section.
3.1.2 Logic and Programming The difference that distinguishes Logic Programming and the P R O L O G language from traditional programming and languages like F O R T R A N , BASIC, COBOL, and PASCAL, etc., lies in the fundamental principles of logic programming; both in the
design and tile implementation of a logic program. A program consists of two building elements, Logic and C o n t r o l [Kowa79, Lloy87]. By the term "Logic" we denote all those principal notions that determine W H A T a program is doing, whereas by the term
"Control" we mean all those
syntactic notions that determine HOW it is doing it (for example, the algorithm which solves a problem). If we wished to describe this using a single equation, we would write: PROGRAM
=
LOGIC + CONTROL
A traditional program written in BASIC or in some other traditional programming language consists of c o m m a n d s
which describe the actions which have to be
executed step by step by the computer in order for the program to produce the desired result. For example, the command in a BASIC program 10
LET
X = X + 5
increases by 5 the content of the memory address which corresponds to the variable X.
210
Logic Programming: Tile PROLOG Paradigm
Programming languages like BASIC are characterized by i m p e r a t i v e commands which describe the s t e p b y s t e p behaviour of the program, so that, after a finite sequence of those commands, the correct and expected result is produced. The structure of these commands during the design of the program comprises the element of CONTROL. Likewise, the element of LOGIC in the above command is the expression "X + 5", which is not a command by itself, but only a small d e s c r i p t i v e program. This program directly describes the arithmetic value which has to be computed and indirectly only how this value will be computed. Therefore, BASIC is an i m p e r a t i v e language which possesses a d e s c r i p t i v e element.
On the contrary, a PROLOG program could contain the sentence: n i c e ( X ) :- l o v e s ( X , p e o p l e ) which is simply a logical definition of the relationship between predicates "nice" and "loves". The design, therefore, of a PROLOG program is based upon the correct selection of predicates which define the relations between objects, such that the connection between the input data and the information that we expect as output is fully defined. In general, a program in a traditional programming language expresses a function from the input to the output of the program, whereas a program in a Logic Programming language expresses a relation between the data [Watt90].
As re-
lations are more general than functions (functions arc necessarily relations, but relations are not in general functions), Logic Programming has greater capabilities than traditional programming.
The selection of the predicates, and the relations which are expressed by these predicates, constitute the logic element in PROLOG. The control is found not only in the order in which we arrange the clauses, but also in a number of its structural control elements, like the cut "!" (section 4.3), which can be used as syntactic objects of PROLOG. PROLOG is therefore a d e s c r i p t i v e language which contains an i m p e r a t i v e structural element as a control element.
PROLOG and Logic Programming
211
Let us take as an example a program which reads two numbers and prints the greater of them. In order to make the difference between traditional programming and Logic Programming more concise, we will first give the program in BASIC and then exactly the same program in PROLOG: Program
in BASIC
10
INPUT "NUMBERI",X
20
INPUT "NUMBER2",Y
30
IF X > Y THEN 60
40
PRINT Y
50
GOT0 70
60
PRINT X
70
END
Program
in PROLOG program :- write("NUMBERl"), r e a l ( X ) , n l , write("NUMBER2"), r e a l ( Y ) , n l , greater(X, r, Z), write(Z). g r e a t e r ( X , X, X ) . g r e a t e r ( X , Y, Y ) : - X < Y. g r e a t e r ( X , Y, X ) : - Y < X . ? program.
" r e a l " , subsection 3.5.4, is a special predicate of PROLOG which checks whether a number is real. Using "nl", every number is written in a different output line. The program in BASIC is just a sequence of commands. These commands, which are executed in the order that the program indicates, constitute the control, i.e., the design and the flow of the program. The logic element in the BASIC program is found in the relation "<". On the contrary, the PROLOG program is a collection of clauses which fully describes the relation which completely determines the order of two numbers, i.e., the predicate " g r e a t e r " . This collection of clauses expresses the Logic, which also plays the dominant role in a PROLOG program, whereas the Control is found in the order in which we arrange and define the predicates.
212
Logic Programming: The PROLOG Paradigm
A typical program consists of the data we want to process, and the sequence of actions we want to perform. Once we formulate the facts of the problem and the procedures which will give us the results, the control structure specifies the order in which the different procedures must be executed.
In other words, we have a
clearly defined sequence of procedures, some of which are repetitive (e.g., loops). The selection of the formalism which we will use is very important, because the automation of processing depends on it. The programmer is totally responsible for this selection. Therefore, a fundamental element of a typical program is the flow, that is, the order of the procedures according to which they will be carried out.
3.1.3 Logic Programming One of the major drawbacks of programs in traditional programming is that they require permanent modification and adaptation to new demands. However, any partial modification in a classic program, usually affects its overall flow. For this reason checking an even slightly modified program is a demanding and timeconsuming task. One solution to the problems of traditional programming was provided by Logic Programming through the language of Predicate Logic. A problem which is to be solved using the help of the computer is introduced as a collection of Predicate Logic clauses. Some of these clauses express facts; for instance, in the example of subsection 3.1.2, the clause " greater(X, X, X) ".
Others express rules for
dealing with facts, as in the clause " g r e a t e r ( X , Y, X ) : - X > Y " from the same example, and still others express the queries which have to be answered giving the solution to the problem. Thus in Logic Programming, and specifically in PROLOG, after applying the rules, the solution to the problem will either be an answer to the queries of the form yes or no, or a set o f v a l u e s which will form the desired solution. The commands therefore in Logic Programming are not of an exclusively functional nature, as in classical programming; but are predicates, elements of the Predicate Logic language, which are either true or false according to the interpretation of their terms. The truth or falsehood of certain predicates according to the interpretations of the variables provides the precisely corresponding answer of the form "yes" or "no" to the questions of the program.
PROLOG and Logic Programming
213
The answers to the queries in tile program arc given ONLY in relation to the information which we have given the program. Therefore, if we ask the program in the example of subsection 3.1.1 if 4 is a perfect square, that is, the square of an integer, tile program, not having been given tile predicate which characterizes the perfect square, will answer "no". This means that the goal: ? perfect_square(4). fails, Remark 1.9.9.
We have to pay attention here.
The failure of PROLOG
to verify tile goal does not mean that the goal is really incorrect, it only means that using the facts of the program and the inference mechanism that we have, we cannot conclude that this specific goal is true. This is the so called C l o s e d World Assumption
(see also subsection 3.6.1), [Watt90]:
A predicate Q of Predicate Logic is assumed false by the program if tile program cannot prove that Q is true. Therefore, every PROLOG program appears as a complete description of tile universe, and tile universe is fully characterized by the data of the program. Whatever is important in the universe is described by the data, and the relevant goals succeed; whereas whatever is not specified in the program is not known, and tile relevant goals fail.
3.1.~ Historical Evolution After Herbrand's algorithm (1930), using which we carl find an interpretation which contradicts a formula ~ of Predicate Logic if ~o is not logically true, Gilmore (1960) was the first to try to implement Herbrand's method on a computer [ChLe73].
Gilmore's implementation was based on the fact that a clause
is logically correct, Definition 2.5.19, if and only if its negation is contradicted in some interpretation of the language. Consequently, in his program, a procedure exists which forms PrL sentences which arc periodically checked for inconsistency, Definition 2.5.16. However, the program of Gilmore was not capable of analyzing very complex PrL formulae. After the publication of Robinson's [Robi65] unification algorithm, Loveland (1970) was the first to use linear resolution with a selection function. This is tile resolution during which we pick a clause, we resolve it with another, we resolve the resolvent (Definition 1.9.11) with a third clause and we continue always resolving the last resolvent until we find an empty programming clause [ChLe73, Lloy87].
214
Logic Programming: The PROLOG Para(ligm
The first steps (1972) in Logic Programming are attributed to Kowalski and Colmerauer [StSh86]. First Kowalski expressed the procedural interpretation of the Horn clause logic and showed that a rule of tile form: A:- B1, B2,
... , B n
can be both read and executed by a recursive programming language. At the same time Colmerauer and his research team developed in FORTRAN a programming language to be used as a theorem prover, which implemented tile procedural interpretation of Kowalski. The founding stone of PROLOG (pROgrammer en LOGique) was set! The first PROLOG interpreter, i.e., tile program which translates a text understandable by people (source file) to a language understandable by tile nlachine (machine code), was written by Roussel in Marseilles in 1972 using ALGOL, and was based on the theoretical works of Colmerauer. However the question of whether Logic, and consequently Mathematics, can be used as a programming language, must be answered negatively [Watt90]: A problem expressed in a programming language, has to be solvable by a computer, whereas in Logic and Mathematics there are problems which do not have algorithmic solutions, and hence the computer is not able to solve them.
One example of this is the decision problem in Predicate Logic, section 2.11. Hence Logic Programming, which means the languages which are based on Logic, is always restricted to a part of logic formulae; for instance PROLOG only deals with Horn clauses, i.e., a subset of the PrL sentences. Many researchers |lave recently focused in tile relations between several programming languages; such as Func'tional Programming, i.e., programming based oil A-calculus [Thay88], Logic Programming, and Object-Oriented Programming, i.e., programming based on the theory of types, according to which every object is represented by a set of properties, values and procedures [Thay88]. A lot of effort has been put into finding suitable transformations and equivalences that enable the transition from one programming language to others, as well as the use of different formalisms in one program at the same time. Most of the Logic Programming languages, which in fact are logic theorem proving systems using tile resolution method, are the first steps towards optimal Logic Programming.
Program Structure
215
For optimal programming, however, we have to solve the problem of Control completely, t h a t is: (i)
Have available more satisfactory and more slick Control procedures in every language of Logic P r o g r a m m i n g
(ii)
Be able to transfer Control during the design and execution of a program exclusively to the computer.
The solution of tile Control Problem will enable filture programmers and users to limit their interaction with tile computer to thc complctc and concise definition of the problem. The program in turn will take over the solution of the problem and the total control of tile program flow.
3.2 Program Structure 3.2.1
The Program Elements
A PROLOG program consists of d a t a and queries to which the program has to give an answer. The data and queries are Horn clauses, Definition 2.4.4. The data have one of tile following forms:
A ( c l , . . . , ck) +--
(*)
or as we symbolically write in PROLOG
A ( c l , . .. , c k ) . where A is a predicate and c l , . . . , ck are constants of
PrL.
The pcriod symbol "
signifies the point where a given formula stops.
A ( a l , . . . ,at:) e- B l ( b l , . . . b t ) ,
...
, Bj(dl,...
,dm)
(**)
or as we symbolically write in PROLOG
A(a,,...
,at:) :- B l ( b l , . . . ,bt) ,
...
, B j ( d l , . . . ,dm) .
where A, B 1 , . . . , Bj are predicates and a l , . . . , ak, b l , . . . , b e , . . . , d l , . . . ,dm are terms (Definition 2.2.1) of Predicate Logic.
216
Logic Programming: The PROLOG Paradigm
Data of the form (.) are the f a c t s of the prograin (Definition 2.4.6). Data of the form (**) are called the r u l e s of the program. Tim rules express the relations between the predicates occurring in the facts. The rules, which are implicative formulae of Predicate Logic on which the program applies unification and resolution, form the p r o c e d u r a l called the h e a d , and
p a r t of the program.
B~(b~,...,b~), . . . , B j ( d ~ , . . . , d m )
A ( a l , . . . ,ak)
is
is called the tail or
b o d y of the rule. The facts and the rules forn~ the d a t a b a s e of the program [Brat90]. The q u e r i e s are goals (Definition 2.4.5), i.e., Horn clauses of the form: B,
(b,,
. . . , be),
...
,
Bj(dl,
. . . , din)
or symbolically in PROLOG:
? Bl(bl,...,be),
...
, Bj(dl,...dm).
where B 1 , . . . , B j are predicates and a l , . . . a k , b l , . . . ,be,... , d l , . . . ,d,,~ terms (Definition 2.2.1) of Predicate Logic. Tile period ". " following each fact, rule or query denotes the end of the corresponding Horn clause. Let us study the PROLOG program of Example 2.4.9 with the query set in E x a m p h ~. 2.10.12, "What can Peter steal'?":
thief(peter). likes (mary, food). likes (mary, wine). likes (peter, money). likes(peter, X):- likes(X, wine). can_steal(X,g): thief(X), likes (X, Y). ? can_steal(peter, Y). The facts are the first four Horn clauses, the next two clauses are rules, and the last clause is the goal (the query) of the progrmn. PROLOG will respond with Y = money Y = mary which means that Peter ('.an steal money and Mary, something which we already knew from Example 2.10.12.
Program Structure
217
Hence a PROLOG program is a collection of facts, rules arid queries, withollt any special directions for the flow and control of the program. This structure exhibits the great slickness of PROLOG: extensions and improvements can be achieved by adding or deleting facts and rules using the c o m m a n d s " a s s e r t " and " r e t r a c t " which we will study in subsection 3.5.1. The facts in a PROLOG program can be interpreted as d e c l a r a t i o n s regarding the universe of the program, hence they are allegations which describe a specific world. The rules [Xant90] can be interpreted riot only as declarations, but also as p r o c e d u r e s of the program. Classical p r o g r a m m i n g languages essentially allow a procedural interpretation, whereas PROLOG is probably the only programming language [Xant90, Brat90, C1Mc84], which allows both a p r o c e d u r a l
and
a declarative interpretation.
3.2.2
The Facts
A s we saw in subsection 3 . 2 . 1 , a fact is a relation, i.e., a predicate, between
a number of objects, which are terms of a P r L language.
Thus, a fact is a P r L
sentence (Definition 2.2.17). The facts in P R O L O G express directly corresponding facts of the spoken language: Spoken language
PROLOG
the canary is a bird John is a man Peter likes Mary Peter can steal Mary Mary's parents are John and Kate
bird(canary). man(john). likes (peter ,mary). can_steal (peter ,mary). parents (mary, john,kate).
The names of predicates and constants always start with a lower case letter. The words in predicates which are expressed with more than one word, are connected with an underscore, " _ ".
Periods denote the end of each fact.
The position of the terms in the predicates is ordered. The fact"
likes(mary, peter). signifies that" "Mary likes Peter" and it is different from the fact:
likes(peter, mary).
218
Logic Programming: The PROLOG Paradigm
The selection of predicates and names which will be used is carried out by tile programmer. His interpretation of facts is therefore important. For example, the fact " f a t h e r ( j o h n , c h r i s ) "
can be interpreted either as "John is the father of
Chris" or as "Chris is the father of John". options, as long as we do not pick both!
We can pick either one of the two We should not expect correct results
from a program which contains the facts: father(john, chris). father(george, nick). when the first fact is interpreted as: "the father of John is Chris" yet the second fact as: "the father of Nick is George"
3.2.3
The Rules
A rule in P R O L O G is an implicative PrL formula (subsection 3.2.1). The rules express information and relations more general and more complex than the information expressed by the facts. For example, f a t h e r ( c h r i s , george). son(nick, john). f a t h e r ( X , Y)"- son(Y, X). The first two clauses are facts.
They express the relationship between specific
people, Chris--George and Nick - J o h n respectively. The third clause expresses a general inferential rule which has to do with the predicates of the data: if Y is the son of X, then X is the father of Y. The variables in both the rules and the queries are written in capital letters. The period denotes the end of each rule. The head is separated from the tail with the symbol " ' - " (neck symbol). If the tail of the rule consists of more than one predicate, then these predicates are separated from each other by a comma " , ".
Program Structure
219
Let us look at an example:
likes(john, ice_cream). likes (john, mary). likes(john, backgammon). food(ice_cream). eats(X,Y)" food(Y), likes(X,]/). Here tile tail of tile rule is composed of two predicates, "food(Y)"
and
" l i k e s ( X , ]/)"
This rule conjectures that for all X and Y (Definition 2.4.1), if Y is food and X likes Y, then X eats Y (Definition 2.4.1 (ii)).
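A small hedged illustration (the queries are ours): both conditions of the body must succeed for the rule to apply.

?- eats(john, ice_cream).
yes                          % food(ice_cream) and likes(john, ice_cream) both hold

?- eats(john, backgammon).
no                           % likes(john, backgammon) holds, but food(backgammon) fails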
For example, the
rules:
likes(X, Y ) " - likes(Y, X ) . likes(X, Y)"- good(Y). can be written as one rule:
likes(X, Y ) : - likes(]/,X); good(]/). and tile meaning of it is "X likes Y, if Y likes X, or if Y is good". PROLOG emlmerates tile facts and tile rules of every database. For example, in the previous database, " l i k e s ( j o h n , i c e _ c r e a m ) " database and " e a t s ( X , Y )
:
is tile first Horn clause of the
food(Y) ; l i k e s (X , Y) " is the fifth. As we will see
in subsection 3.4.1, the order in which the facts and the rules arc entered in the database is very i m p o r t a n t for the derivation of conclusions.
3.2.~ The Queries The q u e r i e s or g o a l s in PROLOG, are questions related to the conditions and the predicates of Predicate Logic which a p p e a r in the d a t a of the program. For example, for the d a t a of subsection 3.2.2 we can ask:
(I0)
? likes(gas, X). ? like (X,Y), ood(Y).
220
Logic Programming: The PROLOG Paradigm
and PROLOG will respond to tile first query with X -
ice_cream
X
- mary
X
- backgammon
and to tile second question with X = john Y = ice_cream The queries begin with a
~ ? "
, question mark, and end with a period. A query,
for example ? l i k e s ( g a s , X). can be answered in a lot of different ways. A c o m p l e x q u e r y , like ? l i k e s (X, Y) , f o o d ( Y ) . has more than one subgoal, which are separated from one another with a comma. In the complex queries, the same variables always refer to the same terms. For example, we asked ? l i k e s (X, Y) , food(Y). hence we asked the program to find X and Y, such t h a t X likes Y and Y is a food. The query
kes(X,Y), food(X). would fail, t h a t is PROLOG would not succeed in finding a suitable value for X, and would therefore answer "no". Let us for example examine the description of a party oil the facing page. Queries like (1), (2) and (3), are d a t a v e r i f i c a t i o n q u e r i e s , whereas queries similar to queries (4) through (7), are d a t a m a n a g e m e n t (6) and (7) are c o m p l e x or c o n j u n c t i v e q u e r i e s .
q u e r i e s . Queries (3),
In queries (2), (3) and (7) the PROLOG program failed (Remark 1.9.9 (2)): For query (2) there is no information concerning Jack and Mary in the data of the program. For query (3) the first subgoal succeeds, because Jane drinks wine, however, there is no information regarding the type of music the Jane listens to. Hence the second subgoal and consequently (why?) goal (3) fails. Query (7) fails because in the data for tile party there are no values for X and Y which satisfy all three subgoals at the same time.
3.3 Syntax of Data 3.3.1
The Objects of P R O L O G
As we have already said, the language which we use in PROLOG is a subset of the language in Predicate Logic, enriched with symbols which satisfy certain needs. Hence, the terms in P R O L O G are v a r i a b l e s , c o n s t a n t s , and f u n c t i o n s on variables and constants (Definition 2.2.1). The constants can be a t o m s , n u m b e r s , or s e q u e n c e s of s p e c i a l c h a r a c t e r s .
The fundamental objects of the PROLOG
language are the predicates and the lists. We will proceed by examining the terms and the basic objects of PROLOG.
3.3.2 The Alphabet of P R O L O G The PROLOG a l p h a b e t consists of: (i)
Upper and lower case letters of the English alphabet
ABCDEFGHIJKLMNOPQRSTUVWXYZ ab cde f g hij k lmn opqrst (ii) (iii)
The digits Symbols like
uv
w xyz
0 1 2 3 4 5 6 7 8 9 ! ~ $ % ^ &
9
(
) _
=
-
+
<
depending on tile dialect of the PROLOG which we are using.
(iv)
Characters which operate without being directly displayed on the screen, like the space or the new line.
Syntax of Data
223
3.3.3 The Variables The v a r i a b l e s in P R O L O G are letters or words w r i t t e n in capital letters of tile P R O L O G alphabet.
Variables always s t a r t with a capital letter or an underscore.
For example X,
Y,
Result,
Father,
Who,
_mary,
A3
are variables. On tile contrary x,
y,
result,
mary
are NOT variables. Moreover the following are not considered as variables: lst_athlete "Result" 3579
(begins with a digit) (is enclosed in quotes) (is w r i t t e n with numbers)
Hence, if we write
(I) (2)
woman(Mother). woman(mother).
fact (1) tells P R O L O G t h a t every constant which exists in the p r o g r a m is a woman (wiLy?), whereas fact (2) denotes t h a t the specific c o n s t a n t "mother" is a woman. P R O L O G offers one additional possibility, the use of a n o n y m o u s ,
nameless
variables, which are variables whose n a m e we do not need; we siInply declare their position with an underscore. For example, writing
mother(helen,_) in PROLOG, we declare t h a t Helen is the m o t h e r of some person who is not really of any use for the rest of the program. Asking
? parents(nick,_, Mother). we are only interested in tile m o t h e r of Nick and we denote his father with an a n o n y m o u s variable. Most of tile time we use an a n o n y m o u s variable when this specific variable a p p e a r s in the p r o g r a m only once [Xant90]. tile relevant section, by using a n o w m o u s unification and we save time.
As we will see in
variables we simplify the process of
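As a small sketch (the mother/2 facts below are ours, not the program's), an anonymous variable is the natural choice when we only care that some value exists:

mother(helen, nick).
mother(helen, anna).

has_child(X) :- mother(X, _).     % "X is the mother of somebody"

?- has_child(helen).
yes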
224
3.3.~
Logic Programming: The PROLOG Paradigm
The Constants
The c o n s t a n t s , which are strings of PROLOG symbols, are divided into three classes, according to their form [Brat90]. Therefore, the constants in P R O L O G can be (i)
Atomic terms
(ii)
Numbers
(iii)
Sequences of special characters
An a t o m i c t e r m is a sequence of alphanumerical characters and " _ ", hence it consists of letters, digits and probably underscores (but not as the first character because an underscore at the beginning of the sequence denotes a variable!). Tile starting symbol of an atomic term is always a lower case character. For example, y342,
tom,
red_apple
are atoms. Tile following are N O T atomic terms Peter
(starts with a capital)
_x5
(starts with an underscore)
3t
(starts with a digit)
a.32
(contains a period)
Any sequence of symbols enclosed in single quotes is also an atom, e.g.: 'x',
'Peter',
'5tx',
'John with the nice tic'
Tile n u m b e r s are integers or reals, for example 211345,
-32,
0.000013,
-5.7
Initially, PROLOG was designed to be a language for logic calculations, and its ability to carry out arithmetic operations is relatively limited. However, tile most recent edition, P R O L O G III [Cohn90], has great abilities for mathenmtical calculations. Using the s p e c i a l c h a r a c t e r
s e q u e n c e s in PROLOG, we construct symbols
which have a specific meaning attached. For example, >
or
can be used as symbols of implication, -+ .
>
Syntax of Data
3.3.5
225
The Predicates
The predicates, Definition 2.2.1, express relations and properties of terms. For example, the predicate g r e a t e r ( + ( X , 2), : (Y, 3)) signifies that the term +(X, 2), or equivalently X + 2, is greater than tile term :(Y, 3) or Y : 3 ,
Definition 2.2.1. The symbols " + " and " : " correspond to the
addition and division symbols respectively, X and Y are variables, and 2 and 3 are constants. Predicates in PROLOG start with a lower case letter. In PROLOG, we have the ability to construct c o m p o u n d
predicates,
or
s t r u c t u r e s , from predicates which already exist in the program. For example, let us take the facts wtlich describe a family: father(nick). mother(mary).
Child( m e).
(,)
child(tim). In this example, " f a t h e r " , "mother" and " c h i l d " are the predicates, and"nick", "mary", "anne" and "tim" are the constants. The facts (.) can be expressed using only one predicate:
family(father(nick), mother(mary), children(anne, tim)).
(**)
In the expression (**), "family" is tile predicate, but " f a t h e r " , "mother" and " c h i l d r e n " are no longer predicates, but functions [Thay88]. This means that " f a t h e r ( n i c k ) " , "mother(mary)", and " c h i l d r e n ( a n n e , t i m ) " in (**), as specific values of functions, are considered as constants. Therefore, " f a t h e r ( n i c k ) " in the facts (.) is a different syntactic object from " f a t h e r ( n i c k ) " in (**). In tile same fashion, the fact l i k e s ( g e o r g e , ice_cream). where " l i k e s " is the predicate and "george" and "ice_cream" are constants, can also be expressed using the predicates
i c e_cre am (i ike s (ge o rge))
(1)
226
Logic Programming: The PROLOG Paradigm
or
(2)
george( i ikes ( i c e_c re am))
where in (1) "ice_cream", " l i k e s " and "george" are predicate, function and constant respectively, whereas in (2) "george", " l i k e s "
and "ice_cream" are
predicate, function and constant respectively. PROLOG identifies if a given expression is a constant, a function or a predicate,
according to the position of the expression in a fact, a rule or a query.
This
identification method will be seen better in tile next subsection, where we will represent predicates as trees. The great advantage of the use of compound predicates can also be seen in the family example of (,) and (**). C o m p o u n d predicates offer an abbreviated expression and slick manipulation of tile data. For example, if we are interested in the parents of the family, the corresponding queries for program (,) will be: ? father(X). ? mother(]/). whereas by using tile complex predicate in (**) we would have only one query: '.~ family(father(X),mother(Y),_
).
where the anonymous variable (subsection 3.3.3) indicates that we are not interested in the values of the function "children".
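A hedged usage sketch: by placing variables inside the compound term, the same fact also answers questions about the children.

?- family(_, _, children(First, Second)).
First = anne
Second = tim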
3.3.6 Tree Structure of Predicates

Using a first, simplified approach of syntactic analysis, the sentences in a natural language are composed of a subject, a verb phrase and the object or the predicate. The subject and the object are name expressions which consist of an article, an adjective and a noun, whereas verb phrases consist of a verb and an adverb. Hence the phrase:

the blond child slowly closed the door        (1)

can be analyzed syntactically by the use of a tree [Thay88, Xant90], whose final nodes (Definition 2.8.9) are the exact words of phrase (1):
sentence
    subject
        article:   the
        adjective: blond
        noun:      child
    verb phrase
        adverb:    slowly
        verb:      closed
    object
        article:   the
        noun:      door
The same technique of syntax analysis is used for the predicates: each predicate corresponds to a tree. The origin of the tree, Definition 2.8.9, is the predicate under consideration. The internal nodes are the functions which are included in the predicate, and the final nodes are the variables and the constants which appear in the functions and the predicate. Therefore the data (*) of the previous section correspond to four trees:

father        mother        child        child
    nick          mary          anne         tim
whereas the complex predicate (**) corresponds to the tree:

family
    father
        nick
    mother
        mary
    children
        anne
        tim

The expressions "likes(george, ice_cream)", "ice_cream(likes(george))" and "george(likes(ice_cream))" have the corresponding structures:

likes
    george
    ice_cream

ice_cream
    likes
        george

george
    likes
        ice_cream

By using the tree structure of predicates, PROLOG controls the success of the unification, as we will see in section 3.4. After the unification of two predicates, the corresponding trees must be the same, node by node and edge by edge.
3.3.7 The Lists

Lists are simple catalogues of data enclosed in "[" and "]", for example:

[wine, soap, john, X, house]

These lists, which constitute a fundamental building block of LISP [WiBe89], have been embodied in PROLOG. Using lists we can collect a lot of information in one PROLOG object. For example, the facts:

likes(george, football).
likes(george, food).
likes(george, wine).

can be given using a single list as the fact:

likes(george, [football, food, wine]).

This means that the sequence of terms football, food, wine is represented by the object:

[football, food, wine]

In general, a list is composed of a sequence of terms. By using lists we achieve space efficiency and greater slickness when managing complex data.
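For instance, once the single fact above is in the database, a query on it binds the whole list to one variable (a hedged sketch; the variable name Hobbies is ours and the reply layout depends on the edition):

    ? likes(george, Hobbies).
    Hobbies = [football, food, wine]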
Lists are generally defined recursively:

(i)  The empty list, i.e., a list which does not contain any elements, is represented by "[ ]".

(ii) A non-empty list is composed of two terms:
     (1) The first term is called the head of the list.
     (2) The second term consists of the rest of the list, and is called the tail or body of the list.

The head of a list can be an arbitrary PROLOG term, a constant, a variable or a function. The tail of a list has to be a list. For example, the head of the list [football, food, wine] is the constant football, and its tail is the list [food, wine].
Due to the recursive definition of functions in Predicate Logic, Definition 2.2.1, lists are not functions.
Moreover, list elements are always selected based on a relation or property they possess. For example, the list [meat, bread] may denote objects which have to be bought, in which case it is expressed by the predicate shopping(meat, bread). Therefore a list can be considered as a predicate by the use of a predicator, ".", i.e., a special symbol denoting the existence of some predicate at the corresponding place. For example, by writing .(salad, vinegar) we denote that we made up the list [salad, vinegar]
without specifying either the qualities of the terms "salad" and "vinegar", or the relationship between them. Hence ".(salad, vinegar)" is considered to be a generalized predicate, a predicate whose form we do not know precisely. Therefore, we cannot query PROLOG:

? [salad, vinegar].
or
? [X, Y].
or
? [X, vinegar].

because we do not know the predicate which corresponds to ".". Furthermore, within the same program, more than one list can exist, all of them being expressed by the special symbol ".". In general, a list can be written as a predicate:

.(Head, Tail)

The tail is also a list with a head and a tail. If we also use "." as a functor, i.e., a symbol denoting the existence of some function at the corresponding place, then the general form of a list is:

.(Head1, .(Head2, .(Head3, ..., .(HeadN, [ ]) ... )))

where [ ] denotes the empty list. For example, the list [a, b, c] has the general form:

.(a, .(b, .(c, [ ])))

A list containing only one element has the form:

[element]    or    .(element, [ ])

For the typical construction of a list we use trees. The tree structure of a list forms the internal representation of lists with respect to the PROLOG interpreter.
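As a hedged sketch of this internal representation (assuming a classic Edinburgh-style PROLOG in which "." is the list constructor; some modern editions use a different internal functor), the bracket notation and the dot notation denote the same term:

    ? [a, b, c] = .(a, .(b, .(c, [ ]))).
    yes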
3.4 Operation Mechanism

3.4.1 The Unification Procedure in PROLOG

The unification procedure of terms, section 2.9, with its algorithmic nature, subsection 2.9.9, is one of the most important operations in PROLOG. The unification procedure does not exist in other high-level languages. Although LISP uses a form of matching as a fundamental operation, it does not use the resolution method as a proof mechanism. However, the inference power of PROLOG comes from the smooth cooperation of the unification procedure and resolution.

Unification enables the update of variables with given PROLOG objects. For example, the reply of PROLOG

X = football

means that the variable X has taken a value, has been updated by the constant "football". In general, with unification we examine whether two predicates can be made identical. If they cannot, the unification procedure fails. If they can, unification succeeds and the result of the unification is the update of the variables of the two predicates with such values that the two predicates match. In the bibliography, the word matching is often used for unification, and the words binding and instantiation for the word update. Binding in this case is not the binding of free variables as in Definitions 2.2.12 and 2.2.13, because in any case all variables in PROLOG are bound with universal quantifiers, Definitions 2.4.1 and 2.4.4. On the contrary, the term refers to the fact that the variables take a certain value through the procedures of the program.

By means of the unification algorithm and the general unifier, Algorithm 2.9.7 and Definition 2.9.4, PROLOG can execute complex substitutions. This procedure can be marvelously represented using the tree structure of predicates. Let us suppose that a database in PROLOG contains the fact:

patient(john, fever(38)).        (*)

and that the following query is made:

? patient(X, fever(Y)).
The tree structures, T and T?, of the fact (*) and of the query respectively, are:

T:   patient
         john
         fever
             38

T?:  patient
         X
         fever
             Y
If we bring tree T? directly over tree T, in order to achieve exact matching, then the variables of tree T? have to be unified, to match, with the terms to which they correspond, since the predicate and the number of edges are the same in both trees. Thus the reply of PROLOG to the above query is:

X = john
Y = 38

In general, the rules which are obeyed during the procedure of unification, to check whether A and B are identical [Xant90], are:

(1) If A and B are constants, then A and B match only if they correspond to the same object.

(2) If A is a variable and B is anything, then A matches B. If B is a variable, then it matches A.

(3) If A and B are functions, then A and B match only if:
    (a) A and B have the same initial function, and
    (b) the corresponding subterms of A and B match.

(4) If A and B are predicates, then A and B match if and only if:
    (a) A and B have the same initial predicate, and
    (b) the corresponding terms of A and B match.

The above matching rules result in the recursive nature of the unification procedure. Rules (3) and (4), more specifically parts (3b) and (4b), refer to steps 1, 2, 3 and 4 in the informal description of unification in section 2.9. The recursive nature of the above rules can also be observed in the following example.
[The figure that appeared here showed the trees T3 and T4 of two predicates being brought into correspondence node by node; only its outcome, stated below, is reproduced.]
Therefore the trees T3 and T4 are unified, or match, for the following values of the variables:

X1 = 1,  X2 = 1,  Y1 = 3,  Y2 = 5

The recursive definitions of relations constitute one of the dynamic characteristics of PROLOG and will be presented in detail in subsection 3.4.5.
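A hedged sketch of such a nested unification (the predicate and function names below are ours, not the book's original figure, but the resulting bindings are exactly the ones stated above):

    ? p(f(X1, 3), g(X2, Y1, Y2)) = p(f(1, Y1), g(1, 3, 5)).
    X1 = 1
    X2 = 1
    Y1 = 3
    Y2 = 5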
3.4.2 Inference and the Backtracking Procedure

As we have already mentioned, a program in PROLOG has two interpretations: the logic or declarative one, and the procedural one. In this subsection we will examine the procedural interpretation of PROLOG; we will describe how PROLOG derives conclusions. We have already presented the unification procedure and the resolution method. Applying the unification process, we end up with a set of ground terms of the program, Definition 2.2.15. In the next step, we repeatedly apply resolution to this set of ground terms, and thus reach conclusions by proving the empty clause. By using the resolution method, we are guaranteed that any proposition which can take a truth assignment will take it. However, in PROLOG we cannot direct the inference towards the sentences in which we are interested, and this is of great importance for the efficiency of the programs. For example, let us consider the following propositions in Propositional Logic:

(1)  D
(2)  E
(3)  B ∨ ¬E
(4)  A ∨ ¬D
(5)  A ∨ ¬B

and let us assume that we pose the question whether A holds. We introduce the proposition:

(6)  ¬A
and we have the following proof of A through resolution:

(7)  B    from (2) and (3)
(8)  A    from (7) and (5)
(9)  □    from (8) and (6)        (*)

However, A can also be proved by the following resolution proof:

(7') A    from (1) and (4)
(8') □    from (7') and (6)       (**)

Proof (*) consists of more steps than (**), and its implementation takes more time. Furthermore, in proof (*) we had to conclude proposition B, which was not related to A.

PROLOG embodies in its inference mechanism a special search procedure in the universe of terms of a program. On the one hand, this procedure gives the programmer the ability to control the sequence of calculations; on the other hand, it guarantees that all possible solutions of a problem will be found [Lloy87]. This procedure is called backtracking. For example, consider the following PROLOG database:
thief(john).                                   /* 1 */
likes(mary, food).                             /* 2 */
likes(mary, wine).                             /* 3 */
likes(john, X) :- likes(X, wine).              /* 4 */
can_steal(X, Y) :- thief(X), likes(X, Y).      /* 5 */

(In most PROLOG editions we use the character sequence "/*" to introduce and "*/" to end comments, which are not executed by the program. In the above program we have introduced the numbering of the clauses as comments.)

Let us suppose that we introduce the goal query:

? can_steal(john, Z).        /* what can John steal? */

The steps followed to satisfy the goal are:
Step 1. ? can_steal(john, Z).
The database is examined in order to unify the goal with a fact or with the head of a rule. Unification succeeds with the head of rule (5), with the substitutions X/john, Z/Y. The position of rule (5) in the database is recorded. The variables appearing in the body of rule (5) are also updated through the same substitutions.

Step 2. ? thief(john).
PROLOG aims to satisfy the subgoals one at a time. The database is examined from the top, clause (1), for the unification of the subgoal. Unification succeeds with clause (1). This clause does not contain any variables, so the subgoal is satisfied.

Step 3. ? likes(john, Y).
PROLOG aims to satisfy the second subgoal. The database is examined from the top for the unification of the subgoal. Unification succeeds with rule (4), through the substitution X/Y. The variables of the predicate in the body of rule (4) are immediately updated.

Step 4. ? likes(Y, wine).
The clause in the body of rule (4) is now the subgoal to be satisfied. The database is searched once more, beginning from the top, in order to achieve the unification of the goal.

Step 5. The unification attempt with clause (2) fails, because "food" and "wine" cannot be unified. The position of the clause with which unification was attempted, (2), is recorded.

Step 6. PROLOG attempts to satisfy the goal in a different manner and goes on to the next clause of the program. Unification succeeds with clause (3), through the substitution Y/mary.
Step 7. The initial goal, "can_steal(john, Z)", has been satisfied by means of the substitution Z/mary. PROLOG now attempts to satisfy the goal in a different way. For that purpose, it backtracks to the immediately previous subgoal that it has satisfied, "likes(Y, wine)", in order to unify it with a different Horn clause from the program. At this point, PROLOG frees all the variables from their updated values. Then, it checks the clauses in the database which are deeper than the last clause with which it unified the subgoal, step 6, clause (3). Deeper in PROLOG means that the corresponding clause has a larger serial number than the last clause with which unification was performed, subsection 3.1.2. Since the position in the database of the clause being executed is recorded at every computational step, PROLOG has the ability to spot the deeper clauses. Since there is no fact or rule whose head unifies with the subgoal, PROLOG finishes its search.

In the above description of program execution, we can see that PROLOG aims to satisfy one goal at every computational step. This property of the derivation mechanism of PROLOG, and more specifically of the backtracking procedure, gives PROLOG a deterministic nature. The order in which the programming clauses have been entered in the database plays an important role in the execution and control of a PROLOG program. The attempt of PROLOG to satisfy the (sub)goals of the program by searching the database in depth implements one of the basic algorithmic tree-searching strategies, the depth-first search strategy. This algorithm, which constitutes the heart of the backtracking procedure, will now be presented.
3.4.3 Depth-first Search

The use of trees in finding the solutions to a problem is based on the representation of the initial data, as well as of the solutions, by a state space [Nils71]. A state space is a tree in which:

(a) The nodes correspond to the different states of a problem: the initial state, the intermediate states during its resolution process, and the final states.

(b) The edges between the nodes correspond to the allowed transitions from one state to another.
Hence, the solution to a problem is reduced to finding a path from the initial state to the final state [Brat90]. During the procedure of satisfying a possibly complex goal, PROLOG searches a state space. The goal is the origin of the corresponding tree. The intermediate nodes correspond to the intermediate subgoals which are to be achieved each time. The final states correspond to facts and/or consequences (Definition 2.5.21) of the facts in the database. Choosing the way of getting from one state to another is equivalent to choosing the corresponding subgoals. Therefore, the state space of the program in the first example of subsection 3.2.2 is described by the following tree:

A
    D
    B
        E

where clause A is the initial state, the goal of the problem, and clauses D and E are the final states, the facts of the database. Therefore, for the solution of the problem, which is conclusion A, there are two different solution paths, with the corresponding sets of allowed transitions:

{D → A}        (*)
and
{E → B, B → A}        (**)
Given the state space of a problem, the question is how to select the appropriate solution path of the problem and, furthermore, how to find all the alternative solution paths. The answer to these questions can be given using the depth-first search algorithm for tree traversal. The main idea for the construction of the depth-first search algorithm is based on the following observations. In order to find a solution path, Solution_Path, from a node N of a tree to a final state node:

(a) If N is a final state node, then Solution_Path is the list [N].

(b) If there exists a node N1, a descendant of N, such that there exists a solution path, Solution_Path_1, from N1 to the solution node, then Solution_Path is the list [N | Solution_Path_1].
This algorithm is executed repeatedly, until all the possible solution paths are found. Every descendant node which is examined, to see whether it belongs to a solution path or not, must be deeper and more to the left in the tree than its predecessor; hence the name of the algorithm. When PROLOG needs to backtrack in order to find alternative paths, it backtracks to the immediately preceding node. Consequently, the depth-first search algorithm is given by the following PROLOG program:

find_path(N, [N]) :- goal(N).
find_path(N, [N | Solution_Path_1]) :-
    allowed_transition(N, N1),
    find_path(N1, Solution_Path_1).

In this program, and more specifically in the second clause, it may seem strange that we use the predicate "find_path" inside the definition of "find_path"! Such recursive definitions of relations between objects are used extensively in PROLOG, and will be presented in detail in subsection 3.4.5.
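To see the program at work, here is a hedged sketch for the state space described above; the lower-case constants a, b, d, e stand for the clauses A, B, D, E, and the allowed_transition facts are written from a node to its descendant, as the program expects:

    allowed_transition(a, d).
    allowed_transition(a, b).
    allowed_transition(b, e).
    goal(d).
    goal(e).

    ? find_path(a, Solution_Path).
    Solution_Path = [a, d]
    Solution_Path = [a, b, e]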
The ability of PROLOG to define its own structural and functional elements, such as the depth-first search algorithm and recursion, constitutes one of the language's dynamic characteristics. This ability is based on the declarative and logic nature of PROLOG programs and is called PROLOG through PROLOG!

3.4.4 Controlling Backtracking: Cut

The PROLOG language offers a special predicate, the cut, symbolized by "!", which controls the inference procedure and, more specifically, backtracking. The "cut" is used to prevent PROLOG from taking certain paths while searching the state space of a problem. This is done either in cases where the user knows that they are not solution paths of the given problem, or in order to make the computation time shorter, or finally because it is not considered imperative to deduce conclusions for some specific facts (a form of rejecting facts). As a special predicate, "cut" is always true. That means that when it is set to be satisfied, it has to be satisfied. It is interesting to examine the consequences of using "cut" on the execution and on the set of conclusions of a PROLOG program. Let us consider for example the following PROLOG database [Thay88]:
[The example database from [Thay88] that appeared here, and the state-space tree searched for its goal, are not reproduced. In it, a clause /* 1 */, together with a variant /* 1' */ containing a cut, defines a predicate P1 by means of the predicates P2 and P3, while a clause /* 2 */ offers an alternative way of satisfying P1.]
PROLOG will attempt to satisfy the initial goal following once more the left path. Hence, it will try to satisfy the subgoal "P1(a, a)" by activating clause /* 1' */. After it satisfies P2(a), it will come across the cut, !, which it will satisfy immediately, because "cut" is always true. It will therefore go ahead and try to unify P3(a), and it will fail, because there is no Horn clause in the program which can be unified with P3(a). Normally it should now backtrack to the subgoal "P1(a, a)", in order to satisfy it through the activation of clause /* 2 */. Since, however, it found the cut in the body of clause /* 1' */, it will not backtrack: ! prevents PROLOG from re-rendering as a subgoal the head of a clause in whose body there is a "cut". Hence, although in the case of clause /* 1 */ the initial goal was satisfied through the right solution path with X/a and Y/a, in the case of clause /* 1' */ it is not satisfied, which means that PROLOG will reply "no". In other words, the right solution path is cut off from the state space (dotted line). In more complex state spaces, the use of "cut" can result in cutting off entire subtrees of the space. The "cut" predicate was named after this functional nature of cutting off subtrees.

It is interesting how, using "cut" in a PROLOG program, we can get rid of loops, i.e., never-ending computations, which might be caused by ill-defined data. Let us examine the following PROLOG program:

thief(john).              /* 1 */
thief(X) :- thief(X).     /* 2 */

and the query:

? thief(X).               /* 3 */

Then, by unification with clause /* 1 */, PROLOG immediately finds:

X = john

and, after freeing the variable X from the value "john", it attempts to satisfy the goal in a different way, by means of /* 2 */. The new goal

? thief(X).               /* 4 */

matches the initial goal /* 3 */. Clause /* 4 */ is unified with the head of rule /* 2 */, and after the resolution the new goal is again /* 4 */, which means that the program is in an endless loop, from which it cannot escape.
Let us study now the same PROLOG program, but with a cut in clause /* 2 */:

thief(john).                /* 1 */
thief(X) :- thief(X), !.    /* 2' */

and the query:

? thief(X).                 /* 3 */

Clause /* 1 */ immediately gives the value:

X = john

PROLOG tries to find all the possible solutions; it unifies with the head of clause /* 2' */, resolves, finds the new goal:

? thief(X).                 /* 4 */

and stops, because the "cut" does not permit setting as a goal for a second time the head of rule /* 2' */, in whose body the cut is. Hence, the only answer given is:

X = john

and the program is carried out without an endless loop.

The function of "cut" can be described purely in terms of the procedural interpretation of a PROLOG program, meaning that its use and the control of its execution are left to the programmer. The ability to use "cut" as an element of the PROLOG language is currently under dispute, mostly by those who want PROLOG to be a purely logic programming language. This dispute is based mainly on the following reasons:

(1) A PROLOG program without "cut" is able to find through backtracking all the correct answers to a given query (completeness of the computation of correct answers). The presence of "cut" destroys this form of completeness, since "cut" does not allow the program to compute all the possible correct replies [Lloy87].

(2) The use of "cut" allows us to write a logically incorrect PROLOG program which, although it would allow the derivation of incorrect conclusions, behaves correctly thanks to "cut".

However, the use of "cut" in certain programs can improve their efficiency without diminishing their logical clarity.
3.4.5 Recursive Definitions in PROLOG

Using recursive definitions we can define new predicates in a PROLOG program. For example, let us assume that we want to define the relation "ancestor" in order to examine the following part of a family tree:

X1
    X2
        X4
            X5
                X6
    X3

Each node represents a person, and the direct connection between nodes denotes the relation "parent" between people.
We observe that: for all X, Z in the family tree, when X and Z are directly connected, X is an ancestor of Z if X is a parent of Z. We can define the ancestor relation using a Horn clause:

ancestor(X, Z) :- parent(X, Z).        (a)

Therefore, by using the predicate "parent", we defined the predicate "ancestor" for people who are one generation apart. For the ancestor relation between people who are two generations apart, for example between X1 and X4, we observe that: for all X, Z in the family tree, X is an ancestor of Z if X is a parent of some Y and Y is a parent of Z. Hence we write the clause:

ancestor(X, Z) :- parent(X, Y), parent(Y, Z).        (b)

In our effort to check the "ancestor" relation between people who are more than two generations apart, we will need to write a variable-length sequence of programming clauses:
ancestor(X, Z) :- parent(X, Y1), parent(Y1, Y2), parent(Y2, Z).
ancestor(X, Z) :- parent(X, Y1), parent(Y1, Y2), parent(Y2, Y3), parent(Y3, Z).
...

Each time we go deeper into the family tree we have to define a new rule. This is definitely not effective for our programs, because a program, in no matter what language, is useful and effective only if it solves general, and not just specific, instances of a problem. There is a more efficient way to define the "ancestor" relation. We observe that: for all X and Z in the family tree, X is an ancestor of Z if X is the parent of some Y and Y is an ancestor of Z. Thus, we can write the following PROLOG programming clause:

ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).        (c)

In this definition, we used the predicate "ancestor" to define the predicate "ancestor"! Definitions of this form are called recursive definitions, and the relations they denote are called recursive relations. Intuitively, we would say that recursion is born the moment when a sketch is built from the reproduction of itself on a different scale or at a different level [Xant90].
Hence, in the family tree example, the control of the "ancestor" relation between X1 and X5 is reproduced at the levels of the following diagram:

level 0:  ancestor(X1, X5)  reduces to  parent(X1, X2) and ancestor(X2, X5)
level 1:  ancestor(X2, X5)  reduces to  parent(X2, X4) and ancestor(X4, X5)
level 2:  ancestor(X4, X5)  reduces to  parent(X4, X5)

Rule (c) does not suffice for the definition of the "ancestor" relation. Although it logically expresses whatever we want to define, it does not give the program the information necessary for answering queries on the relation "ancestor".
Let us see an example. We are given the following family tree and the corresponding PROLOG database:

mary
    john
        anne
        irene

parent(mary, john).                                /* 1 */
parent(john, anne).                                /* 2 */
parent(john, irene).                               /* 3 */
ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).    /* 5 */

Assume that the following query is asked:

? ancestor(mary, anne).

PROLOG will attempt to answer this query, searching the state space depicted by
the following tree:

? ancestor(mary, anne).
    (5)  X/mary, Z/anne
? parent(mary, Y), ancestor(Y, anne).
? parent(mary, Y).
    (1)  Y/john
? ancestor(john, anne).
    (5)  X/john, Z/anne
? parent(john, Y), ancestor(Y, anne).
    ...  (loop)

PROLOG fails to finish the search in the problem's state space. The tree's branchings will continue for ever, and PROLOG will always be inside an endless loop trying to satisfy the goal "ancestor(X, Z)", with different updates of the variables each time. Of course, PROLOG will not stop the execution of the program without giving a "yes" or "no" answer: the halting of the program is caused by the limitations of the machine on which this specific edition of PROLOG is running, and not by the natural termination of a PROLOG program.
We therefore need a method which guarantees the termination of the execution of a given recursive procedure. Such a method can be implemented by using a different Horn clause which states the marginal conditions under which a relation holds. In the example of the family tree, this condition is the relation "parent", meaning that an ancestor relation between two persons holds marginally when one is the parent of the other. This means that: for all X, Z in the family tree, X is an ancestor of Z if X is a parent of Z. We therefore need to add to the database the programming clause:

ancestor(X, Z) :- parent(X, Z).        /* 4 */

Clause /* 4 */ acts also as a boundary condition of the defined relation. With the introduction of /* 4 */, the corresponding state space of the reply to the previous query is depicted by the tree:

? ancestor(mary, anne).
    (4)  X/mary, Z/anne
        ? parent(mary, anne).
        no
    (5)  X/mary, Z/anne
        ? parent(mary, Y), ancestor(Y, anne).
        ? parent(mary, Y).
            (1)  Y/john
        ? ancestor(john, anne).
            (4)  X/john, Z/anne
                ? parent(john, anne).
                yes
            (5)  X/john, Z/anne
                ? parent(john, Y), ancestor(Y, anne).
                ? parent(john, Y).
                    (2)  Y/anne
                ? ancestor(anne, anne).
                    (4)  X/anne, Z/anne
                        ? parent(anne, anne).
                        no
                    (5)  X/anne, Z/anne
                        ? parent(anne, anne), ancestor(anne, anne).
                        ? parent(anne, anne).
                        no
Thus, by introducing clause /* 4 */, PROLOG ends the search of the state space and replies positively to the query. The marginal condition of a recursive relation is also called the bound of the recursion. The choice of the bound of a recursive relation and of the corresponding programming clause has to be made very carefully. In general, the principle followed for problems which can have a recursive solution is the following. We divide the problem into two subgroups:

(a) The first subgroup contains the "simple" instances of the problem (the bound of the recursion).

(b) The second subgroup contains the "general" problem instances, whose solutions are found by reducing them to simpler versions of the problem (the recursion).
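The ancestor example follows exactly this division (a hedged sketch collecting the two clauses discussed above; the query is ours and assumes the parent facts /* 1 */ to /* 3 */ are in the database):

    ancestor(X, Z) :- parent(X, Z).                    /* the bound of the recursion */
    ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).    /* the recursion */

    ? ancestor(mary, irene).
    yes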
Before getting into the presentation of specific list management operations, it is worthwhile to mention the unification operation of lists.
3.4.6 List Management

Since lists are abstract predicates, subsection 3.2.4, they are unified just like the predicates. In practice: a list L1 is unifiable with a list L2 if L1 and L2 have the same number of elements, and all the internal elements of L1 are unifiable with the corresponding internal elements of L2. Therefore two lists can be unified if and only if their heads and their tails, i.e., the elements of the lists, can be unified. For example, for the lists:

L1 = [a | [b, c]]
and
L2 = [a | Y]

the PROLOG query:

? L1 = L2.
succeeds with the substitution Y = [b, c]. On the contrary, the lists:

L1 = [a, b]
and
L2 = [height, color, position]

do not unify, because they do not have the same number of elements.

Lists can be used in the representation of sets. However, there is one basic difference: the order of the elements in a set is not important, whereas the order of the elements in a list is crucial. The principal operations between sets and lists are similar. Therefore, based on the recursive definition of lists and the unification procedure, we can define operations such as:

(1) Examining whether an element, or a list of elements, belongs to a list, i.e., the subset relation for lists.
(2) Concatenation of lists, to define the union operation for lists.
3.4.6.1 The member predicate

The member predicate is used to examine whether an element is part of a list. This predicate is a built-in predicate in most PROLOG editions. In general, an element X is a member of a list L if

(a) X is the head of L,
or
(b) X is a member of the tail of L.

We can thus recursively define the member predicate by:

(1)  member(H, [H | _ ]).
(2)  member(A, [ _ | T]) :- member(A, T).

where "_" is the anonymous variable. Clause (1) is then the bound of the recursion.
For example, for the PROLOG query:

? member(X, [a, b, c]).

the state space is depicted by the tree:

? member(X, [a, b, c]).
    (1)  H = X, H = a
        X = a        yes
    (2)  A = X, T = [b, c]
        ? member(X, [b, c]).
            (1)  H = X, H = b
                X = b        yes
            (2)  A = X, T = [c]
                ? member(X, [c]).
                    (1)  H = X, H = c
                        X = c        yes
                    (2)  A = X, T = [ ]
                        ? member(X, [ ]).
                        no

Therefore, element X is a member of the list [a, b, c] only if

X = a    or    X = b    or    X = c

We observe therefore that PROLOG, attempting to satisfy the query:

? member(X, [a, b, c]).

instantiates the variable X with concrete constants.
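The member predicate also combines naturally with the list facts of subsection 3.3.7 (a hedged sketch, assuming the fact likes(george, [football, food, wine]) is in the database):

    ? likes(george, Hobbies), member(food, Hobbies).
    Hobbies = [football, food, wine]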
3.4.6.2 The append predicate

With the use of the append predicate we construct a list L3 as the result of the concatenation of two lists L1 and L2. Thus we create a new predicate, append(L1, L2, L3). To define append we observe the following:

(a) The result of the concatenation of the empty list with a list L is the list L itself; therefore:

append([ ], L, L).        /* 1 */

(b) The first list is of the form [X | L1]. If we concatenate this list with a list L2, the result will be a list which has the same head as the first list, and its tail will be the result of the concatenation of the tail of the first list with L2. These observations can be better understood with the following scheme:

first list:  [ X | L1 ]        second list:  L2        result:  [ X | L3 ],  where L3 is the concatenation of L1 with L2

Based on the above observation we can write:

append([X | L1], L2, [X | L3]) :- append(L1, L2, L3).        /* 2 */

It is clear that clauses /* 1 */ and /* 2 */ form the recursive definition of the append predicate: /* 1 */ is the bound of the recursion, and /* 2 */ is the reduction of the problem of concatenating two complex lists to the concatenation of simpler lists, in other words, the recursion. Let us look at an example. Suppose we are given the query:

? append([a, b], [1, 2], [a, b, 1, 2]).
The state space of the answer to the query is depicted by the following tree:

? append([a, b], [1, 2], [a, b, 1, 2]).
    (2)  X/a, L1/[b], L2/[1, 2], L3/[b, 1, 2]
? append([b], [1, 2], [b, 1, 2]).
    (2)  X/b, L1/[ ], L2/[1, 2], L3/[1, 2]
? append([ ], [1, 2], [1, 2]).
    yes,  from (1) and for L/[1, 2]

Hence there is a solution path, so PROLOG will give a positive answer to the query. The append predicate is a built-in predicate in most PROLOG editions. Some other predicates which are used for list manipulation are presented in the exercises.
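The same definition also works with an unbound third argument, so append can be used to build the concatenation rather than merely check it (a small sketch):

    ? append([a, b], [1, 2], L).
    L = [a, b, 1, 2]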
3.5 Built-in Predicates

Built-in predicates are predicates defined within PROLOG and used in the management of data and in the interaction of the user with PROLOG.
3.5.1 Data Management Predicates: assert and retract

A PROLOG program is a database with the facts and rules of the program as the data. Any change in the data of the program requires the update of the PROLOG program itself. In the traditional programming languages, updating the program, i.e., adding and deleting data, as well as modifying the flow control of the program, is carried out by the programmer. On the contrary, in PROLOG the program is updated automatically during its execution, using the specific built-in predicates assert and retract.

The assert predicate is used for the addition of facts and rules to the program. Its syntax is:

assert(G)

where G is any Horn clause. The assert(G) predicate always succeeds, and has as a result the addition of G to the data of the program. This predicate, assert, can be used in different ways. Let us take for example the following program, which consists of a single clause:

person(george).        /* 1 */

If we ask PROLOG the query:

? mortal(george).

we will get the answer "no", because in the program there is no relation defined for mortals. If we now state the query:

? assert(mortal(george)).

then, because assert always succeeds, the programming clause:

mortal(george).        /* 2 */

will be embodied in the program, resulting in the success of the query.
Moreover, assert can be used to enter a new rule. In the previous example, instead of stating the query:

? assert(mortal(george)).

we can state:

? assert((mortal(X) :- person(X))).

The result of the above query will be the introduction of the rule:

mortal(X) :- person(X).        /* 2' */

instead of the fact /* 2 */. The position of introduction of a clause in the database can be chosen using the following variations of the assert predicate:

asserta(G)    and    assertz(G)

which introduce the clause G at the beginning or at the end of the program data, respectively. Just as we introduce data to the program with assert, we can delete data using the built-in predicate:

retract(G)

where G is a Horn clause. The operation of retract is the opposite of the operation of assert.
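As a small hedged sketch of the opposite operation: after the rule /* 2' */ has been asserted, retracting it removes it again, so the corresponding query fails once more (the exact replies may vary between editions):

    ? retract((mortal(X) :- person(X))).
    yes
    ? mortal(george).
    no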
3.5.2 Interaction Predicates: read, write and consult

PROLOG, as a slick programming language, offers the ability of interaction of a program with the peripheral units of the computer: the screen and the user. This is done through a number of specific built-in predicates, like read, write and consult.
The read predicate is used for the input of characters coming from the keyboard. Its syntax is:

read(X)

where X is a variable, followed by the characters which the program must read. "read(X)" always succeeds, resulting in the unification of the variable X with the sequence of input characters. So, if we write:

read(X)

PROLOG will wait for some characters to be typed in from the keyboard. If we type "john", PROLOG will satisfy read(X), unifying the variable X with the constant john.

Similar to "read" is the predicate:

write(argument)

where "argument" is any character string. "write" is used to display characters on the screen. The write predicate always succeeds, resulting in the output of the argument. If write is used as a query, then its argument will be sent to the printer. Therefore, if we ask:

? write(''John is tall'').

PROLOG will satisfy write by printing:

''John is tall''

By using read and write we can develop a number of useful programs. For example, executing the following program:

print(Name, Serial_number) :-
    read(Name),
    read(Serial_number),
    write(Name),
    write(''  ''),
    write(Serial_number),
    nl.
255
PROLOG will wait until we enter two character strings, which it will unify with tile variables Name aIl(t S e r i a l _ n u m b e r , respectively.
Then, it will print both
of these character strings on the same line, separated by the two spaces due to "write(' '
")".
After that, it will execute tile specialized predicate h i , which
puts the cursor at the beginning of the next screen or printer line. With the predicate c o n s u l t ( f i l e ) we have the ability to use a PROLOG program from a file. Predicate c o n s u l t ( f i l e ) always succeeds, resulting in saving the clauses which are in "file" in the computer's dynamic memory, so t h a t they can be used from PROLOG to
derive conclusions.
3.5.3 Equality in P R O L O G - Predicates =, ----, = ' = , PROLOG
is,
=\=, \---
uses different equality predicates to exaInine the equality of objects
in its language.
The most used equality predicates, which arc common in the
different editions of PROLOG, arc: "="
Where clause A - B succeeds when the objects (constants, variables, atoms or structures) A and B match. Where clause A - -
B succeeds if tile objects A and B are identical in
all respects. For examt)lc, equality" f a t h e r ( p a u l , andy) -- f a t h e r ( p a u l , X) succeeds by means of tile substitution X / a n d y , while equality: f a t h e r ( p a u l , andy) ---- f a t h e r ( p a u l , X) does not succeed. ':-:-"
Where clause "A1 - ' -
A2" su(:cceds when A1 and A2 are numerical
expressions which are equal. "is"
Where clause "X i s A" succeeds when X is a variable and A is a numerical expression.
With predicate "X i s A",
PROLOG finds tile
value of A and unifies it with variable X. Hcrc wc have to stress the difference between " i s " and " - " .
For example, the reply of PROLOG
256
Logic Programming: The PROLOG Paradigm to tile query: ? X-3-1. will be X - 3 - 1. This means that if we use " - " , the operation in the right hand side of the equation is not executed. If however we ask: ?X
3-1.
is
then the reply will be X-2
,,\=_,,
The predicate " \ - - "
is the negation of " - - " ,
A \ = = B succeeds when A - -
,,=\=,,
B fails.
Tile predicate " = \ = " is the negation of " - : - " ,
=\= A2" succeeds when A1 - ' -
clause "A1
which means that clause
therefore programming A2 fails, where A1 and
A2 are numerical expressions.
3.5.4 Arithmetic in P R O L O G Although P R O L O G was mainly designed as a symbolic programming language, it embodies a number of functions which are used for arithmetic operations. These functions, which are defined in most editions of P R O L O G , a r e : +
addition
-
subtraction
9
multiplication
/
division
div
integer division
rood
the remainder of integer division
There are also specialized built-in predicates which control i n e q u a l i t y between numerical expressions. >
greater
<
smaller
=>
greater or equal
=<
smaller or equal
Built-in Predicates
257
3.5.5 Type Checking of Objects: Predicates v a r , n o n v a r , i n t e g e r , atom, a t o m i c "var"
With the predicate v a r ( X ) we check whether X is a variable which has NOT been updated. If X is a variable which has not been updated, the predicate succeeds, whereas in any other case it fails.
"nonvar"
With the introduction of tile predicate n o n v a r ( X ) we check whether X is any PROLOG object other than a variable or an updated variable.
"integer"
The predicate i n t e g e r ( X ) sllcceeds if X is an integer. For example tile conjunctive query: ? integer(I),
I i s 3 + 5.
will fail, whereas" ? I i s 3 + 5,
integer(I).
will succeed and P R O L O G will reply I -
8.
This occurs because
before " i n t e g e r ( I ) " was executed, I had already been updated with tile result of tile addition 3 + 5. "real"
The predicate r e a l ( X ) succeeds if X is a real number.
"atom"
The predicate atom(X) succeeds only if X is an atomic term.
"atomic"
Tile predicate a t o m i c ( X ) succeeds only if X is an atomic term or a number.
3.5.6 The Operators Frequently, in order to facilitate the input and reading of complex predicates, subsection 3.3.5, we use predicates and functions as o p e r a t o r s .
So, for example,
for addition we write +(a, b), where "+" is an operator. To define an operator completely, we must clearly state its priority, its position and associativity, i.e., its relation with its operands. The complete definition of an operand is achieved by the use of the built-in predicate op, and by the introduction of a simple clause of the form: 9- (< priority >, <position }, [operators name list]).
258
Logic Programming: The PROLOG Paradigm
Tile programming clauses which define operators are stated in the beginning of the program. The p r i o r i t y of an operator states the order in which each operator will be applied in complex predicates, where there is more than one operator. The priority is a natural number whose range of values depends on the current language edition. For example, in TURBO-PROLOG the priority values are between 1 and 2000. Operators with priority value closer to 1 have higher priority compared to operators with priority value close to the upper priority limit. For example, we want the meaning of operation: 8+2,2 to be" 8+(2,2) yielding " 12 ", and not " (8 + 2) 9 2" yielding "20 ". Therefore the priority of " , " is numerically smaller than the priority of " + ". Tile p o s i t i o n of an operator denotes the position in which tile operator n l u s t appear relative to its operands.
There are three possibilities for tile position of
an operator: b e f o r e , b e t w e e n or a f t e r its operands.
Therefore, if we want to
assign to the predicate f a t h e r _ o f ( a , b) the meaning "a is the father of b", as an operator, then naturally we want the predicate to be between its operands, which means, "a f a t h e r _ o f b". In this case we have an infix operator and its position is denoted by the expression:
xfz Therefore, if we give priority 200 to the predicate f a t h e r _ o f , then we have to introduce tile following programming clause: 9
op(200, x f x , f a t h e r _ o f ) .
Similarly, we also have p r e f i x and p o s t f i x operators. Hence, if we define tile operator "-1" to express tile negation of facts, then it is natural to demand to have it appear before its operand, that is to have it be a prefix operator. If we give it priority 500, then we can define it by the programn~ing clause: 9- op(500, f z , ~).
Built-in P r e d i c a t e s
259
Now, if we want to define the operator " ! " to represent "factorial", then it would be natural to have " ! " appear after its operand, "x! ". Therefore " !" is a postfix operator. If we give it priority 400, then it can be defined t)y the clause: 9
op(400, xf,
!).
The a s s o c i a t i v i t y of operators has to do with whether the operator will have lower, higher or equal priority compared with its operands. The explicit declaration of priorities is important when the o p e r a n d p r i o r i t y of an operator is not clearly defined. For example, the arithmcti(: expression: a-b-c
means "(a m b) -- C'' and not "a -- (b - c)". Tile declaration of operator associativity is done by the separation of the symbols "x" and "y" which correspond to the operands of the operator. Therefore, with "fx" or "xf", that is with the use of variable x, we declare that the operand of the operator has s t r i c t l y l o w e r priority than the operator. With declarations
"fy" and "yf", that is with the use of variable y, we declare that the priority of the operand is lower than or equal to that of the operator. Of course we can also have a mixture of priorities between operands, as in the declaration "xfy". In most PROLOG editions, predicates for arithmetic operations and equality are built-in, defined as operators; with corresponding declarations for their priority, their position and their associativity. For example, 9-
op(500,
9-
op(400, y / x , [ . , / ] ) .
(i) (2)
[+, - ] ) .
.... op(700, xfx, [ - , is, <, >, - < , - > , - - ,
=\=,
\--,---]).
Clause (2) declares that ",", multiplication, and "/", division, are operators which have two operands and priority 400, whereas (1) declares that addition and subtraction have two operands and priority 500. Therefore, in arithmetic expressions with " ", " ", "," and "/", because of their lower priority, "," and "/ ---
m
executed first, and then "+" and " - " are executed.
/"
are
260
Logic Programming: The PROLOG Paradigm
If we want to define the logical connectives '%+", "V", "A", and "-~", for the formulation of Predicate and Propositional Logic clauses, we define the corresponding operators: :- op(800, x f x , ++). :- op(700, x f y , V). :- op(600, x f y , A). :-
op(500, fy,
Hence, we can introduce the logical equivalence: - ~ ( A A B ) ++ ~ A V ~ B directly in a PROLOG program as a fact.
3.5.7
The Towers of Hanoi
The game of the towers of Hanoi is a typical example of recursive definition. The game is played as follows. There are three posts, the left, the middle and the right ones, and N disks of different sizes having a hole in the middle. At the initial state, all the disks are on the left post in order of increasing size. The disk at the b o t t o m is the largest one. The goal, i.e., the final state, is to have all the disks transferred in the same size ordering to the right post, as shown in the illustration.
Built-in Predicates
261
The acceptable disk transfer movements have to obey the rules: (a)
We move only one disk at a time
(b)
We never place a bigger disk on top of a smaller one.
Obviously, we have to construct a p r o g r a m which finds a p a t h from tile initial to the final state, obeying the restrictions in movements. We observe that: (1)
In the final state, there are no disks on the left post.
(2)
We can use the right post as an auxiliary one and move N -
1 disks from
tile left to tile middle one, always according to rules (a) and (b). This will be exactly the recursive step.
Then we will have to move the final and
largest disk from the left post to the right post. (3)
Using the left post now as the auxiliary one, we move tile N -
1 disks from
tile middle post to the right one, according to rules (a) and (b). This way, we have solved the problem. Based on the above observations, we construct the following prograin" move(O,_
,
_
,
_
)'-!.
/,
1 ,/
/,
2 ,/
/,
3 ,/
/,
4 ,/
move(N, X, Y, Z ) : - N1 i s N - 1,
move(N1,X,Y,Z), print__move (X, Y),
move(N1, Y,Z,X). print_move(X, Y)-- write(' 'move a disk from X to Y socket''). hanoi(N)--
move(N, l e f t , m i d d l e , r i g h t ) .
P r o g r a m m i n g clauses / 9 1 9 / and / 9 2 9 / are tile i m p l e m e n t a t i o n of tile above observations. Clause / 9 3 9 / prints out the corresponding movement using tile predicate w r i t e . Clause / 9 4 9 / calls tile p r o g r a m with tile requested number of disks. Tracing the state space of the program, even for small values of N , is complex. In the general case of N disks, at least 2 N - 1 movements are needed. Tile reader can trace the state space of the p r o g r a m for N - 3 as an exercise.
262
Logic Programming: The PROLOG Paradigm
3.6 Negation in PROLOG The language of Predicate Logic, with the logical connectives and the quantifiers which it contains, can formalize most of the colloquial phrases.
For example,
phrase P of the spoken language P
:
"Every canary which is not sick, flies"
is symbolized in the Predicate Logic language with the phrase:
P(x)
:
(Vx) [ ( c a n a r y ( x ) A - - s i c k ( x ) ) - + flies(x)]
We therefore used the logical connective "-~" to express the negation of the predicate "sick". However the PROLOG language, based on Horn clauses, cannot express the negation directly, although PROLOG can answer with a "no" in queries. Hence, the formulation, the interpretation, and in general the handling of negation of clauses in PROLOG, a r e complex matters. The problems which arise are divided into three categories: (1)
How the "no" answer is interpreted.
(2)
How P R O L O G deduces negation of clauses.
(3)
How we use negation in PROLOG.
3.6.1 The Closed World Assumption and Negation by Failure According to the Closed World Assumption, CWA for short, subsection 3.1.3, if a statement A is neither directly nor indirectly expressed in a program P, that is, if A is neither a fact nor a conclusion based on the d a t a of P, then we accept that its negation holds. Let us consider the following program: p e r f e ct_s q u a r e (4).
perfe ct_s quare (9). and tile following query is set: ? perfect_square(16).
Negation in PROLOG
263
The reply of PROLOG will be "no". This PROLOG reply should not be interpreted as "16 is not a perfect square", where "not" denotes the standard negation in Mathematical Logic. W h a t is actually meant by the "no" answer is that having checked all the data of this program, the program cannot consider 16 as a perfect square.
This "thinking" mechanism of PROLOG and of Logic P r o g r a m m i n g is
based on the Closed World Assumption, and is called N e g a t i o n b y F a i l u r e . Actually, we are not able to express negative knowledge by Horn clauses: as defined, a program in Logic Programming contains exclusively Horn clauses which are able to express only positive knowledge, i.e., a program consists of facts, rules and queries in which no negation occurs. Applying resolution for a query Q, we actually prove that {facts, rules, -~Q} k- [--], i.e., {facts, rules} k- -~Q --+ [--1, Q This means that in general we cannot prove that a formula of the kind -~A, A without negation, is a consequence of the facts and the rules of some program. There are attempts to expand the methods of Logic Programming to formulae more complicated than Horn clauses in order to increase the expressive power of programs in Logic Programming, [Shep88].
3.6.2 Normal Goals Consider the following program P : l i k e s ( j o h n , mary).
/*
1 */
likes(john, apples). e a t s ( X , Y ) : ..... l i k e s (X, Y) , edible(Y).
/, /,
2 ,/ a ,/
edible(apples).
/,
4 ,/
If we ask P whether Mary is edible (!)
? edible(mary). the answer will be "no", i.e., ~edible(mary), because of the CWA. Then it is not meaningless to ask P"
? ~edible(mary).
/, 5 9 /
264
Logic Programming: The PROLOG Paradigm
To answer this query, the program will check whether tile goal
? edible(mary). succeeds.
Since " e d i b l e ( m a r y ) . " does not match any data of P, it does not
succeed. Therefore, the answer t o / .
5 9 / will be "no".
Goals of the kind "? -~A." are called n o r m a l goals, [Lloy87, NiMa95]. The matching procedure which takes place during the execution of programs allows the definition of success and failure of normal goals" When the goal "? A . " succeeds, then the goal "? -~A." fails, i.e., A is a consequence of the program being considered. When "? A." fails, then the goal "? -,A." succeeds, i.e., -~A is a consequence of the program being considered. Hence, negation in normal goals is also characterized by the failure or success of the corresponding unnegated goal, therefore it is called n e g a t i o n b y failure, NF for short. The relevant (meta)rule is:
? -~A.,
? A. fails NF
U] This means that when "? A." fails in the program P, then ~-~A -+ V] i.e., -~A is a consequence of P. Obviously we have to deal with two kinds of negation here, one being tile standard logical connective, and the other tile procedural one declared by the statement "? A. fails". Let us denote this procedural negation by ~ ; then tile fact that ? A. fails (:an bc denoted by ~ A. Informally, "? -~A." can bc considered as being equivalent to A. The rule NF actually allows the resolution between ? -~A. (i.e., A) and ~ A ! This kind of resolution is called N F - r e s o l u t i o n . Tile a t t e m p t to give a theoretical proof of the validity of the NF rule and to bring closer the logical negation and the procedural negation, has led to tile notion of the completion of t)rogranls [Clar78, Lloy87, NiMa95]. The principal idea is to consider the predicates occurring in some program P by means of : completely defined by means of ~ .
, i.e., +-, as
Negation in PROLOG
3. 6 . 3
265
Completion of Programs
Tile logical basis for the completion of a PROLOG program is given by the following axioms (see Remark 2.3.7), universally quantified: axiom of equality of terms
(A6)
x -
x
(AT)
x=y-~(A~A1)
rule of substitution of equal terms
the following valid formulae of PrL:
(1) (2) (3)
(B~A)
A(C~A)
++ ( B V C ) ~ A
(A(x, y) ~ B(x, y)) ++ [(gx)(3y)(A(x, y)/~ x = a A y = b) ~ B(a, b)] A(a,b) ++ (x = a A y = b ~ A ( x , y ) )
and some axioms concerning the properties of the functions appearing in the programs. Let x -7(= y denote ~(x = y). All functions which occur in the program to be completed, [Clar78, Lloy87, NiMa95], must obey the following specific, i.e., non-logical axioms, universally quantified:
f(xl,...
,xn) r g ( y l , . . . ,ym)
f(Xl,...
,Xn)
f(x) r x
--
f(Yl,-..,Yn)
for all functions f, g for which f ~ g --)"
(Xl --
?/1) A
9 9 9 A (Xn
= Yn)
for all x proper subterms of f ( x )
i.e., different flmctions have different values, a function has different values for different variables, and, the value of a function cannot be a variable occurring in the function, [Clar78]. Let us find the completion for the example of tile previous subsection. To ~nake the intuition behind the completion more obvious, we will use +- instead of :
.
Applying the formulae (2) and (3) to the clauses of P, we obtain the following equivalent form of P : likes(t1, t2) +-- (tl = mary A t2 = john) likes(t1, t2) ~ (tl = apples A t2 = john) eats(t3, t4) +-- ( 3 Y ) ( 3 X ) ( t 3 = Y A t4 = X A likes(X, Y) A edible(X)) edible(ts) e-- t5 = apples
266
Logic P r o g r a m m i n g : The PROLOG Paradigln
and by means of the formula (1) we have:

likes(t1, t2) ← (t1 = mary ∧ t2 = john) ∨ (t1 = apples ∧ t2 = john)
eats(t3, t4) ← (∃Y)(∃X)(t3 = Y ∧ t4 = X ∧ likes(X, Y) ∧ edible(X))
edible(t5) ← t5 = apples

Now we regard the formulae at the left-hand side of ← as completely defining the predicates "likes", "eats" and "edible": according to the CWA, the right-hand side of the above formulae is the only available information on "likes", "eats" and "edible" in P, hence it must definitely define these predicates. This means that ← can be considered as an equivalence, ↔! Consequently, P must have the following complete form:

likes(t1, t2) ↔ (t1 = mary ∧ t2 = john) ∨ (t1 = apples ∧ t2 = john)
eats(t3, t4) ↔ (∃Y)(∃X)(t3 = Y ∧ t4 = X ∧ likes(X, Y) ∧ edible(X))
edible(t5) ↔ t5 = apples

This way we are able to immediately define the negation of these predicates, e.g.,

¬edible(t5) ↔ t5 ≠ apples

Hence, negated queries and normal goals are meaningful this way, and, moreover, negation by failure is closer to logical negation. Nevertheless, the interpretation of an implication as an equivalence cannot be justified within the framework of PrL, and it is meaningful only as a semantic interpretation of programs. Formally:

Definition 3.6.3.1:
(i) If the predicate A occurring in a program P appears in the head of the following Horn clauses of P:

A(x11, ..., xn1) ← W1
     ...
A(x1k, ..., xnk) ← Wk

then the complete definition of A in P is

(∀t1) ⋯ (∀tn) (A(t1, ..., tn) ↔ E1 ∨ ⋯ ∨ Ek)

for

Ei := (∃y1) ⋯ (∃yd) (t1 = x1i ∧ ⋯ ∧ tn = xni ∧ Wi)

where y1, ..., yd are all the variables occurring in the clause A(x1i, ..., xni) ← Wi and t1, ..., tn are new variables, i.e., they do not occur in P.

(ii) If there is no occurrence of A in the left-hand side of any clause of P, the complete definition of A in P is

(∀t1) ⋯ (∀tn) ¬A(t1, ..., tn)

i.e., (∀t1) ⋯ (∀tn) (A(t1, ..., tn) ↔ □).

(iii) The completion of P, compl(P), is the set of the complete definitions of all predicates occurring in P.  ■ 3.6.3.1

The benefits of using compl(P) consist of the theoretical means allowing the increase of the expressive power of P without losing any consequence of P, and of a kind of legitimacy for the rule NF. The relevant theorems, whose proofs can be found, for example, in [NiMa95, Lloy87], are:

Theorem 3.6.3.2:
If P ⊨ A then compl(P) ⊨ A.  ■ 3.6.3.2

Theorem 3.6.3.3:
The NF rule is sound for compl(P).  ■ 3.6.3.3
In general, the procedures in Logic Programming do not allow us to find the values x satisfying queries of the form "? ¬A(x).", and it is often assumed that the NF rule applies only to ground instances, [NiMa95]. If a normal goal "← ¬A" contains only ground instances, then the corresponding negation is called safe, otherwise it is called unsafe. Many Logic Programming languages allow only safe negation and incorporate mechanisms to delay the processing of normal goals until the occurring variables take a concrete value.
3.6.4 Normal Programs and Stratification

Nevertheless, goals with negated predicates are allowed in Logic Programming and PROLOG. If we allow negations to occur in the body of rules, the relevant programs are called normal or general programs, [NiMa95, Lloy87]. However, the situation then becomes more complicated: a normal program P may contain a clause of the form:

A :- ¬A.

i.e., A ← ¬A, which is equivalent to A; but in compl(P), it takes the form A ↔ ¬A, equivalent to A ∧ ¬A! Thus compl(P) is not a consistent set of sentences in this case! Nevertheless, normal programs which are stratified do have consistent completions. Let us consider the following program P:

A :- ¬B.      /* 1 */
A :- B, C.    /* 2 */
C.            /* 3 */

Then compl(P) = {A ↔ ¬B ∨ C}, which is obviously a consistent set. We can clarify what we mean by a program being stratified as follows. Notice that in P, A is determined both by ¬B and by B ∧ C, i.e., by both B and C. Roughly speaking, what we would like to do is to separate A, B and C into a hierarchy of levels of definition, with predicates in the higher levels being defined by those in the lower levels. Certainly the highest "definition level" must contain A, since A depends on B and C. So B must be contained in a lower "definition level" than that of A, since B would have to have been determined already in order to be able to define A. Since these "definition levels" must completely determine the predicates in the upper levels, they have to be disjoint. Actually, we prefer to refine the approach we have just described to allow a predicate X to have the same level as a predicate Y it defines, provided it was X and not its negation which was used to define Y. In our example, the predicates occurring in P can be partitioned into the two
"definition levels" So = {B} and $1 = {A, C}, such that the negated predicate B which determines tile predicate A has been determined at a level lower than that at which A is determined. Since it was not tile negation of C, but C itself which was used to determine A, the level $1 may contain both A and C. The reason we want to allow this approach, is that stratification is designed to control unwanted effects of recursion. In general, a negated predicate D which determines a predicate F at the left-hand side of :-- must have been determined at a lower level, in order to avoid recursions during tile backtracking procedure which lead to formulae of tile kind D :- -~D, as for example in the following program: A ..... B .
B'---,A.
Formally, [NiMa95, Lloy97]: Definition 3.6.4.1:
A normal program P is called s t r a t i f i e d if tile set of the predicates which occur in P can be partitioned into S o , . . . , Sn such that if A .... B 1 , . . . , B m c P and A C Sk, then, if in Bi, 1 < i < m, occurs no negation, then Bi C S o U S , U - - - u S k while, if Bi is of the form -~Ci, then CiESoUS1u---uSk_ m 3.6.4.1 Theorem
3.6.4.2:
I f P is stratified, then compl(P) is consistent.
Theorem
m 3.6.4.2
3.6.4.3:
I f P is a n o r m a l program, then the NF rule is sound for compl(P).
m 3.6.4.3
Completeness results concerning the NF rule are beyond the scope of this book and can be found in [Lloy87, NiMa95].
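As an illustration of Definition 3.6.4.1, here is a small stratified normal program; the predicate names and facts are hypothetical, not taken from the book, and \+ stands for negation by failure:

registered(anna).                                /* stratum S0 */
registered(bob).                                 /* stratum S0 */
paid(anna).                                      /* stratum S0 */

active(X) :- registered(X), paid(X).             /* stratum S1: no negation in the body */

reminder_due(X) :- registered(X), \+ active(X).  /* stratum S2: the negated predicate
                                                    active belongs to S1, a strictly
                                                    lower stratum                       */

The partition S0 = {registered, paid}, S1 = {active}, S2 = {reminder_due} satisfies the condition of the definition, so the completion of this program is consistent.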
Let us now see an example of the application of the NF rule to a normal program, i.e., to a program expressing negative knowledge. Suppose we have a program P with data expressing negative knowledge:

interested_in(X, Y) :- ¬unsuited(X, Y).    /* 1 */
unsuited(X, Y) :- ¬likes(X, Y).            /* 2 */
likes(mary, john).                         /* 3 */

and we want to know whether Mary is interested in John,

? interested_in(mary, john).               /* 4 */

The program will proceed as follows: /* 4 */ matches the left-hand side of /* 1 */ for X = mary and Y = john. The next goal is:

? ¬unsuited(mary, john).                   /* 5 */

The algorithm checks the goal:

? unsuited(mary, john).                    /* 6 */

/* 6 */ matches the left-hand side of /* 2 */ for X = mary and Y = john. The next goal is:

? ¬likes(mary, john).                      /* 7 */

The algorithm checks the goal:

? likes(mary, john).

which succeeds by /* 3 */. By NF, the goal /* 7 */ fails, i.e., the goal /* 6 */ also fails, and, consequently, the goal /* 5 */ succeeds and the answer to the query /* 4 */ is "yes"! On the other hand, if we try to apply resolution to the corresponding logic sentences, we cannot obtain results easily, because P does not consist of Horn clauses. Let us consider the following equivalent form of P:

interested_in(X, Y) ∨ unsuited(X, Y)       (1)
unsuited(X, Y) ∨ likes(X, Y)               (2)
likes(mary, john)                          (3)
¬interested_in(mary, john)                 (4)

In this form of P, we can only apply resolution between (1) and (4). The sole result which can be obtained is "unsuited(mary, john)". On the contrary, applying the metarule NF, we are able to find an acceptable solution which is correct under the hypothesis that the data which completely characterizes the predicates in P is the one given in P!
3.6.5 The Predicate fail

A given goal in PROLOG fails, which means that it takes a false truth-value, if all the possible attempts to satisfy it through backtracking fail. Failure, and consequently negation, can be declared in PROLOG through a special built-in predicate which expresses failure, namely fail, which has a false truth-value and always fails. So

fail ↔ Q ∧ ¬Q

where Q is any formula of Predicate Logic. Let us take as an example the sentence:

A :  "John is not a canary"

Sentence A can be expressed in PROLOG by the rule:

canary(john) :- fail.    (*)

meaning that "the goal canary(john) fails". Let us assume that the database of a PROLOG program contains only the above rule (*). Then the query:

? canary(john).

will fail. After the unification of the query with the head of rule (*), the new subgoal fail fails, by means of the definition of fail. This is exactly the procedural interpretation of fail: the head of the rule, in whose body fail is found, fails. Here we have to emphasize that clauses of the form:

A :- fail.

are logically true clauses of Predicate Logic. The clause

Q ∧ ¬Q → A
is true in every interpretation of Predicate Logic (Definitions 2.5.5, 2.5.19, and the proof of Corollary 1.5.4). In general, negation in PROLOG can be expressed using a specific combination of cut and failure: let us take for example the clause

P(x) :  (∀x) [(canary(x) ∧ ¬sick(x)) → flies(x)]

on page 262. Assume that we want to express the relation between flying objects and canaries. Depending on the case, we will have canaries which fly, but also canaries which do not fly, for example, because they are wounded. We therefore have to include in the program both possibilities, i.e., both the canaries which do not fly because they are sick and the canaries which fly. This can be done with the PROLOG program:

flies(X) :- sick(X), !, fail.    /* 1 */
flies(X) :- canary(X).           /* 2 */

In this program we use the cut, !, to avoid the activation of the second rule in the case of a given canary X which is sick. If a canary X is sick, then the cut will prevent PROLOG from backtracking to the second rule and fail will lead to the failure of the head of rule /* 1 */. Hence, we will conclude that this given canary X does not fly. If a canary is not sick, or it is not specifically mentioned in the database that the canary is sick, then the first rule will fail and PROLOG will backtrack (the cut is not executed), activating the second rule, whose head will be satisfied. We therefore see that negation in PROLOG can be expressed by the use of a cut-failure combination.
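To see how these two rules behave, consider the following hypothetical facts (they are not part of the book's example) added to rules /* 1 */ and /* 2 */ above:

canary(tweety).     /* a healthy canary */
canary(coco).       /* a sick canary    */
sick(coco).

/* ? flies(coco).    rule 1: sick(coco) succeeds, the cut commits to rule 1,
                     fail fails, so the answer is "no"                         */
/* ? flies(tweety).  rule 1: sick(tweety) fails, so PROLOG backtracks to
                     rule 2; canary(tweety) succeeds and the answer is "yes"   */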
3.6.6 The Predicate not

A different attempt to express negation in PROLOG is the special predicate not(Q), subsection 3.3.5, which denotes the negation of Q, and is satisfied when the goal Q fails. The predicate not essentially simulates the procedural operation of the cut-failure combination and can be defined by the following programming clauses:

not(Q) :- Q, !, fail.    /* 3 */
not(Q).                  /* 4 */

Therefore, if we ask:

? not(Q).

PROLOG will activate rule /* 3 */. If Q is not explicitly defined in the database or if we cannot conclude Q from the data of the program, then the subgoal Q in the body of the rule will fail. Next, PROLOG will attempt to satisfy not(Q) through the activation of the clause /* 4 */. Since Q fails, not(Q) is satisfied by the NF rule, and PROLOG will reply "yes", i.e., the negation of Q holds. Now, if either Q is explicitly defined, or we can infer Q from the data, the subgoals Q, !, and fail in rule /* 3 */ will be satisfied in turn. The cut, !, prevents the activation of clause /* 4 */ and fail denotes the failure of not(Q). Therefore, the head of rule /* 3 */, not(Q), does not succeed, and PROLOG will reply with "no", which means that Q can be satisfied, whereas not(Q) fails. The use of not may help a lot in expressing negations in programs. In the example of the previous subsection, clauses /* 1 */ and /* 2 */ can be represented by the rule:

flies(X) :- canary(X), not(sick(X)).    /* 1' */

The predicate not is built-in in most PROLOG editions. If it is not, then we can define not by the programming clauses /* 3 */ and /* 4 */. Once again, we observe that PROLOG offers the ability to define the predicates of its language through the language itself (PROLOG through PROLOG).
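The remark on safe negation in subsection 3.6.3 applies to not as well: it behaves as expected only when its argument is ground at the time of the call. The following minimal sketch (with hypothetical facts, not from the book) shows what can go wrong with unbound variables:

likes(mary, wine).
likes(john, beer).

/* ? not(likes(mary, beer)).        succeeds, since the goal likes(mary, beer) fails  */
/* ? X = mary, not(likes(X, beer)). succeeds, because X is bound before the negation  */
/* ? not(likes(X, beer)), X = mary. fails, although Mary does not like beer:
                                    likes(X, beer) succeeds with X = john, so the
                                    negated goal fails before X is ever bound         */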
3.6.7 Nonmonotonic Logics

Reasoning based on the CWA presents inherent weaknesses. Let us for example assume the following singleton:

S = {α ∨ φ}

where α and φ are clauses in Predicate Logic. If we represent by S ⊨CWA A the fact that clause A is a consequence of S under the Closed World Assumption, then:

S ⊨CWA ¬α    and    S ⊨CWA ¬φ

because α and φ are neither explicitly defined, nor can they be deduced from the data of set S. Therefore, the consequences of S under the CWA, ConCWA(S), are:

ConCWA(S) = {α ∨ φ, ¬α, ¬φ}

which is obviously inconsistent (for example, applying resolution two times will result in □). Therefore, the use of the Closed World Assumption leads to inconsistencies, even in trivial cases. One further problem which arises when working with the CWA has to do with the generality of the rules of a program and their exceptions. For example, according to rule /* 1' */ of subsection 3.6.2, to conclude that something flies, we have to eliminate the possibility that it is sick. This means that we have to explicitly enter all the exceptions to rule /* 1' */.
This in general is impossible, because the exceptions might not be known in advance, but might arise during the execution of the program. In recent years, there have been attempts to develop formal methods for conclusion deduction which do not present problems similar to those presented by the CWA. The related research aims at the development of new logics, called Nonmonotonic Logics. The main difference between Nonmonotonic Logics and the classical Predicate and Propositional Logics is that in Nonmonotonic Logics we can invalidate previous conclusions when we insert new conclusions. Typically, in the framework of classical Logic, the following holds:

If S ⊨ A then it follows that S ∪ S' ⊨ A

meaning that if clause A is a consequence of a set S of clauses, then A remains a consequence of any extension of S by a clause set S'. In Nonmonotonic Logics, NM for short, it holds that:

If S ⊨NM A then in general S ∪ S' ⊭NM A

which means that if clause A is a consequence of S in a Nonmonotonic Logic, then A in general does not remain a consequence of the extension of S by some clause set S'.
Cancellation of previous conclusions after the expansion of data is closer to the natural way that people think, i.e., revising their beliefs according to the information which they have about their surroundings.
Nonmonotonic Logics aspire to
become the new foundation of Logic Programming languages, like PROLOG, aiming at the development of "smarter" and more capable systems. In Nonmonotonic Logics, the universe is not closed: The universe is neither static, nor once and for all given, but it constantly expands by the inflow of new data and information; whereas the concept of truth in the universe is relative, and always depends on the knowledge which the universe possesses at each moment. The inference mechanism in Nonmonotonic Logics is based on the Closed World Assumption at each time instant: Assume that the universe is expressed by the set S1 of clauses at time instant t1, whereas at time instant t2, with t2 > t1, it is expressed by the set S2, where S1 ≠ S2. In order to find the consequences of S1 and S2, in other words, in order to answer queries concerning S1 or S2, we assume that the Closed World Assumption holds in both S1 and S2. This means that at time instant t1, the universe is closed and is expressed by the set of clauses S1; and at time instant t2, the universe is also closed and is expressed by the set S2. More information about Nonmonotonic Logics can be found in, for example, [Turn84] and [Besn89].
3.7 Expert Systems

3.7.1 Artificial Intelligence

The field of Artificial Intelligence, AI for short, deals with the development and formalization of intelligent methods and solution procedures for problems. By the term "intelligent", we mean methods which allow reasoning and judgement in a way comparable to the human thinking process. Research in AI aims at the construction of symbolic representations of the world, and covers areas such as robotics, natural language understanding, and expert systems.
3.7.2 Expert Systems and Knowledge Management

Expert systems are computer programs capable of solving complex problems which require extensive knowledge and experience in order to be solved [BuSh84]. The greatest difference between traditional programs and expert systems is that traditional programs manage data whereas expert systems manage knowledge. The differences between the two kinds of programming can be presented in detail in the following table:

Data Management                  Knowledge Management
Data Representation and Use      Knowledge Representation and Use
Algorithmic Procedures           Search Procedures
Repeating Procedures             Conclusive Procedures
Expert systems are specialized systems, meaning that they refer to subfields of specialized application fields, such as medical diagnosis systems, mechanical failure diagnosis systems, systems for the synthesis of complex chemical compounds, etc. The means by which expert systems achieve their goals are based on sets of facts and sets of heuristics, i.e., rules for knowledge management. These facts and rules are developed by specialized scientists who work in the field to which the expert system has applications. We can say that this collection of facts and heuristics constitutes the knowledge of the system in this specific field, and hence expert systems are often called Knowledge Based Systems. An expert system consists of a database, an inference mechanism, and the interface for the communication between the system and the user. The database is a simple set of elements and states, which constitutes the description of the universe of the given field. Knowledge representation [Watt90] in systems based on rules is achieved by using rules of the form:

IF    (condition_1) and ... and (condition_k)
THEN  (action)

The inference mechanism consists of a set of general purpose rules, which is used to guide and control the inference process. This mechanism implements
general models, which characterize and formalize processing and resolution procedures of problems, such as the depth-first search, the backtracking and other general purpose inference procedures described in previous sections. In cases where the inference mechanism can become autonomous with respect to the specialized knowledge which a given expert system contains, and work independently, it can be used for the development of new expert systems. Such systems with general inference power are called shells. The interface is responsible for communication between the user and the system. In most cases, specialized subsystems which analyze and synthesize natural language (parsers) are used, in order to get a natural and friendly communication between the user and the system. The acquisition and formalization of the knowledge that needs to be embodied in a given expert system is one of the most fundamental and, at the same time, difficult stages during the development of the system. Indeed, it is called a "knowledge acquisition bottleneck". The difficulty lies in the provision and simultaneous formalization of specific rules which express the procedures which a specialist in problem solution in the field follows. Therefore, the work of the knowledge engineer, in other words the person whose duty is to find and express these procedures, is the basic element in the successful development and use of an expert system. In recent years, due to the importance of knowledge acquisition mechanisms, automatic knowledge acquisition systems have been developed. The methods used are in many cases very successful, and present great theoretic interest [OlRu87, Quin86]. The tools used in the development of expert systems are divided into two main categories [WeKu84]: the high level symbol processing languages, and the general purpose expert systems or shells. The most widespread high level symbol processing languages are LISP [WiBe89] and PROLOG. The shells are used to facilitate and accelerate the development of special expert systems. With the use of special user friendly interfaces and the structures which they embody, they facilitate both the input and the revision, if needed, of an expert system's facts and rules.
Furthermore, they embody an autonomous inference
procedure capable of transacting and deriving conclusions from different databases. This offers the ability to develop expert system prototypes rapidly.
3.7.3 An Expert-System for Kidney Diseases

The Kidney Expert System, KES, is a system which diagnoses kidney diseases, and was developed with the cooperation of the Logic and Logic Programming Group of the Department of Mathematics of the University of Patras and a team of specialized doctors from the University Hospital of Patras. In the presentation which follows, the clauses and procedures which implement KES have been limited to those clauses and procedures which describe the structure and operation of the system, and at the same time display the use of PROLOG in the development of expert systems. The rules which constitute the special purpose Knowledge Base of KES are the formalization of the manner in which the doctor makes a diagnosis based on the symptoms of the patient: the fact that the patient is relatively old, that his urine presents a qualitative change as well as his blood, and that the kidneys appear swollen. If all these conditions are met, then the patient probably suffers from hydronephrosis. Therefore, the rules of KES are of the following general form:
IF      the data of the patient satisfy some given requirements
and     reported(X)
and/or  observed(symptom_1)
        ...
and/or  observed(symptom_k)
and/or  found(lab_result_1)
        ...
and/or  found(lab_result_n)
THEN    the diagnosis is (disease) with probability P
These rules were formed under the guidance of specialized doctors. Therefore, for hydronephrosis, the corresponding rule is:

diagnose(hydronefrosis, 0.7) :- age(A),
                                A > 11,
                                reported(change_in_quality_of_urine),
                                observed(bloodurine),
                                observed(twosided_kidney_expansion).

where 0.7 is the probability that the given patient has hydronephrosis. The theoretic probability measure which is appended to each diagnosis is an a priori assessment of probability. Therefore, the possibility of diagnosing the same disease through different paths and with different probability statements is offered.
The basic predicates included in KES are:
data:     Accepts four arguments: sex, age, first name, and last name of the patient.
observed: Accepts one argument: the name of the symptom observed by the examiner.
found:    Accepts one argument: the name of the result of a possible laboratory test.
reported: Accepts one argument: one of pain, change_in_quality_of_urine, change_in_quantity_of_urine, change_in_frequency_of_urination. The predicate reported is used for the grouping of symptoms. Therefore, a rule of the above general format which does not contain in its body the predicate reported, with any argument, will not be activated, resulting in the acceleration of the inference procedure.
diagnose: Accepts two arguments: the name of the diagnosis and the corresponding probability that the patient has the disease stated by diagnose.

After the description of the basic predicates and the special rules of KES, we can move on to the presentation of the procedures and the corresponding programming clauses which implement the inference mechanism of the system. At the beginning, we have to define the input procedure of the patient's data, in other words, predicate data:

data(S, A, L, F) :- write("Patient's sex (m/f)"), read(S), nl,
                    write("Patient's age"), read(A), nl,
                    write("Patient's last name"), read(L), nl,
                    write("Patient's first name"), read(F), nl,
                    assertz(sex(S)), assertz(age(A)),
                    assertz(surname(L)), assertz(name(F)).
~sertz(reported(X)).
"(y/n)?"),
280
Logic Programming: The PROLOG Paradigm
Now, tile input procedures for the symptoIns observed during the examination, as well as the possible laboratory results which the doctor might have available, have to be defined: observed(S) :. write(' 'Observed during the examination' ',S," (y/n)?' '), read(ANSWER) , nl, ANSWER ~ c c y, ,
~ssert-(observea(S)). found(R):-
write(''Laboratory read(ANSWER), nl, ANS%VER ~ c c y, ,
result'',R,''observed
(y/n)?''),
~ssert-(observed(8)). In tile above definitions, the existence of assertz basic implementation
results in tile insertion of tile
of the head of the corresponding
rule, when
it is satisfied.
Therefore, tile interaction between the system and the user during the inference procedure is recorded and stored for any fllrther processing. Finally we define the general diagnose predicate: proceed_in_diagnose :-- !, data(S, A, L, F ) , di agnose ( D, P ) , a s s e r t z (diagnose (D, P)). If tile above corporates
predicate
there will be a dialogue diagnosis
is set as a query
heuristics, then PROLOG
which
Let us assume
between
if we
assume
that the database
on to derive conclusions.
tile user and
can be derived, depending for exalnple
and
will move
PROLOG
which
in-
In reality,
will result in tile
on tile user's responses.
that tile database
contains
the following
diagnose
rules: diagnose(acute_pyelonephritis,
0.7) :reported(pain), observed(high_fever_with_shiver), observed(bloodurine).
diagnose(acute_pyelonephritis,
0.8) :age(A),
A
<9,
diagnose ( a c u t e _ p y e l o n e p h r i t is, 0.7), found (po s i t i ve_ur ine_cul t u r e ). d i a g n o s e ( a c u t e _ p y e l o n e p h r i t i s , 0.9) :reported(change_in_frequence_of_urinations),
reported(pain), diagnose(acute_pyelonephrit is, 0.8), observed(pain_in_kidneys) observed(micturit ion) , observed(dysury).
Expert Systems
281
Asking now the query: ? proceed_in_diagnose.
tile following dialogue takes place, where the user's responses are underlined: '7 P a t i e n t ' s
sex(m/f)
? Patient's
age
? Patient's
last
? Patient's
first
m
11 name S U R N A M E name NAME P R O L O G e n t e r s t h e following clauses: sex(m). age ( 11 ). surname(SURNAME). name(NAME).
? Reported from patient pain (y/n)?
y
P R O L O G e n t e r s tile clause:
reported(pain).
? Observed during examination pyuria (y/n)?
y
P R O L O G e n t e r s tile c l a u s e s
observed (pyuria). diagnose (acut e_pyelonephrit is, 0.7). and attempts to satisfy the predicate, diagnose, in a different way. Since in the terms of the second clause for diagnose, age(A) is mentioned, which is satisfied for A - II, the term A < 9 is not satisfied and therefore P R O L O G goes on to examine the third clause for the predicate, diagnose.
? Reported from patient change_in_frequence_of_urinations
(y/n)?
II
P R O L O G abandons this rule as well, and ends the attempt to satisfy the initial query, and then replies because of the cut.
D - acute_pyelonephritis
P=0.7
282
Logic Programming: The PROLOG Paradigm
3.8 The Evolution of Logic Programming 3.8.1 Editions of PROLOG PROLOG, in tile form of PROLOG I, was designed as an artificial language for natural language processing based on logic [Colin90, Col90a].
Despite its great
capability in the expression and resolution of complex problems, its abilities in arithmetic calculations were relatively limited. PROLOG II, the PROLOG which we have described, is a relatively slick and rich language in dealing with arithmetic calculations, due to the addition of infinite trees and the predicate " \ = ". The latest edition of PROLOG, PROLOG III [Colm90, Col90a], has much greater capabilities.
It includes a built-in mechanism to manage infinite trees, Boolean
Algebra, the predicates <,
=<,
>,
=>
and
\=
and the addition, subtraction, and multiplication functions with constants. PROLOG III is a Constraint Logic Programming language [Cohe90]. Constraint
Logic Programming is the result of tile a t t e m p t to enrich the Horn clause language with variables which take values in different domains, for exalnple, trees, Boolean Algebra, real or rational numbers, etc..
3.8.2 Dialects of PROLOG PROLOG, actually, offers a mechanisin for tile execution of logic prograins. Tile
dialects of PROLOG use the same basic mechanism, and their difference lies mainly in the Inanner of interaction with the user. MICRO-PROLOG [C1Mc84], TURBO-PROLOG, ARITY-PROLOG and A-PROLOG [Xant90] are dialects for small computers.
A-PROLOG is the Greek dialect and
uses Greek characters. IC-PROLOG, mu-PROLOG and nu-PROLOG allow for a more comt)lex selection of predicates for unification [Lloy87].
Nu-PROLOG, through its automatic pre-
processor, is much more responsive in controlling the program conlpared with tile other dialects of PROLOG, and is considered to be a language very (:lose to Ideal Logic Programming.
The Evolution of Logic Programming
3.8.3 P R O L O G
283
and Metaprogramming
In chapters 1 and 2, we saw that for a language Z; of Predicate or Propositional Logic, there exists a corresponding metalanguagc; the language in which we express (:oIilments for Z; and its clauses, Remark 1.5.2. Tile sam(; (:ai)ability also exists in PROLOG. The metaprograms of PROLOG have PROLOG programs as terms, and they examine their relations an(t properties. For exaInple [Kowa90], the predicate: demo(T, P) which is interpreted as "program T is used for the proof of (:lause P" Therefore, using the capabilities of PROLOG for metat)rograinming, beginning froIll simple programs and using metalanguage in every step, we can writ(; a PROLOG program which processes a group of PROLOG prograI~lS, ea(:h one of whi(:h t)roccsses other PROLOG t)rograms, etc.. The purpose of using metaprogramming is automatic development of PROLOG t)rograIns, and mltomati(: verification of properties of certain PROLOG programs.
3.8.4
PROLOG
and Parallelism
The rescarctl for tile (tevelopinent of faster and 1,lore effective computing inachines led to tile construction of parallel computers. The classic computers process each c()mnlan(t or statement of the program on(; after th(; other, sequentially; and so the machine is cat)able of executing only one coIninand or stateinent in any one coinputational step. In parallel (:omputers, one (:()Hq)utational step inight (:()nsist of mutually indet)cndcnt commands or statements which arc cxecute(t concurrently by means of multiprocessors. Although the use of parallel algorithms an(t parallel computers gives simple and natural solutions to complex problems, it also creates a mlmber of new problems having to do with the synchronization of the communications between the indet)(;ndent processes, as well as problems having to do with the control of the progra, Ii. These probleIns can be overcome by sp('(:ial purt)ose t)rogramming languages which are capable of expressing parallelism, e.g., OCCAM, and which a t t e m p t to bridge the gap between this new multiproccssor technology (hardware) an(t the classical sequential software.
284
Logic Programming: The PROLOG Paradigm
PROLOG is unable to express parallelism through its language. Moreover, as we have seen, a PROLOG prograln is always executed by sequential pro(:esscs. Therefore, parallel programming languages are being develope(i and have been enhanced in recent years. These languages are based on the parallel interpretation of logic programs and contain special elements for the synchronization of processes and the control of tile program. Tile R e l a t i o n a l L a n g u a g e , Clark and Gregory (1981), was the first logic t)rogramming language which could express nondeterministic choices between independent declarations of the program. The clauses of the language have one of the following forms: A :-
G1,
...,
Gk/B1,
A
G,,
...,
a,r /
: ....
Be,
...,
B~
[I S2 II-.. II S,.
(,)
(**)
where A, G 1 , . . . , Gk, B 1 , . . . , B,. are predicates, $1, . . . , Sr are conjunctions of predicates, A is the head of clauses (,) and (**), B1, ... , Br tile tail of (,), $1, . . . , S~ the tail of (**), and G1, ... , Gk the conditions (guards) which have to hold in order for the program to use (,) and (**). The difference between clauses (,) and (**) is procedural: the conjunctions S/, i = 1 , . . . , r, are executed by independent processors. Tim unification wittlout backtracking is executed in paralle.l. Tile Relational Language, the first parallel logic programming language, has relatively limited abilities. Therefore, a Relational Language program carl only verify the truth of simple relations between facts. C o n c u r r e n t PROLOG, Shapiro (1983), uses programming clauses of the form (,) but two different types of variables; the r e a d _ o n l y v a r i a b l e s and the ordinary variables. The read_only variables cannot be unified. Tile unification algorithm has to wait until these variables take a certain value.
Through this delay the
synchronization problem is also solved. PARLOG (parallel programming in logic), Clark and Gregory (1984), is an improvement of the Relational Language, whereas GHC (Gllardcd Horn Clauses), Ueda (1985), is a combination of the best eleHmnts of PARLOG and concurrent PROLOG.
In general, the field of parallel logic programming languages is still evolving. More information about these languages can be found in [Shap87].
PROLOG and Predicate Logic
285
3.9 PROLOG and Predicate Logic As we have seen, Logic P r o g r a m m i n g deals with a specific class of Predicate Logic clauses, tile Horn clauses.
Therefore, the PROLOG language without the
special predicate symbol ". ", subsection 3.3.7, is a subset of the language of Predicate Logic. PROLOG, which is actually a conclusion inference mechanism from assumptions within Logic Programming, can be distinguished into two varieties: (1)
Pure PROLOG
(2)
Real P R O L O G
Pure PROLOG does not contain control mechanisnls and therefore "cut", !, and predicates f a i l
and not, for the negation, are not defined in its language.
Real PROLOG is the extension of pure PROLOG by the predicates "cut", f a i l and not. This division of PROLOG into pure and real is very iInportant; because the completeness theorems of Predicate Logic hold in pure P R O L O G , yet they do NOT hold in real PROLOG: Tile data of a given PROLOG program are a set of propositions S of Predicate Logic. Every query of the form "? P. " in tile program leads to the proof or disproof of S }--R P.
In pure P R O L O G we are
sure that, because of the completeness theorem (Theorem 2.10.11) and Robinson's theorem (The.orem 2.9.8), tile unification algorithm will stop either after having found all tile values of variables for which S k R P, or after disproving S k-R P. In real PROLOG, the use of "cut" is decisive. By cutting off a sub-
tree of the solution tree with "cut", we might hinder the program from finding the only values of variables for which S ~-R P.
The exam-
ple of subsection 3.4.4 (pp. 240-242) is characteristic: Tile use of rule / 9 1' 9
forced P R O L O G t o give a wrong answer.
Therefore, whereas the programs of real PROLOG which use "cut", f a i l , and not, are very slick, the programmer and the user do not have any guarantee that the results of their programs will be correct. The programs in pure PROLOG are more demanding in terms of execution time. The need for faster programs led to the development of techniques which improved tile performance of programs even in real PROLOG. Real PROLOG, without
286
Logic Programming: The PROLOG Paradigm
Occur Check, Algorithm 2.9.7, with "cut", f a i l , and not, is the most effective and most widespread logic programming language. Tile fact that there is no theoretical guarantee regarding the correctness of its programs does not mean that its programs give the wrong result. It only means that tile programIner has to be ~nore careful with tile basic terms, "cut" and not. The use of real PROLOG offers in every PROLOG program greater speed and effectiveness at the cost of the lack of a general completeness theorem. A basic difference between Predicate Logic and PROLOG, both t)ure and real, is in the manipulation of a set of data S.
PROLOG organizes the data, numbering
the facts and the rules according to the order in which they are expressed. Furthermore, the order of predicates in the body of the rules is exactly tile order in which they are written in the program. Therefore, rule: A :-- B1 , B2.
(*)
does not match with rule A:
B2 ,
B1.
(**)
Although in Predicate Logic the two rules are equivalent because of tile commutativity of V, in PROLOG they are considered different. Moreover, tile choice between form (.) and form (**) directly affects the speed of execution of the program. This t)rocedural fashion in which PROLOG deals with data, causes problems in the t)rogram when there are facts of the form: A:
A.
(1)
or more generally: A:
B1, . . . , A , . . . ,
(2)
Bk.
In other words, Horn clauses of the form:
A~A or
(B~A...ABkAA)
~
A
PROLOG and Predicate Logic which are logically correct (because they are equivalent to -~A v A).
287 Hence,
whereas (1) and (2) do not give any information to the program, since they are clauses which are always true, any query of the form "? A. " causes an infinite loop through the recursive procedure. Tim program is unable to leave this loop by itself. Most of the infinite loops in PROLOG programs are due to the existence of trivial data of the form (1) or (2). On the contrary, logically true Horn clauses of the form: A :- fail. or equivalently, in Predicate Logic, expressions of the form:
PA~P-+
A
where P can be any sentence of Predicate Logic and A is a Horn clause, are used in real PROLOG to express negation. The negation in real PROLOG is much stronger than the negation in Predicate Logic. In Predicate Logic, the negation of a clause A is determined from the logically true clause: ~A ~
(A-~BA~B)
or equivalently:
~A e+ (A-~V]) Hence, if
SI-RA then --A and n o t ( A ) of real PROLOG are unsatisfiable sentences, Definition 2.5.15 and subsection 3.6.2. If however
S ~/R A i.e., if A is not provable from S with resolution, then, whereas in Predicate Logic we do not know if S F-R ~A,
in real PROLOG, because of the Closed World
Assumption (subsections 3.6.2 and 3.6.3), we will conclude not(A). Regardless of whether real PROLOG is complete or not, it is widely used and is the richest and most effective logic programming language. Definitely, controlling the correctness of the program always has to be carried out by the programmer and the user.
288
Logic Programming: The PROLOG Paradigm
3.10 Exercises 3.10.1 Write in PROLOG tile following expressions of everyday speech. (a) Chris hates studies. (b) Somebody loves George. (c) George is a friend of whomever he loves, if that person loves him too. (d) Mary likes whoever admires her. (e) Nick is afraid of whoever is bigger than him.
(f) Nick is jealous of whomever Mary likes, if this person is not shorter than Nick.
Solution: (a)
hate(chris, studies).
(b) love(X, nick). (c) is_friend(george, X ) : - love(X, george),
love(george, X).
(d) likes(mary, X):- admires(X, mary). (e) i s _ a f r a i d ( n i c k , X ) : - bigger(X, nick). (f) is_jealous(nick, X ) : - likes(mary, X ) ,
n o t ( s h o r t e r ( X , nick)).
3.10.2 Write the PROLOG program corresponding to tile proof of Exercise 2.12.37.
Solution: t(a, b, c, d).
/ 9 abed trapez, with vertices a, b, c, d 9 /
p ( a , b , c , d ) : t(a, b, c, d).
e(a,b,d,e;d,b)'- p(a,b,e,d). ? e(a,b,d,c,d,b).
/.
ab ll cd if abed trapez. . / /.
a b d - cdb if ab ll cd . / /.
abd-cdb
./
Exercises
289
3.10.3 Write a PROLOG program with the countries of the EU and their respective capital cities as the database.
Form the adequate PROLOG queries determining
the capital city of a given country as well as the country corresponding to a capital city. 3.10.4
Define, for the following database, the relations:
t lle , taZ er(X,Y)
(b) t ll, raZe(X)
(c) normal, n o r m a l ( X )
(d) short, s h o r t ( X )
Consider that a man, m, and a woman, f, are tall when they are respectively taller than 1.80 and 1.75, the corresponding limits for the relation s h o r t being 1.70 and 1.60. h e i g h t ( b i l l , m, 1.80). height(nick, m, 1.54). height(jim, m, 1.87). height(paul, m, 1.75). height(john, m, 1.66). height(alex, m, 1.76). height(mary, f, 1.73). h e i g h t ( b i l l , f, 1.61). height(joan, f, 1.68).
h e i g h t ( c a t h e r i n e , f, 1.78).
Solution: taller(X, Y):-- height(X,
_
tall(X):-- height(X,m,Y),
, Z),
height(Y,
_
,W ) ,
Y : > 1.80.
t a l l ( X ) :- height(X, f, Y), Y - > 1.75. normal(X) :- height(X,m, Y), Y < 1.70, Y < 1.80. normal(X) :- height(X,f,Y), Y > 1.60, Y < 1.75. s h o r t ( X ) : - height(X,m,Y), Y = < 1.70. short(X) :- height(X, f, Z ) ,
Y = < 1.60.
Z > W.
290
Logic Programming: The PROLOG Paradigm
3.10.5 Consider tile following tables of materials and their suppliers for a warehouse containing spare parts of cars: SUPPLIER
ARTICLE
SUP.CODE
PROFESSION
CITY
001
John
Manufacturer
ATHENS
002
Nick
Importer
PATRAS
010
John
Entrepreneur
SAL/ICA
110
Nick
Importer
PIRAEUS
ART.CODE
PRODUCT
MODEL
WEIGHT
003
Oil
30
ATHENS
004
Tyres
157/75
PATRAS
005
Lamps
RAAI
SAL/ICA
013
Oil
60
PIRAEUS
SUPPLIES
(a)
NAME
SUP.CODE
ART.CODE
QUANTITY
001
005
150
002
003
200
010
OO4
030
110
013
250
Represent the d a t a in the above tables in a PROLOG database containing 2-ary predicates only.
(b)
Give the adequate Horn clauses for the queries:
(1) (2)
W h a t are the names of the oil suppliers? Which city are tyre suppliers in, and which city are the lamp suppliers in?
(3) (4)
W h a t does John supply? Which oil suppliers supply less than 400 tons and are established in Patras?
(5)
Who are the lamp suppliers, and who are the tyre suppliers?
Exercises
291
Solution:
(a)
We introduce tile predicates: name(X, Y) profession(X, Y) city(X, Y) product (X, Y) type(X, Y) weight (X, Y) supp_art icle (X, Y) quantity(X, Y)
/ 9 / 9 / 9 / 9 /, /. / 9 / 9
X X X X X X X X
supplier supplier supplier product product product supplier supplier
code, code, code, code, code, code, code, code,
We thus form the following database: name(001, john). name (002, nick). name(010, john). name(110, nick). profess i on(001, manuf act urer). profession(002, importer). profession(010, entrepreneur). profession(110, importer). city(001, athens). city(002, patras). city(010, salonica). city( 110, piraeus). product (003, oil). product (004, tyres).
product (005, lamps). product (013,oil). type(003, ' ' 3 0 ' ' ). type(004, ' ' 157/75' ' ). type(005, ' 'RAAI' ' ). type(013, ' ' 6 0 ' ' ) .
weight (003,300). weight (004, 2000). weight (005, I0). weight (013,500). supp_article (001,005). supp_art ie ie (002,003). supp_art icle (010,004). supp_article (II0,013).
Y Y Y Y Y Y Y Y
supplier name 9 / supplier profession 9 / supplier city 9 / product name 9 / product type 9 / product weight 9 / article code 9 / product quantity 9 /
292
Logic Programming: The PROLOGParadigm
(b)
(1)
supp_oil (NameSupplier):product (CodProduct, oil),
supp_article (CodSupplier, CodProduct), name (CodSupplier, NameSupplier).
(2)
city_supp_tyres (CitySupplier):product (CodProduct, tyres), supp_article(CodSupplier, CodProduct),
city(CodSupplier, CitySupplier). city_supp_lamps (Cit ySupplier):product (CodProduct, lamps), supp_article(CodSupplier, CodProduct), city(CodSupplier, CitySupplier). (3)
supplies(NameSupplier, NameProduct) :name (CodSupplier, NameSupplier), supp_article(CodSupplier, CodProduct),
product(CodProduct, NameProduct). ? supplies(john, NameProduct).
(4)
supp_oil_le s s_400_patras (NameSupplier):product (CodProduct, 0il), weight (CodProduct, WeightProduct), WeightProduct < 400, supp_article(CodSupplier, CodProduct), city(CodSupplier, patras), name (CodSupplier, patras).
(5)
suppl ie s_lamps (NameSuppl ier):product (CodProduct, lamps), supp_article (CodSuppliers, CodProduct), name(CodProduct, NameSupplier). supplies_tyres(NameSupplier) :- product(CodProduct,tyres), supp_article(CodSupplier, CodProduct), name(CodProduct, NameSupplier).
Exercises
293
3.10.6 Analyse the PROLOG functioning of the following program, and give the corresponding state tree structure. A(a,b).
/,
B(X) :- C(X, r ) .
/ 9 2 9/
1 ,/
C(b,a) :-- A(a,b).
/,
C(X, Y):- A(X, Z). ? B(X).
/ 9 4 9/
3 9/
Solution: The goal B(X) is unified with the head of rule / 9 2 9
The new subgoal is
C(X, Y). C(X, Y) is unified with the head o f / . 3 9 for X / b , Y/a. However, A(a,b) is unified with fact / . 1 9 hence the program succeeds for X/b, Y/a, and PROLOG answers-
X--b Y-b PROLOG now frees variables X and Y from their values, and by backtracking it tries to satisfy the subgoal C(X, Y) differently. C(X, Y) is unified with the head of rule / 9 4 9 The new subgoal is A(X, Y), which is unified with fact / 9 1 9 / for
X/a, X/b. After having freed the variables and backtracking once again, is no longer able to satisfy the goal or any subgoal, and answers:
X-a Y=b The state space tree is" ? B(X). 2
X/b,Y / ~ ? A(a, b). yes
? c(x, Y).
? A(x, Y). x/~, Y/b ? A(a, b). yes
PROLOG
294
Logic Programming: Tile PROLOG Paradigm
3.10.7 The police have found an unfortunate woman by the name of Suzannc murdered, with her head smashed by a blunt instrument. The main roles in this sad story are played by Alan, 35, a butcher as well as a thief, and John, 25, a soccer player who is sentimentally attached to both Suzanne and Barbara. B a r b a r a is a 22 year old hairdresser who is married to Bert, a 50 year old lame joiner.
During tile
investigation, a revolver was found in John's house. For the police, jealousy and a planned robbery are possible motives. Help them find the murderer. Solution:
/ 9 assumptions and claims formulae from the police investigation 9 / person(john, 25, m, f o o t b a l l _ p l a y e r ) .
/ 9 m : male
*/
person(allan, 35, m, butcher). person(barbara, 22, f, h a i r d r e s s e r ) . person(bert, 50, m, carpenter). person(allan, 35, m, pickpocket). had_affair(barbara, john). had_affair(barbara, bert).
had_affair(susan, john). killed_with(susan, club). motive(money). mot i ve (3ealousy). owns(bert, wooden_leg). owns(john, pistol).
/ 9 "common logic" assumptions 9 / operat es_ident ically (wooden_leg, club). operates_identically(bar, club). operate s_ident i cally (pair_of_scissors, knife). operates_identically(football_boot, club). owns_probably(X, football_boot) :- person(X ..... owns_probably(X, pair_of_scissors) : person(X .... owns_probably(X, Object) :- owns(X, Object).
football_player). ).
/ 9 assumptions about the murderer's motives- two categories of suspects:, / / 9 s u s p e c t s by their capability of c o m m i t t i n g tile m u r d e r , and / 9 s u s p e c t s by their motives
9/
9/
suspect_by_capability(X) :- killed_with(susan, Weapon), operates_identically(Object, Weapon), owns_probably(X, Object). suspect_by_motive(X):
motive(jealousy), person(X, _
, m , __ ),
had_affair(susan, X).
suspect_by_motive(X) :- motive(jealousy), person(X,_
),
,f,_
had_affair(X, Man) , had_affair(susan, Man). suspect_by_raotive(X) :- motive(money) ~ person( X . . . . .
/ 9 non-probabilistic " c o m m o n logic" a s s u m p t i o n
pi ckpo cke t ).
9/
mostly_suspected(X) :- suspect_by_capability(X), suspect_by _mot i re( X ). By its definition, the predicate m o s t l y _ s u s p e c t e d has no probabilistic interpretation, and depends only on the two predicates s u s p e c t _ b y _ c a p a b i l i t y and suspect_by_motive. Justify by a full track down the answers to the following PROLOG queries: ? suspect_by_capability (X). X = bert
X =john ? s u s p e ct_by_mot i v e ( X ) .
X = john X = barbara
X = allan ? mostly_suspected(X). X =john
296
Logic Programming: The PROLOG Paradigm
3.10.8 Reading and evaluating kilometric distances on a m a p often proves to be a very complex procedure in the preparation of an excursion. Hence a PROLOG program would be very useful.
Solution: As a sample program, consider: / 9 data from a map 9 / road(kalamata, t r i p o l i , 90).
road(tripoli, argos, 60). road(argos, korinthos, 49). road(korinthos, athens, 83). / 9 rules for the calculation of kilometric distances 9 / route(Wownl, Town2, Distance) :- road(Townl, Wo~m2,Distance). route(Wownl, Town2, Distance) :- road(Townl, X, Dist 1), route(X, Town2, Dist2), Distance -- Dist 1 + Dist2. W h a t is the distance between K a l a m a t a and Korinthos? ? route(kalamata, korinthos, X).
]
X = 199
3.10.9 Give the recursive definition of the predicate f a c t o r i a l ( X , based on: (a)
1! -
1
(b)
n! =
(n-1)!,n
Construct the state space tree of the program for the query:
? factorial(3, X).
Y),
Y being X ! ,
Exercises
297
Solution: Here (a) gives the b o u n d condition and (b) gives the recursive step. T h e definition we are seeking is thus: f a c t o r i a l ( I , 1) :- !.
/.
1 ./
/.
2 ./
f a c t o r i a l ( N , Factor) :- N1 = N - 1, f a c t o r i a l ( N 1, F a c t o r l ) , F a c t o r = N 9 F a c t o r l. T h e cut in f o r m u l a / ,
1 9
has been used in order to prevent P R O L O G from
b a c k t r a c k i n g d u r i n g the calculation of 1!.
T h e s t a t e space for the query: ? f a c t o r i a l ( 3 , X). is depicted by the following tree:
? factorial (3, X ) . 2 N/3, Factor/X
? N 1 = 2 , factorial(2, F a c t o r l ) , F a c t o r = 3 9 Factor1. 2 N/2, Factor/Factor1
? N 1 _.l.~...f.actorial(1, F a c t o r l ) ,
........... cut in 1
Factor = 2 9 F a c t o r l .
Factorl = 1
Factor 1 = 2 , 1 = 2
Factor - 3 , 2 yes
= 6 X --6
3.10.10
Define recursively the predicate exp(x, y, z), meaning "x^y = z", which defines the exponential function based on:

x^0 = 1
x^y = x · x^(y-1)
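A possible solution sketch (not from the book), using the standard is/2 for the arithmetic:

exp(_, 0, 1).
exp(X, Y, Z) :- Y > 0, Y1 is Y - 1, exp(X, Y1, Z1), Z is X * Z1.

/* ? exp(2, 5, Z).   gives Z = 32 */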
3.10.11 Assume we wish to define a predicate declaring the n u m b e r of parents of a person x. According to the Bible, we have to declare t h a t A d a m and Eve had no parents. We thus write the p r o g r a m P I : /.
program P1 9 /
number_parents(adam, O) :- !.
/ 9 1 9/
number_parents(eve, 0 ) : - !.
/ 9 2 9/
number_parents(X, 2).
/ 9 3 9/
where we use the cut to avoid backtrackings in queries of the form: ? number_parents(adam, X). and .7 number_parents(adam, 0).
(a)
W h a t will PROLOG answer to the query:
(,)
? number_parents(eve, 2). and why?
(b)
W h a t will PROLOG answer to (.) if, instead of the p r o g r a m P1, we use the following p r o g r a m P2: /,
program P2
9/
number_parents(adam, N) :- !, N = 0. number_parents(eve, N) :- !, N = 0. number_parents(X, 2).
/, /, /,
1 ,/ 2 ,/ 3 ,/
and what is the answer to
? number_parents(X, Y). given by the p r o g r a m P 2 ? 3.10.12 Define with a PROLOG p r o g r a m the absolute value of an integer.
3.10.13
Write a PROLOG p r o g r a m checking whether an element (a person or a list) is part of a list L, and examining all the elements of L.
Solution: member(H, [ H : _ ]). member(I, [ _ : T ] ) : - member(I, T).
3.10.14 Check whether the list H is m e m b e r of the list L, and find the first element of L.
Apply this to the list L - [ a , b, c].
Solution: member(H: [H _ ]):- !.
/,
1 ,/
member (I, [ _ : T]) :- member (I, T).
/*
2 */
The only solution found by PROLOG for the query:
? member(X, [a, b, c]). is X = a. The existence of the cut, !, in clause / 9 1 9 / makes the finding of other possible solutions impossible.
3.10.15 Write a program collecting the first n elements of a list.
300
Logic Programming: The PROLOG Paradigm
3.10.16 Write a program defining the subtraction of two lists. Solution:
subtract(L,[
],L) :- !.
s u b t r a c t ( [H : T], L, U):- member(H, L),
!, subtract(T, L, U).
s u b t r a c t ( [H : T], L, U]) :- !, subtract(T, L, U). subtract( . . . .
[ ]).
sambar(S,[S:
_ ]).
member(I, [ _ : T]):- member(I, T).
3.10.17 Define by means of a program, the relations member,
subset
and
intersection
of set theory. Solution:
I,
~ ,I member(H,[H : _ ]) member(I, [ _ : T]):- member(I, T).
/,
c ,/ subset([H : T], I) :- member(H, I ) , subset([
/,
N
subset(T, I).
],I).
*/
intersection([
],X,[
]).
i n t e r s e c t i o n ( [ X : T],I, [X: Z]):- member(X,I),
!,
i n t e r s e c t i o n ( T , I, Z).
3.10.18 Assume we are given a list of eight animals, V = [al, a2, r l , bl, b2, hl, ol, o2], which are classified as follows:
F i s h 9 a l , a2
Animals
Reptiles "rl
N on- a q u a t ic
Herbivorous 9 hi
M a m m als
B i r d s 9 bl, b2
O m n i v o r o u s " 01, 02
Write a PROLOG database describing the above classification, and a PROLOG program replying to the queries(a)
W h a t is the classification of tile animal X?
(b)
Which animals have classification Y?
Solution:
aquatic(aquatic, [al, a2]). reptile(reptile, [rl]). bird(bird, [bl,b2]). mammal(V, X): herbivorous(V~ X) ; omnivorous(V~ X). herbivorous (herbivorous, [hi]). omnivorous(omnivorous, [ol,02]). classification(A, C) :- animal(C, L), member(A, L). animal(V~ X):- aquatic(V~X); terrestrial(V~X). terrestrial(terrestrial(V),X):- reptile(V~X); bird(V~X); mammal(V~X). In answer to the classification queries: (a)
? classification(X,
(b)
?
animal(X,
C).
Y).
3.10.19 Write a PROLOG program displaying a list of names in alphabetical order. The names in the list must be separated by a full stop, and the word " s t o p " must appear after the last name.
Solution: The alphabetic ordering of the names can be achieved by means of a tree, every node of which is the predicate node(L, W, R), interpreted as "L is the next left node of the present node, R is its next right node, and the next middle ,lode is the name just read by the program". Each name will be compared to the origin, and recursively to the next left and right nodes of the corresponding node. This procedure will end when tile next node is a variable, or when the name is already in the tree. For example, for tile names john. mary. james, edward, stop.
we will have the following tree:
node
node
node
X
edward
james
john
node
U
Z
mary
Y
The program we are seeking is: order(X) :- reading(X), name(W, W l ) ,
c l a s s i f y ( W 1 , X , Y), order(Y).
order(X):- nl, write(" ordered words:''), nl, write_tree(X), nl, nl, write(''Done.''), nl. reading(W):-read(W),
W----stop,
!, fail.
classify(W, node(L, W, R),node(L, W, R)):- !.
classify(W, node(L, W1, R),node(U, W1, R)):lower(W, W1), !, classify(W, L,U). classify(W, node(L, W1,R),node(L, W1,U)):- !, classify(W,R, U).
lower([ ], _ ):- !, lower(Y,T). lower([W: Y] ,[Z, T] ) :- X < Z. write_tree(X):- raY(X), !. write_tree(node(L, W, R)) :- !, write_tree(L), name(W, Wl), write(W, Wl), write('C."),
write_tree(R).
V
To run the program, the following instruction must be given: 9-
order(X).
and then the list of names to be ordered.
3.10.20
Write a PROLOG program forming a tree with a given list of numbers and printing that list as a tree with its elements ordered. 3.10.21 Write a program representing the diagram: e
T a
T d
> b
~f
T g
T > c
and finding the next node of c and b.
Solution: The arrangement of tile nodes is declared by tile predicate a r c ( x , y ) , whi('.h is interpreted as "an edge starting at x and ending at Y"- Tile p r o g r a m we are seeking is: arc(a, b). arc(c, b). arc(d, a). arc(d, c). arc(b, e).
~rr
f).
arc(f,g). arc(b, g).
? arc(c, Y).
? arc(b, Y).
c~
oo o
Solution" The given points are described by the predicate: p o i n t ( x , y)
where x and y are the coordinates of the points. The equation of the interior of the circle is" (x-c)
2+(y-d)
2 <
r2
In the Cartesian plane, the three points are collinear if: Y3 -- Yl X3 -- Xl
Y2 - Yl X2
--
Xl
Finally, the predicate: inside(point
(x,
y), (c, d), r)
declares that" p o i n t ( x , y)
lies inside the circle with centre (c, d) and radius r. Tile necessary built-in predicates and the formalism of arithmetical operations can be found in subsection 3.5.4.
3.10.24 Define combinations of the elements of a given list.
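A possible solution sketch for this exercise (not from the book); comb(K, L, C) succeeds when C is a combination of K elements taken, in order, from the list L:

comb(0, _, []).
comb(K, [H|T], [H|C]) :- K > 0, K1 is K - 1, comb(K1, T, C).
comb(K, [_|T], C) :- K > 0, comb(K, T, C).

/* ? comb(2, [a, b, c], C).   gives C = [a, b], C = [a, c] and C = [b, c] on backtracking */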
3.10.25 Write an algorithm checking whether a given point coincides with one of tile nodes of a given tree, every node of which has at most two next nodes (Definition 2.8.9). In the case that the given point does not belong to the nodes of the tree, the algorithm must concatenate that point as a new node of trees.
3.10.26 George, Tim, John and Bill are soccer fans, and Nick and Jim are supporters of teams A and B respectively. Nick likes whomever supports team A, whereas George likes whomever is a soccer fan and does not support team B. Formulate the above claims in a PROLOG database, and answer the following queries:
(a) Whom does Nick like?
(b) Does George like Bill?
(c) Whom does George like?
Solution:

/* data and claims; the teams A and B are represented by the constants a and b */
support(nick, a).
support(jim, b).
is_occupied(george, football).
is_occupied(tim, football).
is_occupied(john, football).
is_occupied(bill, football).
is_occupied(X, football) :- support(X, a).
is_occupied(X, football) :- support(X, b).
likes(nick, X) :- support(X, a).
likes(george, X) :- is_occupied(X, football), not(support(X, b)).

/* queries and answers */
? likes(nick, X).
X = nick
? likes(george, bill).
yes
? likes(george, X).
X = george
X = tim
X = john
X = bill
X = nick
3.10.27 Assume we are given a PROLOG program Q. Examine the queries:
(1) ? not(p).
(2) ? not(not(p)).
(3) ? not(not(not(p))).
...
(2n) ? not(...(not(p))...).        (2n occurrences of not)
(2n+1) ? not(...(not(p))...).      (2n+1 occurrences of not)
where p is a PrL predicate without free variables.
Solution: Assume p succeeds in program Q. Then the goal not(p) fails, and PROLOG will answer "no" in query (1), "yes" in query (2), "no" in query (3), ..., "yes" in query (2n), and "no" in query (2n+1), by the definition of not.
If p fails, then not(p) succeeds, and PROLOG will reply "yes" in query (1), "no" in query (2), "yes" in query (3), ..., "no" in query (2n), and "yes" in query (2n+1).
In general, not applied 2n times takes the same value as p, while not applied 2n+1 times takes the same value as not(p), exactly like the PL negation. The differences between not and PL negation were examined in sections 3.6.1 and 3.9.
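As a small illustration (the program below and the predicate names p and q are our own, not part of the exercise; in many PROLOG systems not is also written \+):

q.              /* q succeeds */
p :- fail.      /* p fails    */

/* ?- not(q).        answered  no  */
/* ?- not(not(q)).   answered  yes */
/* ?- not(p).        answered  yes */
/* ?- not(not(p)).   answered  no  */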
3.10.28 Write a simple parser for the analysis of the sentences "The child sleeps" and "The child eats apples".
Solution:

sent(X, Y) :- np(X, U), vp(U, Y).
np(X, Y) :- det(X, U), noun(U, Y).
vp(X, Y) :- iverb(X, Y).
vp(X, Y) :- tverb(X, U), np(U, Y).

det([the|Y], Y).
noun([child|Y], Y).
noun([apples|Y], Y).
iverb([sleeps|Y], Y).
tverb([eats|Y], Y).
3.10.29 Form a simple parser analysing sentences like: "Mary ate the cake"
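A sketch in the style of the previous solution; the proper-noun category pnoun and the particular lexicon below are our own choices, not the book's.

sent(X, Y) :- np(X, U), vp(U, Y).
np(X, Y) :- pnoun(X, Y).
np(X, Y) :- det(X, U), noun(U, Y).
vp(X, Y) :- tverb(X, U), np(U, Y).

pnoun([mary|Y], Y).
det([the|Y], Y).
noun([cake|Y], Y).
tverb([ate|Y], Y).

% e.g.  ?- sent([mary, ate, the, cake], []).
%       yes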
3.10.30 The following sentence of the English language is given: P :
George is out shopping or he is at home.
Give the corresponding clause for P as well as the conclusions which can be inferred, given a database consisting of this sentence.
Solution: Let A and B be the sentences
A :
George is out shopping.
B:
George is at home.
In the context of PrL, P takes the form:
P: A ∨ B
which has two logically equivalent sentences:
¬A → B
¬B → A
The clauses of those sentences are:
B :- not(A).     /* 1 */
A :- not(B).     /* 2 */
If the database of a PROLOG program consists of clauses /* 1 */ and /* 2 */, then the two queries:
? A.
? B.
will not be answered, since the control of the program's execution will end in a loop, as one can easily see by analysing the corresponding state space. One way to arrive at a solution is to form two different programs, consisting of /* 1 */ and /* 2 */ respectively. In that case, the corresponding state spaces of the programs for the same queries will be:

PROGRAM I
B :- not(A).     /* 1 */
? B.
? not(A).

PROGRAM II
A :- not(B).     /* 2 */
? A.
? not(B).

PROLOG answers both queries of each of the two programs with "yes" by means of the NF rule. Thus, in PROGRAM I, "not(A)" and "B" are inferred, whereas in PROGRAM II, "A" and "not(B)" are inferred. In other words, we observe that
for the sentence P there are two different, and contradictory, sets of conclusions. Problems of this kind occur when we have sentences such as P, which cannot take a Horn clause form, and for which we cannot determine whether one, or more than one, of the constituent sentences is true or false. For example, we cannot determine the truth value of either A or B. As we have already seen in Definition 1.9.7, a Horn clause may contain at most one positive literal, and that is not the case for A ∨ B, where both A and B occur without negation.
Bibliography

1. Suggestions for Further Reading

1.1 Propositional Logic

For the history and evolution of the subject, read [Heij67, Boch62] and [NeSh93], which contains an extensive bibliography. For axiomatization and rules of inference, read [Schm60, Chur56, HiAc28, Hami78, Raut79, Mend64]. For an approach at a more advanced level, see the propositional part of [Klee52]. For the tableaux method, see [Smul68, NeSh93]. For Boolean and other algebras for logic, read [RaSi70, Rasi74]. For modal logic, read [Chel80, HuCr68, NeSh93, Raut79, Schm60]. For an advanced study of the relation of modal logic to predicate logic, read [Bent83]. For intuitionistic logic, see [Brou75, Dumm77, Fitt69, NeSh93, Raut79]. For resolution, see the propositional part of [ChLe73, Dela87].
1.2 Predicate Logic

To see the beginning and evolution of the subject, see [Heij67, Boye68, Boch62] and [NeSh93], which contains an extensive bibliography. For the axiomatic method, read [Chur56, Hami78, Mend64, HiAc28], and the more advanced [Klee52]. For the tableaux method, see [Smul68, NeSh93]. For Herbrand's theorem, see [Herb30] in [Heij67], and also [ChLe73, Dela87, NeSh93, NiMa95]. For decidability, read [Acke54, Chur56, Klee52]. For resolution, [ChLe73, Dela87, NiMa95, NeSh93] and the standard [Lloy87].
1.3 Logic Programming

For logic programming in general, see [ChLe73, Dela87, Kowa79, Kowa90, Lloy87, NeSh93, NiMa95]. For PROLOG, read [Brat90, ClMc84, ClMe94, Dela87, NiMa95, StSh86]. For the treatment of negation, read [Kowa90, Lloy87, NeSh93, NiMa95, Shep88, Shep92]. For metaprogramming, see [Kowa90] and [HiLl94], which develops the new programming language GODEL. For nonmonotonic logics, see [MaTr93].
2. References

[Acke54]
Ackermann, W., Solvable Cases of the Decision Problem, North-Holland Pub. Co., 1954.
[Bent83]
van Benthem, J. F. A. K., Modal Logic and Classical Logic, Bibliopolis, Napoli, 1983.
[Besn89]
Besnard, P., An Introduction to Default Logic, Springer-Verlag, 1989.
[Beth68]
Beth, E. W., The Foundations of Mathematics; a Study in the Philosophy of Science, 3rd edn., North-Holland Pub. Co., 1968.
[Boch62]
Bochenski, J. M., Formale Logik, (in German), 2nd edn., Verlag Karl Alber, Freiburg-München, 1962. Translated as: A History of Formal Logic, (in English), Thomas, I., tr., Chelsea Pub. Co., 1970.
[Boye68]
Boyer, C. B., A History of Mathematics, Princeton University Press, 1985. (Paperback version of the 1968 Wiley edition.)
[Brat90]
Bratko, I., PROLOG Programming for Artificial Intelligence, 2nd edn., Addison-Wesley Pub. Co., 1990.
[Brou75]
Brouwer, L. E. J., Collected Works, Heyting, A., ed., North-Holland Pub. Co., 1975.
[BuSh84]
Buchanan, B. G., Shortliffe, E. H., eds., Rule-based Expert Systems: the MYCIN Experiments of the Stanford Heuristic Programming Project, Addison-Wesley Pub. Co., 1984.
[CCPe85] Coelho, H., Cotta, J. C., Pereira, L. M., How to Solve it with PROLOG, 4th edn., Ministério do Equipamento Social, Laboratório Nacional de Engenharia Civil, Lisbon, 1985.
[Chel80]
Chellas, B. F., Modal Logic, an Introduction, Cambridge University Press, 1980. The text most referred to in computer science literature.
[ChLe73]
Chang, C-L., Lee, C-T., Symbolic Logic and Mechanical Theorem Proving, Academic Press, 1973.
[Chur36]
Church, A., A Note on the Entscheidungsproblem, Journal of Symbolic Logic, 1, 1936, pp. 40-41, and correction, ibid., pp. 101-102.
[Chur56]
Church, A., Introduction to Mathematical Logic, Volume I, Princeton University Press, 1956.
[Clar78]
Clark, K. L., Negation as Failure, in [GaMi78], pp. 293-322.
[ClMc84] Clark, K. L., McCabe, F. G., MICRO-PROLOG: Programming in Logic, Prentice-Hall International, 1984.
[ClMe94]
Clocksin, W. F., Mellish, C. S., Programming in PROLOG, 4th edn., Springer-Verlag, 1994.
[Cohe90]
Cohen, J., Constraint Logic Programming Languages, Communications of the ACM, 33, 7, July 1990, pp. 52-68.
[Colm87]
Colmerauer, A., An Introduction to PROLOG III, in [Espr87], Part I, North-Holland, 1987, pp. 611-629.
[Colm90]
Colmerauer, A., An Introduction to PROLOG III, in [Lloy90], pp. 37-79.
[Col90a]
Colmerauer, A., An Introduction to PROLOG III, Communications of the ACM, 33, 7, July 1990, pp. 69-90.
[Curr63]
Curry, H. B., Foundations of Mathematical Logic, McGraw-Hill, 1963. Dover Pub., 1977.
[Dela87]
Delahaye, J.-P., Formal Methods in Artificial Intelligence, Howlett, J., tr., North Oxford Academic Publishers Ltd., London, 1987. Wiley, New York, 1987.
[DiSc90]
Dijkstra, E. W., Scholten, C. S., Predicate Calculus and Program Semantics, Springer-Verlag, 1990.
[Dumm77] Dummett, M. A. E., Elements of Intuitionism, Clarendon Press, Oxford, 1977.
[Espr87]
ESPRIT '87: Achievements and Impacts, Proceedings of the 4th Annual ESPRIT Conference, Brussels, September 28-29, 1987.
[Fitt69]
Fitting, M. C., Intuitionistic Logic, Model Theory and Forcing, North-Holland Pub. Co., 1969.
[Fitt90]
Fitting, M. C., First-Order Logic and Automated Theorem Proving, Springer-Verlag, 1990.
[GaMi78] Gallaire, H., Minker, J., eds., Logic and Data Bases, Proceedings of the Symposium on Logic and Data Bases held at the Centre d'Études et de Recherches de l'École Nationale Supérieure de l'Aéronautique et de l'Espace de Toulouse (C.E.R.T.), Toulouse, November 16-18, 1977, Plenum Press, New York, 1978.
[Hami78]
Hamilton, A. G., Logic for Mathematicians, Cambridge University Press, 1978. There is a rev. edn., Cambridge University Press, 1988.
[Heij67]
van Heijenoort, J., From Frege to Gödel. A Source Book in Mathematical Logic, 1879-1931, Harvard University Press, 1967. It contains the most important papers in logic from 1879 to 1931.
[Herb30]
Herbrand, J., Recherches sur la théorie de la démonstration, thesis at the University of Paris. Chapter 5 translated as: Investigations in Proof Theory: the Properties of True Propositions, Dreben, B., van Heijenoort, J., tr., in [Heij67], pp. 525-581.
[HiAc28]
Hilbert, D., Ackermann, W., Grundzüge der theoretischen Logik, Springer-Verlag, Berlin, 1928. The 2nd edn. translated as: Principles of Mathematical Logic, Hammond, L. M., Leckie, G. G., Steinhardt, F., tr., Luce, R. E., ed., Chelsea Pub. Co., 1950.
[HiLl94]
Hill, P., Lloyd, J. W., The GODEL Programming Language, MIT Press, 1994.
[HuCr68] Hughes, G. E., Cresswell, M. J., An Introduction to Modal Logic, Methuen and Co. Ltd, 1968. Routledge, 1989. [Klee52]
Kleene, S. C., Introduction to Metamathematics, corrected reprint of the 1952 edn., North-Holland Pub. Co., 1971. This book contains unexcelled explanations of Gödel's theorems.
[Kowa79] Kowalski, R. A., Logic for Problem Solving, North-Holland Pub. Co., 1979, reprinted, 1986.
[Kowa90] Kowalski, R. A., Problems and Promises of Computational Logic, in [Lloy90], pp. 1-36.
[Lloy87]
Lloyd, J. W., Foundations of Logic Programming, 2nd edn., Springer-Verlag, 1987. This is the standard exposition of the subject.
[Lloy90]
Lloyd, J. W., ed., Computational Logic: Symposium Proceedings, Brussels, November 13-14, 1990, ESPRIT Basic Research Series, Springer-Verlag, 1990.
[MaTr93] Marek, V. W., Truszczyński, M., Nonmonotonic Logic: Context-Dependent Reasoning, Springer-Verlag, 1993.
[Mend64] Mendelson, E., Introduction to Mathematical Logic, Van Nostrand Company, Inc., 1964. There is a 3rd edn., Wadsworth & Brooks-Cole, 1987.
[Meta85]
Metakides, G., Mathematical Logic, (in Greek), University of Patras, Greece, 1985.
[Meta92]
Metakides, G., From Logic to Logic Programming, (in Greek), Kardamitsa Pub., Athens, Greece, 1992.
[Mink88]
Minker, J., ed., Foundations of Deductive Databases and Logic Programming, Morgan Kaufmann Pub., Los Altos, CA, 1988.
[Mitc86]
Michie, D., ed., Expert Systems in the Micro-Electronic Age, Edinburgh University Press, 1979, reprinted 1986.
[Mosc92]
Moschovakis, Y. N., ed., Logic from Computer Science: Proceedings of a Workshop held November 13-17, 1989, MSRI Publications 21, Springer-Verlag, 1992.
[NeSh93]
Nerode, A., Shore, R. A., Logic for Applications, Springer-Verlag, 1993.
[Nils71]
Nilsson, N. J., Problem-solving Methods in Artificial Intelligence, McGraw-Hill, 1971.
[NiMa95] Nilsson, U., Maluszynski, J., Logic, Programming and PROLOG, 2nd edn., John Wiley and Sons, 1995.
[OlRu87] Olson, J. R., Rueter, H. H., Extracting Expertise from Experts: Methods for Knowledge Acquisition, Technical Report, University of Michigan, Cognitive Science and Machine Intelligence Laboratory, No. 13, 1987.
[Quin86]
Quinlan, J. R., Discovering Rules by Induction from Large Collections of Examples, in [Mitc86], pp. 168-201.
[Rasi74]
Rasiowa, H., An Algebraic Approach to Non-Classical Logics, North-Holland / American Elsevier Pub. Co., 1974. In this book, numerous logic calculi are studied algebraically.
[RaSi70]
Rasiowa, H., Sikorski, R., The Mathematics of Metamathematics, 3rd edn., PWN-Polish Scientific Publishers, 1970. The standard book for an algebraic approach to logic.
[Raut79]
Rautenberg, W., Klassische und nichtklassische Aussagenlogik, Vieweg, 1979.
[Reit78]
Reiter, R., On Closed World Data Bases, in Logic and Databases, Plenum Press, New York, 1978.
[Robi65]
Robinson, J. A., A Machine-Oriented Logic Based on the Resolution Principle, Journal of the Association for Computing Machinery, 12, 1, 1965, pp. 23-41.
[Schm60] Schmidt, H. A., Mathematische Gesetze der Logik I, Springer-Verlag, 1960. A most comprehensive and elaborated text for propositional logic in the German literature.
[Schw71]
Schwabhäuser, W., Modelltheorie, Bd. I, Hochschultaschenbücher, Band 813, 1971.
[Shap87]
Concurrent PROLOG: Collected Papers, Shapiro, E., ed., vols. 1 and 2, MIT Press, 1987.
[Shep88]
Shepherdson, J., Negation in Logic Programming, in [Mink88], pp. 19-88.
[Shep92]
Shepherdson, J., Logics for Negation as Failure, in [Mosc92].
[Smul68]
Smullyan, R. M., First Order Logic, Springer-Verlag, 1968. Dover Publ., 1995.
[StSh86]
Sterling, L., Shapiro, E., The Art of PROLOG: Advanced Programming Techniques, MIT Press, 1986.
[Thay88]
Thayse, A., ed., From Standard Logic to Logic Programming: Introducing a Logic Based Approach to Artificial Intelligence, John Wiley & Sons, 1988.
[Turi37]
Turing, A. M., On Computable Numbers, with an Application to the Entscheidungsproblem, Proc. London Math. Soc., 42, 1937, pp. 230-265. A correction, ibid., 43, 1937, pp. 544-546.
[Turn84]
Turner, R., Logics for Artificial Intelligence, Ellis Horwood Series in Artificial Intelligence, Halsted Press, 1984.
[Watt90]
Watt, D. A., Programming Language Concepts and Paradigms, Prentice Hall, 1990.
[WeKu84] Weiss, S. M., Kulikowski, C. A., A Practical Guide to Designing Expert Systems, Chapman and Hall Ltd., London, 1984.
[WiBe89] Winston, P. H., Horn, B. K. P., LISP, 3rd edn., Addison-Wesley, Reading, MA, 1981.
Xanthakis, S., PROLOG, Programmin9 Techniques, (in Greek), New Technology Editions, Athens, 1990.
[Zeno76]
Diogenes Laertius, Lives of Eminent Philosophers, Diogenes Laertius, Volume II, Book VII, Zeno, para. 76, Hicks, R. D., tr., Loeb Classical Library, pp. 182, 184 (Greek), pp. 183, 185 (English), Heinemann, London, and Harvard University Press, 1925.
Index of Symbols

∧ 2, 17, 32, 96, 137, 260
¬ 2, 17, 32, 96, 137, 258, 260, 262
∨ 2, 17, 32, 96, 137, 260
→ 3, 17, 32, 96, 137, 224, 260
↔ 3, 17, 32, 96, 137, 260
, 6, 47, 96, 218
( 6, 96
) 6, 96
⊢B 36, 163
⊢ 39, 104, 163
□ 46-47
; 47, 219
:- 47-48, 218
∀ 96, 104, 137
∃ 96, 137
⊨ 14, 120, 163
⊨CWA 274
⊨NM 274
= 97, 108, 255, 259
< 97, 256, 259
> 256, 259
>= 256, 259
=< 256, 259
=\= 256, 259
\= 256, 259, 282
+ 97, 256-259
* 97, 256, 258-259
- 256, 259
/ 256, 259
. 206, 215-216, 229-231
? 219-220
[] 229-231
! 240
/* */ 236
AI 275
allowed_transition 238
append 250-251
assert 217, 252
asserta 253
assertz 253
atom 257
atomic 257
CNF 29, 44, 125
Compl(P) 267
Con 21, 125
ConCWA 274
consult 253, 255
CST 138, 141
CWA 262
div 256
DNF 29, 125
DS 156
f 10-11
fail 271
find_path 240
free(x, t, a) 103
GHC 284
gt 206
GU 155
Int 24
integer 257
is 255-256, 259
KES 278
L* 120
member 249
MGU 156
mod 256
NF 264
nl 211
NM 274
no 212, 263
nonvar 257
not 272
OC 157
op 257
PL 5
PNF 125-126
PrL 96
R(S) 50
Rⁿ(S) 51
R*(S) 51
read 253-255
real 211, 257
retract 217, 252-253
SNF 125, 129
Solution_Path 239
subform(A) 9
t 10-11
Taut 22
var 257
write 211, 253-255
yes 212
Index of Terms

adequacy, of logical connectives 27 adequate set 28 agrees, interpretation, with branch 164 agrees, truth valuation, with branch 56 and 6 algebra, truth values 10 algorithm, construction of a CNF 44-45 algorithm, construction of a CST 140-141 algorithm, construction of an Herbrand universe 132 algorithm, construction of a PNF 127-128 algorithm, construction of a semantic tree 145-146 algorithm, construction of an SNF 129 algorithm, depth-first search 239-240 algorithm, Herbrand 213 algorithm, PL satisfiability of a sentence 150-152 algorithm, reduction to clausal form 153-154 algorithm, unification 154-155, 157-158, 232-235 allowed transition 238 alphabet, PROLOG 222 alphabet, propositional logic 5-6 alphabetic listing problem 300-303 anonymous variables 223 acquisition systems, knowledge 276 arc, of a tree 144 Aristotelian, syllogistic laws 20 Aristotelian, world 14 arithmetic, language of 97, 134 arithmetic, PROLOG 256
artificial intelligence 275 arity, of a predicate 96 ARITY-PROLOG 274 associativity 44, 106 associativity, operator 259 atomic formula 97 atomic propositions 6 atomic tableau, predicate logic 136-137 atomic tableau, propositional logic 32 atomic term 224 atoms 6, 97 atoms, PROLOG 222, 224 automatic theorem proving 31, 131 axiomatic proofs 38 axiomatic proofs, compactness theorem 64 axiomatic proofs, completeness theorem 64, 167 axiomatic proofs, soundness theorem 64, 167 axiomatic system, predicate logic 104 axiomatic system, propositional logic 38, 42 axioms, of equality 108 axioms, selection of 42 backtracking 209, 236-240, 242 BASIC 209-210 BASIC program 211 Beth-deduction 60 Beth-proof 36, 163 Beth-provable 36, 42, 142, 163-164 Beth-refutable 36, 142 Beth-refutation 36 Beth semantic tableau 31 binding, of connectives 9 binding, of variables 232 body, of a clause 48 body, of a list 229 321
body, of a rule 216 Boolean algebra 11 Boolean valuation 10-11 bound occurences, of variables 99 bound of a recursion 248 bound variable 99 boundary conditions, for a relation 247 branch, complete, of a semantic tree 146 branch, contradictory, of a semantic tableau 34 branch, contradictory, of a semantic tree 145 branch, tree 144 Brouwer 18 calculus, )~ 214 characters, special, PROLOG 222. check, occur 157 Church 131 clause, empty 46 clause, Horn 48, 111,204-205, 215 clause, predicate logic 110 clause, propositional logic 46 clause, unit 48 clauses, set of 46, 131 closed formula 100 closed world assumption 213, 262 COBOL 209 comma 6, 47, 96, 218 commands 209 commands, imperative 210 comments, PROLOG programs 236 commutativity 44, 107 compactness theorem, axiomatic proofs 64 compactness theorem, deductions 64 compactness theorem, tableaux proofs 165 complete branch, of a tree 146 complete definition, of a predicate in a program 267 complete induction 7 complete systematic tableau 138, 141 complete tableau 34 completeness theorem, axiomatic proofs, predicate logic 167 completeness theorem, axiomatic proofs, propositional logic 64 completeness theorem, deductions 62
completeness theorem, resolution 65-66, 165 completeness theorem, semantic tableaux 59 completeness theorem, tableaux proofs 164 completion of a program 267 complex queries 220 composition, of substitutions 101 compound propositions 6, 20 concatenate 250 concurrent PROLOG 284 conditions, boundary, for a relation 247 conditions, marginal, for a relation 247 conjunction 2, 6, 46 conjunctive normal form, propositional logic 29, 44 conjunctive queries 220 connectives, logical 6, 10, 96 consequence, predicate logic 125 consequence, propositional logic 21 consistent, maximal set 77 consistent, semantically 23 consistent, sentence 122 consistent, set of sentences 122 constants, predicate logic 94, 97 constants, PROLOG 222, 224 constraint logic programming language 282 contradiction 14 contradictory branch, of a semantic tableau 34 contradictory branch, of a tree 145 contradictory tableau 34, 142 contrapositive law 20 contrapositive proof 59 crime solving 294-295 cut 240-245, 284 cut-failure combination 272 d a t a 204-205, 210, 215 data, addition 252-253 data, deletion 253 data, ill-defined 218, 242 data, lists 228 data, management queries 220 data, syntax 222 data, verification queries 220 database 216, 219, 276
Index decision problem 168-169 decision procedure 168 declarations, program 217 declarative interpretation of a program 235-238 deduction, Beth 60 deduction, compactness theorem 64 deduction, completeness theorem 62 deduction, soundness theorem 61 deduction, theorem of, predicate logic 110 deduction, theorem of, propositional logic 40 definition, recursive 245 degree, of a predicate 96 A-PROLOG 282 De Morgan's laws 20, 44 dense order 124 depth-first search strategy 238-240 derivation, from axioms 43 descendant 63 descendant, immediate 63 descriptive language 210 deterministic nature of PROLOG 238 diagnosis system 278-281 disagreement set 155-156 disjunction 2, 6, 46 disjunctive normal form, propositional logic 29 distributivity 44, 106-107, 126-127 double negation law 20 dragon 198-200 dual, to a quantifier 97 efficiency of a program 243 elementary extension, of an interpretation 121 elementary extension, of a language 120 empty clause 46 empty list 229 empty set of clauses 47 empty substitution 100 Epicurus 203 equal by definition 13 equality 108, 255-256 equivalence 3, 6, 15 equivalences, substitution of 39, 106 excluded middle 14, 20
323 existential quantifiers 96 expert systems 275 extension, of a truth valuation 12 extension, elementary, of an interpretation 121 extension, elementary, of a language 120 extension, non-elementary, of an interpretation 121 extension, non-elementary, of a language 121 fact, entering a new 252 fact, expert system 271 fact, predicate logic 112 fact, program 216-217 fact, propositional logic 48 fail, for a goal 48, 167, 213 failure-cut combination 272 false, logically 14 false, sentence 117-118 family tree 244-248 final node 144 final state 238-239 finding a path 239 finite degree, tableau 63 first step 6 flow, of a program 212 FORTRAN 204, 209 FORTRAN program 14 formula, closed 100 formula, matrix of 126 formula, predicate logic 94, 97, 106 formula, prefix of 126 formula, prenex 126 free occurences, of variables 99 free variable 99 free variable, for a term in a formula 103 Frege 42 full-stop symbol 206, 215-216, 229-231 functional programming 2, 214 functions, PROLOG 222, 225 functions, symbols of 96 functor 230 general programs 268 general unifier 155, 232-235 general unifier, most 156 generalized predicate 230
generalization rule, predicate logic 104 geometry, Euclidean 192-195, 200-201 Gentzen 31 goal, definite 48 goal, fail 167, 213 goal, normal 264 goal, predicate logic 111 goal, program 48 goal, PROLOG 216 goal, propositional logic 48 goal, succeed 48, 167 Goedel 167 ground instance, of a clause 111 ground term 99 group 184-185 guarded Horn clauses 284 Hanoi, towers of 260-261 head, of a clause 48 head, of a list 229, 231,248 head, of a rule 216, 218 Heraklith 93 Herbrand interpretation 131, 134, 150 Herbrand theorem 135, 149 Herbrand universe 132 heuristics 276 high level symbol processing languages 277 Hilbert 18 Hintikka's lemma 56-57 Horn clause, guarded 276 Horn clause, predicate logic 111, 204-205, 215 Horn clause, propositional logic 48 IC-PROLOG 282 if and only if 6 if/then 6 immediate descendant 63 imperative commands 210 implication 3, 6 incomplete tableau 34 inconsistent, semantically 23 inconsistent, sentence 122 inconsistent, set of sentences 122 indirect proof 59 induction, complete 7 induction, course of values 7 induction, principle of
mathematical 6-7 inductive scheme for tableaux 55 inductive step 6 inequality, PROLOG 256 inference mechanism 204, 236, 276 infix operators 258 initial state 238-239 instance, ground, of a clause 111 instantiation of variables 154, 206, 232 interaction with program 253-255 interface with system 277 intermediate state 238 internal representation of a list 230 interpretation, agrees with branch 164 interpretation, elementary extension of 121 interpretation, declarative 217 interpretation, Herbrand 131, 134, 150 interpretation, predicate logic 95, 114-116 interpretation, procedural 217 interpretation, propositional logic 24 interpretation, theory of the 120 intuitionistic logic 18 keyboard interaction 253 kidney disease system 278 knowledge aquisition systems 277 knowledge based systems 276 knowledge engineer 277 knowledge management 276 knowledge representation 276 Koenig's lemma 63 Kripke semantics 14 )~-calculus 214 language, arithmetic 97, 134 language, descriptive 210 language, elementary extension of 120 language, imperative 210 language, predicate logic 96 language, programming 204 language, propositional logic 6 language, relational 276 law, contrapositive 20 law, De Morgan 20 law, double negation 20 law, excluded middle 20
Index law, Pierce 35 law, syllogistic 20 law, transportation 20 linear resolution 213 LISP 228, 277 list, body 229 list, concatenation 249-251 list, empty 229 list, head 229, 231,248 list, management 248 list, non-empty 229 list, PROLOG 228-231 list, subsets 249 list, tail 229, 231,248 list, tree structure 231 list, unification 248 literal, predicate logic 110 literal, propositional logic 44 logic, interpretation of a program 235 logic, intuitionistic 18 logic, modal 14 logic, nonmonotonic 273-275 logic, predicate 93 logic programming 31, 131, 163, 203-204, 214 logic programming, constraint 282 logic, propositional 5 logical connectives 6, 10, 96 logical length, of a proposition 6 logically equivalent 15 logically false 14 logically true 14 logically true, formula 124 loops, program 242, 246 management, data, predicates 252 management, data, queries 220 management, knowledge 268 management, lists 248 marginal conditions, for a relation 247 matching 232-235 matrix of a formula 126 maximal consistent set 77 metalanguage 21 metaprogramming 283 metaproposition 21 MICRO-PROLOG 282 middle, excluded 14, 20 modal logic 14
325 modal operators 14 model, of a sentence 120 modus ponens 38, 42, 104 Morgan, De 20 most general unifier 156 mu-PROLOG 282 murder problem 294-295 neck symbol 47-48, 215 negation 2, 6 negation, by failure 263, 264 negation by failure, rule 264 negation, double 20 negation, PROLOG 262 negation, safe 267 negation, unsafe 267 negative knowledge 263, 270 next node 143 node, final 144 node, next 143 node, previous 143 node, tableau 34 node, tree 143 non-elementary extension, of an interpretation 121 non-elementary extension, of a language 121 non-empty list 229 nonmonotonic logic 273-275 non-satisfiable, proposition 23 non-satisfiable, sentence 122 non-satisfiable, set of sentences 122 non-verifiable, clause 46 non-verifiable, proposition 14, 23 non-verifiable, sentence 122 non-verifiable, set of sentences 122 normal forms 27 normal forms, conjunctive, propositional logic 29 normal forms, disjunctive, propositional logic 29 normal forms, prenex 125 normal forms, Skolem 125, 129 normal, goal 264 normal, program 268 normalization of variables 160 not 6 numbers, PROLOG 222, 224 nu-PROLOG 282 object-oriented programming 214
objects, PROLOG 222 objects, type checking 257 OCCAM 283 occur check 157 occurences, bound, of variables 99 occurences, free, of variables 99 operand position 258 operand priority 259 operation mechanism, PROLOG 232 operators, associativity 259 operators, infix 258 operators, modal 14 operators, position 258 operators, postfix 258 operators, prefix 258 operators, priority 257-258 operators, PROLOG 257-260 or 6 origin, of a tree 143 parallelism 283-284 PARLOG 284 parentheses 6, 96 partial validity 95 PASCAL 204, 209 path, finding 239 path, solution 239 period symbol 206, 215-216, 229-231 Pierce's law 35 Pompei 205-209 position of an operator 258 postfix operators 258 predicate logic 95 predicate logic, axioms 104 predicate, arity of 96 predicate, compound 225 predicate, data management 252-253 predicate, degree of 96 predicate, generalized 230 predicate, symbols 95-96 predicate, tree structure 226-228 predicates 94, 225-226 predicates, built-in 252 predicator 229 prefix operators 258 prefix, of a formula 126 prenex normal form 125 prenex formula 126 previous node 143
priority for operands 259 priority for operators 257-258 problem, decision 168-169 procedural interpretation 217 procedural part, of a program 216, 235 procedure, decision 168 procedure, program 217 program 112, 209 program, data 215 program, declarative interpretation 235-238 program, efficiency 243 program, flow 212, 252 program, interaction 253-255 program, logic interpretation 235-238 program, loops 242, 246 program, normal 268 program, procedural interpretation 235-238 program, queries 215 program, stratified 269 program, structure 215 program, updating 252 programming, functional 2, 214 programming, constraint logic 282 programming, language 204 programming, logic 31, 131, 163, 203-204 programming, object-oriented 214 PROLOG 43, 160, 163, 203-211, 214-215 PROLOG, alphabet 222 PROLOG, atoms 222, 224 PROLOG, constants 222, 224-226 PROLOG, deterministic nature 238 PROLOG, dialects 282 PROLOG, functions 222, 225-226 PROLOG, lists 228 PROLOG, negation 262 PROLOG, numbers 222, 224 PROLOG, operation mechanism 232 PROLOG, predicates 225-226 PROLOG, pure 285 PROLOG, real 285 PROLOG, special characters 222, 224 PROLOG, through PROLOG 240 PROLOG, tree structures 226-228 PROLOG, variables 222-223 PROLOG I 282
Index PROLOG II 282 PROLOG III 224, 282 proof 40 proof, axiomatic, predicate logic 109 proof, axiomatic, propositional logic 38 proof, Beth 36 proof, contrapositive 59 proof, indirect 59 proof, resolution, predicate logic 160 proof, resolution, propositional logic 31 proof, systematic tableaux 136 property, of a proposition 7 propositional logic 5 propositional logic, axioms 38, 42 propositions, atomic 6 propositions, compound 6 Protagoras 1 provable, Beth 36, 42, 142, 164 provable, by resolution 52 provable, from a set of formulae 109 provable, predicate logic 109 provable, propositional logic 40 proving, automatic theorem 31, 131 pure PROLOG 285 quantifiers 95 quantifiers, existential 96 quantifier, dual to 97 quantifiers, universal 96 queries, complex 220 queries, conjunctive 220 queries, data management 220 queries, data verification 220 queries, of a program 215-216, 219-222 read-only variables 284 real PROLOG 285 recursion, bound of 248 recursive definitions 229, 240, 244248, 251,260, 296-298, 300 recursive nature of unification 233 recursive relations 245 reflexivity 108 refutable, Beth 36, 142 refutable, by semantic tree 146 refutation, Beth 36 relational language 284 relations 210 relations, boundary conditions 247 relations, marginal conditions 247
327 relations, n-ary 116 relations, recursive 240, 245 renaming substitution 103 resolution, completeness theorem 65-66, 165 resolution, linear 213 resolution, NF 264 9 resolution, predicate logic 153, 160-163 resolution proof, predicate logic 160 resolution proof, PROLOG 204-209, 232 resolution proof, propositional logic 31, 43, 48, 52 resolution, rule of 49-50 resolution, soundness theorem 65, 165 resolvent, of clauses 50 Robinson 131, 158 rule, of resolution 49-50 rules 42, 216, 218, 268 rules, entering new 253 safe, negation 267 satisfiable, proposition 14, 23 satisfiable, sentence 122 satisfiable, set of sentences 122, 165 satisfiable, sequence 62 satisfy, truth valuation, of sequence 62 screen interaction 254 selection function 213 semantic tableau 31, 34 semantic tableaux, completeness theorem 59 semantic tableaux, soundness theorem 58 semantic tree 144 semantic tree, refutable by 146 semantically consistent 23 semantics, Kripke 14 semantics, propositional logic 5-6, 10 sentence 100, 112 sentence, consistent 122 sentence, false 117-118 sentence, satisfiable 122 sentence, true 117-118 sentence, universal 128 sentence, verifiable 122 sequences of special characters, PROLOG 222, 224 set, disagreement 155-156 set, of sentences, consistent 122 set, of sentences, inconsistent 122
328
Index of Terms
set, of sentences, non-satisfiable 122 set, of sentences, non-verifiable 122 set, of sentences, satisfiable 122 set, of sentences, verifiable 122 set-theoretic representation, of a proposition 46 set-theoretic representation, of a sentence 130-131, 166, 205 shells 277 short truth table 19 signed formula 32 Skolem function 129 Skolem normal form 125, 129 solution path 239 soundness theorem, axiomatic proofs, predicate logic 109, 119 soundness theorem, axiomatic proofs, propositional logic 64 soundness theorem, deductions 61 soundness theorem, NF 267, 269 soundness theorem, resolution 65, 165 soundness theorem, semantic tableaux 58 soundness theorem, tableaux proofs 164 special characters, PROLOG 222, 224 state, final 238 state, initial 238 state, intermediate 238 state space 238, 247-248, 250 stratification 268 stratified, program 269 structure, program 215 structures, from predicates 225 structures, tree, list 231 structures, tree, PROLOG 226-228, 232-235 subform(A) 9 subformula 9, 98 subgoal, predicate logic 111 subgoal, propositional logic 48 subterm 98 substitution 100, 204-209, 232-235 substitution, empty 100 substitution of equivalences 39, 44, 106 substitution, renaming 103 substitution set 100 substitutions, composition of 101 succeed, for a goal 48, 167
syllogistic laws 20 symbols, predicate logic 95-96 symbols, propositional logic 6 symmetry 108 syntactic analysis 226-228 syntax, data 222 syntax, predicate logic 96-97 syntax, propositional logic 5-6 system, automatic knowledge aquisition 277 system, knowledge based 276 systematic tableau, complete 138, 141 systematic tableaux proofs 136 tableau, atomic 136-137 tableau, complete 34 tableau, complete systematic 138, 141 tableau, contradictory 34, 142 tableau, incomplete 34 tableau, proof, compactness 165 tableau, proof, completeness 164 tableau, proof, soundness 164 tableau, proof, systematic 136 tableau, semantic 31, 34 tables, truth 10-11, 17 tail, of a clause 48 tail, of a list 229, 231,248 tail, of a rule 216, 218 tautology 14, 22, 44, 105 term, atomic 224 term, ground 99 term, predicate logic 97 theorem proving, automatic 31, 131,214 theorems 42 theory, of the interpretation 120 theory, of types 214 thief 236-238 towers of Hanoi 260-261 transition, allowed 238 transitivity 108 transportation law 20 trapezium 200-201,288 tree 143 tree, refutable by semantic 146 tree, semantic 145 tree, state space 247-248, 250 tree, structures, PROLOG 226-228, 232-235
Index tree, structures, list 231 true, logically 14 true, logically, formula 124 true, sentence 117-118 truth table 10-11, 17 truth table, short 19 truth valuation 11 truth valuation, agrees with branch 56 Turing 131 TURBO-PROLOG 282, 274 type checking 257 types, theory of 214 unification algorithm 154-155, 157-158, 232-235 unification, lists 248 unification, predicate logic 153, 204-209 unification, process 223, 228, 232-235 unifier 156 unifier, general 155, 232-235 unifier, most general 156 unit clause 48 universal quantifiers 96 universal sentence 128 universal validity 95 universe, Herbrand 132 universe, of an interpretation 116
unsafe, negation 267 unused, node 34 updating variables 232 used, node 34 validity, partial 95 validity, universal 95 valuation, Boolean 11 valuation 10 valuation, truth 11 variables 94, 97, 223 variables, anonymous 223 variables, bound 99 variables, bound occurences 99 variables, free 99 variables, free for a term in a formula 103 variables, free occurences 99 variables, normalization 160 variables, PROLOG 223 variables, read-only 284 variables, updating 232 variants, of sets of formulae 103 verifiable, proposition 14, 23 verifiable, sentence 122, 132 verifiable, set of sentences 122 verification queries, data 220 warehouse problem 290-292 well-formed expression 6