Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, TU Dortmund University, Germany
Madhu Sudan, Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max Planck Institute for Informatics, Saarbruecken, Germany
6486
Michael Johnson Dusko Pavlovic (Eds.)
Algebraic Methodology and Software Technology 13th International Conference, AMAST 2010 Lac-Beauport, QC, Canada, June 23-25, 2010 Revised Selected Papers
Volume Editors

Michael Johnson
Macquarie University, Sydney, Australia
E-mail: [email protected]

Dusko Pavlovic
University of Oxford, Oxford, UK
E-mail: [email protected]
Library of Congress Control Number: 2010941005
CR Subject Classification (1998): D.2, F.3, D.3, F.4.1, D.2.4, D.1
LNCS Sublibrary: SL 2 – Programming and Software Engineering
ISSN 0302-9743
ISBN-10 3-642-17795-6 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-17795-8 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. springer.com © Springer-Verlag Berlin Heidelberg 2011 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper 06/3180
Preface
This volume contains the papers presented at AMAST 2010, the 13th International Conference on Algebraic Methodology and Software Technology. The major goal of the AMAST conferences is to promote research that may lead to the setting of software technology on a firm, mathematical basis. Toward this goal, the conference supports a broad cooperation between academia and industry. The virtues of a software technology developed on a mathematical basis include the provision of software that is:

1. Correct, and the correctness can be proved mathematically
2. Safe, so that it can be used in the implementation of critical systems
3. Portable, i.e., independent of computing platforms and language generations
4. Evolutionary, i.e., it can be self-adaptable and evolves with the problem domain
5. Secure, so that its network and user interactions can be predicted and controlled

The previous editions of the AMAST conference were held at Iowa City (1989, 1991), Twente (1993), Montreal (1995), Munich (1996), Sydney (1997), Manaus (1999), Iowa City (2000), Reunion Island (2002), Stirling (2004), Saaremaa (2006) and Urbana-Champaign (2008). Each conference over the last fifteen years was accompanied by a proceedings volume published in the Springer Lecture Notes in Computer Science series.

This 13th edition of AMAST took place during June 23–25, 2010 in Lac-Beauport, Québec, Canada. It was colocated with MPC 2010, the 10th International Conference on Mathematics of Program Construction, held during June 21–23, 2010.

There were 33 submissions. Each submission was reviewed by at least three, and on average 3.9, Program Committee members. The committee decided to accept ten full-length research presentations and four system demonstrations. The program also included two invited talks, given by Jane Hillston (Edinburgh University) and Catuscia Palamidessi (INRIA). Jane Hillston also provided a paper for Part 1 of this volume. The contributed research papers are in Part 2, and Part 3 contains the system demonstrations.

We are grateful to the members of the Program Committee and the external referees for their care and diligence in reviewing the submitted papers, and to the staff of Springer-Verlag. The review process and compilation of the proceedings were greatly helped by Andrei Voronkov's EasyChair system.
August 2010
Michael Johnson Dusko Pavlovic
Conference Organization
Program Chairs
Michael Johnson, Macquarie University, Australia
Dusko Pavlovic, Kestrel Institute, USA and University of Oxford, UK
Program Committee
Paolo Baldan, Dipartimento di Matematica Pura e Applicata, Università di Padova, Italy
Gilles Barthe, IMDEA Software, Spain
Michel Bidoit, INRIA Saclay, France
Manfred Broy, TUM, Germany
Roberto Bruni, University of Pisa, Italy
Iliano Cervesato, Carnegie Mellon University - Qatar Campus, Qatar
Adriana Compagnoni, Stevens Institute of Technology, USA
José Luiz Fiadeiro, University of Leicester, UK
Kokichi Futatsugi, JAIST, Japan
Rob Goldblatt, Victoria University of Wellington, New Zealand
Ichiro Hasuo, RIMS, Kyoto University, Japan
Rolf Hennicker, Ludwig-Maximilians-Universität München, Germany
Hélène Kirchner, INRIA, France
Barbara König, Universität Duisburg-Essen, Germany
Narciso Martí-Oliet, Universidad Complutense de Madrid, Spain
Michael Mislove, Tulane University, USA
Larry Moss, Department of Mathematics, Indiana University, Bloomington, USA
Till Mossakowski, DFKI Lab Bremen, Germany
Peter D. Mosses, Swansea University, UK
Andrzej Murawski, University of Oxford, UK
Uwe Nestmann, Technische Universität Berlin, Germany
Fernando Orejas, UPC, Spain
Leila Ribeiro, Universidade Federal do Rio Grande do Sul, Brazil
Grigore Roşu, University of Illinois at Urbana-Champaign, USA
Jan Rutten, CWI, The Netherlands
Lutz Schröder, DFKI Bremen and Universität Bremen, Germany
Wolfram Schulte, Microsoft Research, USA
Douglas Smith, Kestrel Institute, USA
Carolyn Talcott, SRI International, USA
Andrzej Tarlecki, Institute of Informatics, Faculty of Mathematics, Informatics and Mechanics, Warsaw University, Poland
Varmo Vene, University of Tartu, Estonia
E.P. de Vink, Technische Universiteit Eindhoven, The Netherlands
James Worrell, University of Oxford, UK
External Reviewers
Ludwig Adam, Sebastian Bauer, Laura Bocchi, Jewgenij Botaschanjan, Marzia Buscemi, Yuki Chiba, Mihai Codescu, Andrea Corradini, Silvia Crafa, Vijay D'silva, Tobias Eibach, Cristian Ene, Jean-Christophe Filliâtre, Reiner Hähnle, Daniel Hedin, Torsten Hildebrandt, Clément Hurlin, Dieter Hutter, Stefan Kiefer, Ekaterina Komendantskaya, Dexter Kozen, César Kunz, Alberto Lluch Lafuente, Masaki Nakamura, Kazuhiro Ogata, Catuscia Palamidessi, Kirstin Peters, Ricardo Peña, Erik Poll, Bernhard Reus, Mehrnoosh Sadrzadeh, Francesco Tapparo, David Trachtenherz, Virginie Wiels
Local Organizers
Claude Bolduc, Jules Desharnais, and Béchir Ktari (Université Laval, Canada)
Sponsoring Institutions
– Université Laval, Québec, Canada, http://www.ulaval.ca
– Centre de recherches mathématiques, Université de Montréal, Montréal, Canada, http://www.crm.umontreal.ca
Table of Contents
Part 1. Invited Paper

Structural Analysis for Stochastic Process Algebra Models (Invited Talk) ..... 1
   Jie Ding and Jane Hillston

Part 2. Contributed Research Papers

Verification of Common Interprocedural Compiler Optimizations Using Visibly Pushdown Kleene Algebra ..... 28
   Claude Bolduc and Béchir Ktari

On the Expressiveness of the π-Calculus and the Mobile Ambients ..... 44
   Linda Brodo

Integrating Maude into Hets ..... 60
   Mihai Codescu, Till Mossakowski, Adrián Riesco, and Christian Maeder

Model Refinement Using Bisimulation Quotients ..... 76
   Roland Glück, Bernhard Möller, and Michel Sintzoff

Type Fusion ..... 92
   Ralf Hinze

Coalgebraic Semantics for Parallel Derivation Strategies in Logic Programming ..... 111
   Ekaterina Komendantskaya, Guy McCusker, and John Power

Learning in a Changing World, an Algebraic Modal Logical Approach ..... 128
   Prakash Panangaden and Mehrnoosh Sadrzadeh

Matching Logic: An Alternative to Hoare/Floyd Logic ..... 142
   Grigore Roşu, Chucky Ellison, and Wolfram Schulte

Program Calculation in Coq ..... 163
   Julien Tesson, Hideki Hashimoto, Zhenjiang Hu, Frédéric Loulergue, and Masato Takeichi

Cooperation of Algebraic Constraint Domains in Higher-Order Functional and Logic Programming ..... 180
   Rafael del Vado Vírseda

Part 3. System Demonstrations

Proving Termination Properties with mu-term ..... 201
   Beatriz Alarcón, Raúl Gutiérrez, Salvador Lucas, and Rafael Navarro-Marset

BAL Tool in Flexible Manufacturing Systems ..... 209
   Diego Pérez Leándrez, M. Carmen Ruiz, J. Jose Pardo, and Diego Cazorla

A Complete Declarative Debugger for Maude ..... 216
   Adrián Riesco, Alberto Verdejo, and Narciso Martí-Oliet

An Assume Guarantee Approach for Checking Quantified Array Assertions ..... 226
   Mohamed Nassim Seghir

Author Index ..... 237
Structural Analysis for Stochastic Process Algebra Models

Jie Ding¹ and Jane Hillston²

¹ School of Information Engineering, Yangzhou University, Yangzhou, 225009, China
  [email protected]
² LFCS, School of Informatics, Edinburgh University, UK
  [email protected]
Abstract. Stochastic process algebra models have been successfully used in the area of performance modelling for the last twenty years, and more recently have been adopted for modelling biochemical processes in systems biology. Most research on these modelling formalisms has been on quantitative analysis, particularly the derivation of quantified dynamic information about the system modelled in the face of the state space explosion problem. In this paper we instead consider qualitative analysis, looking at how recent developments to tackle state space explosion in quantified analysis can also be harnessed to establish properties such as freedom from deadlock in an efficient manner.
1 Introduction
Stochastic process algebras were introduced in the early 1990s as a formal modelling formalism for performance modelling which allowed continuous time Markov chains (CTMCs) to be specified in a rigorous, compositional manner (see for example [1, 10, 12]). Like all discrete state modelling formalisms, the process algebra models suffered from problems of state space explosion when representing large or complex systems, and subsequent work focussed on exploiting the compositionality of the process algebra to decompose or simplify the underlying CTMC, e.g. [2, 5, 11, 15, 19, 22]. More recently stochastic process algebras such as the stochastic π-calculus [16, 20] and PEPA [3, 4] have been used for modelling biochemical mechanisms within cells. In these cases the state space explosion problem becomes almost insurmountable. Consequently in many cases models are analysed by discrete event simulation rather than being able to abstractly consider all possible behaviours. However, an alternative approach, fluid approximation [13], has emerged which is suitable for models comprised of large numbers of repeated components. In this approach an alternative state representation is chosen, and system dynamics are approximated by a continuous updating of state rather than the usual discrete steps which are represented in process algebra semantics. Whereas process algebra semantics usually capture the structure of the system in terms of interacting components and each of their local states, in this
approach only an aggregation is captured, which counts the number of components which currently exhibit a particular behaviour [25]. Whilst the focus of stochastic process algebras has understandably been primarily quantitative analysis, qualitative analysis can also provide valuable insight into the behaviour of a system. In contrast, in Petri net modelling there are well-established techniques of structural analysis [6, 9, 17, 23, 24]. In this paper we show how the new state representation schema for the stochastic process algebra PEPA, developed to support fluid approximation, makes it possible to readily adapt structural analysis techniques for Petri nets to PEPA. Moreover, the compact representation form means that qualitative analysis can now be applied to systems of a size and complexity which could not previously be considered. The remainder of this paper is organised as follows. Section 2 introduces PEPA and the numerical representation schema used throughout this paper. In Section 3 we explain how this representation makes apparent the P/T structure underlying every PEPA model, and in Section 4 we show how this structure can be used to uncover invariants of a model. In Sections 5 and 6 we discuss the linearised state space and a new deadlock-checking algorithm respectively. Section 7 presents some related work and we conclude in Section 8.
2 The PEPA Modelling Formalism
This section briefly introduces the PEPA language and its numerical representation scheme. The numerical representation scheme for PEPA was developed by Ding in his thesis [7]; it represents a model numerically rather than syntactically, supporting the use of mathematical tools and methods to analyse the model.

2.1 Introduction to PEPA
PEPA (Performance Evaluation Process Algebra) [12], developed by Hillston in the 1990s, is a high-level model specification language for low-level stochastic models, and describes a system as an interaction of components which engage in activities. In contrast to classical process algebras, activities are assumed to have a duration which is a random variable governed by an exponential distribution. Thus each activity in PEPA is a pair (α, r) where α is the action type and r is the activity rate. The language has a small number of combinators, for which we provide a brief introduction below; the structured operational semantics can be found in [12]. The grammar is as follows:

    S ::= (α, r).S | S + S | C_S
    P ::= P ⋈_L P | P/L | C

where S denotes a sequential component and P denotes a model component which executes in parallel. C stands for a constant which denotes either a sequential component or a model component as introduced by a definition. C_S stands for
constants which denote sequential components. The effect of this syntactic separation between these types of constants is to constrain legal PEPA components to be cooperations of sequential processes.

Prefix: The prefix component (α, r).S has a designated first activity (α, r), which has action type α and a duration which satisfies an exponential distribution with parameter r, and subsequently behaves as S.

Choice: The component S + T represents a system which may behave either as S or as T. The activities of both S and T are enabled. Since each has an associated rate, there is a race condition between them and the first to complete is selected. This gives rise to an implicit probabilistic choice between actions dependent on the relative values of their rates.

Hiding: Hiding provides type abstraction, but note that the duration of the activity is unaffected. In P/L all activities whose action types are in L appear as the “private” type τ.
Cooperation: P ⋈_L Q denotes cooperation between P and Q over the action types in the cooperation set L. The cooperands are forced to synchronise on action types in L while they can proceed independently and concurrently with other enabled activities (individual activities). The rate of the synchronised or shared activity is determined by the slower cooperand (see [12] for details). We write P ∥ Q as an abbreviation for P ⋈_L Q when L = ∅, and P[N] is used to represent N copies of P in parallel, i.e. P[3] = P ∥ P ∥ P.

Constant: The meaning of a constant is given by a defining equation such as A =def P. This allows infinite behaviour over finite states to be defined via mutually recursive definitions.

On the basis of the operational semantic rules (please refer to [12] for details), a PEPA model may be regarded as a labelled multi-transition system

    ( C, Act, { −(α,r)→ | (α, r) ∈ Act } )

where C is the set of components, Act is the set of activities and the multi-relation −(α,r)→ is given by the rules. If a component P behaves as Q after it completes activity (α, r), then we denote the transition as P −(α,r)→ Q. The memoryless property of the exponential distribution, which is satisfied by the durations of all activities, means that the stochastic process underlying the labelled transition system has the Markov property. Hence the underlying stochastic process is a CTMC. Note that in this representation the states of the system are the syntactic terms derived by the operational semantics. Once constructed, the CTMC can be used to find steady-state or transient probability distributions from which quantitative performance measures can be derived.

2.2 Numerical Representation of PEPA Models
As explained above there have been two key steps in the use of fluid approximation for PEPA models: firstly, the shift to a numerical vector representation of
the model, and secondly, the use of ordinary differential equations to approximate the dynamic behaviour of the underlying CTMC. In this paper we are only concerned with the former modification.

This section presents the numerical representation of PEPA models developed in [7]. For convenience, we may represent a transition P −(α,r)→ Q as P −(α, r_{P→Q})→ Q, or often simply as P −α→ Q since the rate is not pertinent to structural analysis, where P and Q are two local derivatives. Following [13], hereafter the term local derivative refers to the local state of a single sequential component.

In the standard structured operational semantics of PEPA used to derive the underlying CTMC, the state representation is syntactic, keeping track of the local derivative of each component in the model. In the alternative numerical vector representation some information is lost as states only record the number of instances of each local derivative:
Definition 1 (Numerical Vector Form [13]). For an arbitrary PEPA model M with n component types C_i, i = 1, 2, ···, n, each with d_i distinct local derivatives, the numerical vector form of M, m(M), is a vector with d = ∑_{i=1}^{n} d_i entries. The entry m[C_ij] records how many instances of the jth local derivative of component type C_i are exhibited in the current state.

In the following we will find it useful to distinguish derivatives according to whether they enable an activity, or are the result of that activity:

Definition 2 (Pre and post local derivative)

1. If a local derivative P can enable an activity α, that is P −α→ ·, then P is called a pre local derivative of α. The set of all pre local derivatives of α is denoted by pre(α), called the pre set of α.
2. If Q is a local derivative obtained by firing an activity α, i.e. · −α→ Q, then Q is called a post local derivative of α. The set of all post local derivatives is denoted by post(α), called the post set of α.
3. The set of all the local derivatives derived from P by firing α, i.e. post(P, α) = {Q | P −α→ Q}, is called the post set of α from P.

Within a PEPA model there may be many instances of the same activity type, but we will wish to identify those that have exactly the same effect within the model. In order to do this we additionally label activities according to the derivatives to which they relate, giving rise to labelled activities:

Definition 3 (Labelled Activity)

1. For any individual activity α, for each P ∈ pre(α), Q ∈ post(P, α), label α as α_{P→Q}.
2. For a shared activity α, for each (Q_1, Q_2, ···, Q_k) ∈ post(pre(α)[1], α) × post(pre(α)[2], α) × ··· × post(pre(α)[k], α), label α as α_w, where w = (pre(α)[1] → Q_1, pre(α)[2] → Q_2, ···, pre(α)[k] → Q_k).

Each α_{P→Q} or α_w is called a labelled activity. The set of all labelled activities is denoted by A_label. For the above labelled activities α_{P→Q} and α_w, their respective pre and post sets are defined as pre(α_{P→Q}) = {P}, post(α_{P→Q}) = {Q}, pre(α_w) = pre(α), post(α_w) = {Q_1, Q_2, ···, Q_k}.

In the numerical representation scheme, the transitions between states of the model are represented by a matrix, termed the activity matrix; this records the impact of the labelled activities on the local derivatives.

Definition 4 (Activity Matrix, Pre Activity Matrix, Post Activity Matrix). For a model with N_A labelled activities and N_D distinct local derivatives, the activity matrix C is an N_D × N_A matrix, and its entries are defined as follows:

    C(P_i, α_j) = +1 if P_i ∈ post(α_j)
                  −1 if P_i ∈ pre(α_j)
                   0 otherwise

where α_j is a labelled activity. The pre activity matrix C^Pre and post activity matrix C^Post are defined as

    C^Pre(P_i, α_j)  = +1 if P_i ∈ pre(α_j),  0 otherwise;
    C^Post(P_i, α_j) = +1 if P_i ∈ post(α_j), 0 otherwise.

From Definitions 3 and 4, each column of the activity matrix corresponds to a system transition and each transition can be represented by a column of the activity matrix. The activity matrix equals the difference between the post and pre activity matrices, i.e. C = C^Post − C^Pre. The rate of the transition between states is specified by a transition rate function, but we omit this detail here since we are concerned with qualitative analysis. See [7] for details.

For the remainder of this paper we assume that the PEPA models considered satisfy two assumptions. Firstly, that there is no cooperation within groups of components of the same type. Secondly, that each column of the activity matrix of a model is distinct, i.e. each labelled activity is distinct in terms of pre and post local derivatives.
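To make Definition 4 concrete, the following short sketch (our illustration, not part of the paper; it anticipates Model 1 of Section 3.2, and all names in it are chosen purely for exposition) builds the pre, post and activity matrices from the pre and post sets of each labelled activity.

# A sketch of Definition 4 in Python, using Model 1 from Section 3.2.
import numpy as np

derivatives = ["User1", "User2", "Provider1", "Provider2"]
# Each labelled activity maps to its (pre set, post set) of local derivatives.
activities = {
    "l_task1": ({"User1", "Provider1"}, {"User2", "Provider2"}),  # shared
    "l_task2": ({"User2"}, {"User1"}),                            # individual
    "l_reset": ({"Provider2"}, {"Provider1"}),                    # individual
}

def build_matrices(derivatives, activities):
    # Returns (C_pre, C_post, C) with C = C_post - C_pre, as in Definition 4.
    n_d, n_a = len(derivatives), len(activities)
    C_pre = np.zeros((n_d, n_a), dtype=int)
    C_post = np.zeros((n_d, n_a), dtype=int)
    for j, (pre, post) in enumerate(activities.values()):
        for i, P in enumerate(derivatives):
            C_pre[i, j] = 1 if P in pre else 0
            C_post[i, j] = 1 if P in post else 0
    return C_pre, C_post, C_post - C_pre

C_pre, C_post, C = build_matrices(derivatives, activities)
print(C)  # columns l_task1, l_task2, l_reset

Running the sketch reproduces exactly the matrices shown later in Table 2.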
3 Place/Transition Structure Underlying PEPA Models
In the previous section we defined the numerical state vector and the activity matrix for a PEPA model. With the exception of rate information, the PEPA model can be recovered from the activity matrix since this captures all the structural information about the system. Moreover, this representation of the system allows an underlying P/T structure to be derived from the PEPA model, as we will now show.

3.1 P/T Systems
P/T nets¹ are a form of Petri net which can also be interpreted in terms of conditions and events. When the net is considered together with an initial marking it is termed a P/T system.

Definition 5 (P/T net, Marking, P/T system, [6])

1. A P/T net is a structure N = (P, T, Pre, Post) where P and T are the sets of places and transitions respectively, and Pre and Post are the |P| × |T| sized, natural valued, incidence matrices.
2. A marking is a vector m : P → ℕ that assigns to each place of a P/T net a nonnegative integer.
3. A P/T system is a pair S = ⟨N, m0⟩: a net N with an initial marking m0.

There is a substantial body of theory developed for the structural analysis of P/T systems (see for example [6] for an overview). Thus, providing a mapping from a PEPA model to an underlying P/T system facilitates the application of these results to PEPA models. Moreover, this is straightforward to do since the numerical vector form and the activity matrix correspond directly with the marking and the incidence matrix used to characterise a P/T system.

From Definition 5, it is easy to see that the structure N = ⟨D, A_label, C^Pre, C^Post⟩ derived from a PEPA model is a P/T net, where D, A_label are the sets of all local derivatives and all labelled activities of the PEPA model respectively, and C^Pre, C^Post are the pre and post activity matrices respectively. Given a starting state m0, S = ⟨N, m0⟩ is a P/T system. Clearly, each reachable marking m from m0 is a state of the aggregated CTMC underlying the given PEPA model. This leads us to:

Theorem 1. There is a P/T system ⟨N, m0⟩ underlying any PEPA model, where m0 is the starting state and N = ⟨D, A_label, C^Pre, C^Post⟩ is a P/T net: D is the local derivative set, A_label is the labelled activity set, and C^Pre and C^Post are the pre and post activity matrices respectively.

¹ “P/T” signifies “place/transition”.
In the following, the P/T structure underlying a PEPA model is denoted by N = ⟨D, A_label, C^Pre, C^Post⟩ or S = ⟨N, m0⟩, which are comprised of the local derivative and labelled activity sets, the pre and post activity matrices, and the starting state. We now introduce some terminology related to P/T systems for PEPA models (see [6] for reference). For convenience, in order to distinguish the P/T system used in the context of a PEPA model, it will be referred to as the P/T structure underlying the model.

A transition α is enabled in a state m if and only if m ≥ C^Pre[·, α]; its firing yields a new state m′ = m + C[·, α]. This is denoted by m −α→ m′. In a P/T system an occurrence sequence from m is a sequence of transitions σ = t_1 ··· t_k ··· such that m −t_1→ m_1 ··· −t_k→ m_k ···. The language of S = ⟨N, m0⟩, denoted by L(S) or L(N, m0), is the set of all the occurrence sequences from the starting state m0. A state m is said to be reachable from m0 if there exists a σ in L(S) such that m0 −σ→ m, that is m = m0 + C·σ̄, where σ̄ is the firing count vector corresponding to σ. The set of all the reachable states from m, called the reachability set from m, is denoted by RS(N, m). According to the definition, the reachability set of the P/T system S = ⟨N, m0⟩ is

    RS(N, m0) = { m ∈ ℕ^{|P|} | ∃σ ∈ L(S) such that m = m0 + C·σ̄ },

where σ̄ is the firing count vector of the occurrence sequence σ, and |P| represents the number of elements in P, i.e. |P| = #P. The correspondence between P/T systems and PEPA models is shown in Table 1.

Table 1. P/T structure in PEPA models

P/T terminology                           PEPA terminology
P: place set                              D: local derivative set
T: transition set                         A_label: labelled activity set
Pre: pre matrix                           C^Pre: pre activity matrix
Post: post matrix                         C^Post: post activity matrix
C = Post − Pre: incidence matrix          C = C^Post − C^Pre: activity matrix
m: marking                                m: state vector
RS(N, m0): reachability set (from m0)    RS(N, m0): state space (with starting state m0)
3.2 Example
In this subsection we present a simple example PEPA model and illustrate the concepts introduced above. The system consists of two components, a service user and a service provider. There are three activities in the system: the user and the
provider cooperate on task1; the user then undertakes task2 independently and the provider carries out a reset. The PEPA model of this system is shown on the left hand side of Figure 1:

    User1     =def (task1, a).User2
    User2     =def (task2, b).User1
    Provider1 =def (task1, a).Provider2
    Provider2 =def (reset, d).Provider1

    User1[M] ⋈_{task1} Provider1[N]

Fig. 1. Transition diagram of Model 1 (M = N = 2). [Figure: the right hand side, not reproduced here, shows the nine states (2,0,2,0)^T, (1,1,2,0)^T, (1,1,1,1)^T, (2,0,1,1)^T, (2,0,0,2)^T, (0,2,2,0)^T, (0,2,1,1)^T, (1,1,0,2)^T and (0,2,0,2)^T with the task1, task2 and reset transitions between them.]
Recall that the numerical vectors indicate the system states while the activity matrix embodies the rules of system operation. Regardless of the activity rates, the same numerical vectors and activity matrix describe the operational behaviour of the model. The activity matrix C and pre activity matrix C^Pre of Model 1 are shown in Table 2. For convenience, the labelled activities are denoted by l_task1, l_task2 and l_reset respectively. If we assume that there are two users and two providers in the system, i.e. M = N = 2, then the starting state is m0 = (2, 0, 2, 0)^T and the diagram of the transitions between the states is as shown on the right hand side of Figure 1.

Table 2. Activity matrix and pre activity matrix of Model 1

(a) Activity matrix C
            l_task1  l_task2  l_reset
User1         −1        1        0
User2          1       −1        0
Provider1     −1        0        1
Provider2      1        0       −1

(b) Pre activity matrix C^Pre
            l_task1  l_task2  l_reset
User1          1        0        0
User2          0        1        0
Provider1      1        0        0
Provider2      0        0        1
According to the semantics of PEPA, task1 is enabled when there is at least one instance of User1 and one instance of Provider1. The mathematical expression of this is m ≥ (1, 0, 1, 0)^T, i.e. m ≥ C^Pre(·, l_task1).² Thus each column of the pre activity matrix reflects the required condition for enabling the corresponding

² In this context one vector being greater than another is interpreted element-wise, i.e. every element of the first vector is greater than or equal to the corresponding element of the second.
labelled activity. For example, in the initial state m0, task1 can be performed because m0 ≥ C^Pre(·, l_task1). Both task2 and reset are not enabled since m0 ≱ C^Pre(·, l_task2) and m0 ≱ C^Pre(·, l_reset). After firing task1 at m0, the state vector becomes m1 = (1, 1, 1, 1)^T, reflecting update by the corresponding column of the activity matrix: m1 = m0 + l_task1. Since m1 ≥ C^Pre(·, l_task2) and m1 ≥ C^Pre(·, l_reset), m1 can fire either task2 or reset. Suppose task2 is fired; then we get m2 = (2, 0, 1, 1)^T. That is

    m2 = m1 + l_task2 = m0 + l_task1 + l_task2.    (1)
Note that each column of a matrix can be extracted by multiplying the matrix on the right by an appropriate vector:

    l_task1 = C(1, 0, 0)^T,   l_task2 = C(0, 1, 0)^T,   l_reset = C(0, 0, 1)^T.

Thus (1) can be written as

    m2 = m0 + l_task1 + l_task2 = m0 + C(1, 1, 0)^T.    (2)
Generally, the execution of a labelled activity α in state m yields the state m′, denoted by m −α→ m′. This can be expressed as

    m′ = m + C(·, α)    (3)

where C(·, α) is the transition vector corresponding to α, i.e. the column of the activity matrix that corresponds to α. If an occurrence sequence σ = α_1 ··· α_k ··· α_l ∈ A_label* from m0 yields the state m, i.e.

    m0 −α_1→ m1 ··· −α_k→ m_k ··· −α_l→ m,

then we write m0 −σ→ m. We define the execution count vector σ̄ of a sequence σ by σ̄[α] = ♯(α, σ), where ♯(α, σ) is the number of occurrences of α in σ. Integrating the evolution equation in (3) from m0 to m we get:

    m = m0 + C·σ̄.    (4)
The formula (4) is called the state equation, reflecting that each state in the state space is related to the starting state through an algebraic equation. This is consistent with the fact that each system state results from the evolution of the system from the starting state. Of course, the state equation does not involve timing information, so it cannot be used for quantitative analysis. However, as we shall see, it is sufficient for qualitative analysis of the system.
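Both the enabling test m ≥ C^Pre(·, α) and the state equation (4) are directly mechanisable. The following sketch (ours, hard-coding the Model 1 matrices of Table 2) replays the derivation of m1 and m2 above and checks equation (4).

# A sketch of enabling, firing and the state equation for Model 1.
import numpy as np

C = np.array([[-1,  1,  0],
              [ 1, -1,  0],
              [-1,  0,  1],
              [ 1,  0, -1]])        # activity matrix, Table 2(a)
C_pre = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [1, 0, 0],
                  [0, 0, 1]])       # pre activity matrix, Table 2(b)
labels = ["l_task1", "l_task2", "l_reset"]
m0 = np.array([2, 0, 2, 0])         # starting state, M = N = 2

def enabled(m):
    # Labelled activities enabled in state m: m >= C_pre[:, j] element-wise.
    return [a for j, a in enumerate(labels) if np.all(m >= C_pre[:, j])]

def fire(m, j):
    # Firing the j-th labelled activity: m' = m + C[:, j], equation (3).
    return m + C[:, j]

print(enabled(m0))                  # ['l_task1']: only task1 is enabled at m0
m1 = fire(m0, 0)                    # (1, 1, 1, 1)
m2 = fire(m1, 1)                    # (2, 0, 1, 1)
sigma = np.array([1, 1, 0])         # execution count vector of l_task1 l_task2
assert np.array_equal(m2, m0 + C @ sigma)   # the state equation (4)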
4 Invariance in PEPA Models
Many systems have structural properties which should remain unchanged throughout the operation of the system and it can be important to establish such invariants. Conversely, finding characteristics of a model which do not vary
with the dynamics may indicate a problem with the model and/or the system represented. In either case, invariance analysis is an important approach to qualitative analysis. Formally an invariant is a property of the state of the system which is preserved by all transitions.
Definition 6 (Invariant). An invariant of a PEPA model is a vector y with #D entries such that for any state m in the state space, y^T m is a constant, or equivalently

    y^T m = y^T m0,    (5)

since the starting state m0 is a state and the constant is just y^T m0.

It is well-known from P/T systems that such vectors can be expressed in terms of the incidence matrix, and thus we can establish an analogous result for the activity matrix of a PEPA model.

Lemma 1. If y^T C = 0, then y is an invariant.

Proof. For any state m in the state space, there exists a corresponding sequence σ such that m = m0 + Cσ̄. Multiplying both sides of this equation by the vector y^T, we have y^T m = y^T m0 + y^T Cσ̄. Obviously, y^T m = y^T m0 holds if and only if y^T Cσ̄ = 0. □
One particular form of invariant can always be found for a PEPA model. These invariants reflect conservation laws for the components of a PEPA model. The syntax of PEPA ensures that components can neither be created nor destroyed during the evolution of a model. Thus, the population of each component type is a constant in any state.

Let P be an arbitrary component type of a given PEPA model. The state transitions of component type P can only move P between its local derivatives; it is not possible to change into any other component type's local derivative. For an arbitrary labelled activity α, if there is a pre local derivative of α within component type P, there must exist a post local derivative of α, and this post local derivative must be within component type P. Therefore, within the rows for each component type, in each column of the activity matrix, for each "1" there must be a corresponding "−1". Consequently for each component type the net effect of any activity is zero. Thus:

Lemma 2. Let C be the activity matrix of a given PEPA model and D be the set of all local derivatives. For an arbitrary component type P of this model, let D_P be the local derivative set of P. Define a vector y_P with #D entries:

    y_P[Q] = 1 if Q ∈ D_P
             0 if Q ∈ D \ D_P

Then y_P^T C = 0.
Remark 1. Obviously, y_P in Lemma 2 is an invariant by Lemma 1. Let 𝒫 be the set of all component types of the given PEPA model. According to the definition of an invariant, a linear combination of invariants is also an invariant. So the sum ∑_{P∈𝒫} y_P = 1 is also an invariant. In fact, 1^T C = ∑_{P∈𝒫} y_P^T C = 0.

Lemma 2 and Lemma 1 imply the following: the population of each component type, and therefore of all component types together, is constant in any state. This fact is termed the conservation law.

Proposition 1 (Conservation Law). For a given PEPA model, let D be the set of all local derivatives. For an arbitrary component type P ∈ 𝒫, let D_P be the local derivative set of P. Then

    ∑_{Q∈D_P} m[Q] = ∑_{Q∈D_P} m0[Q],    (6)

    ∑_{Q∈D} m[Q] = ∑_{Q∈D} m0[Q].    (7)
Proof. For any m, there exists a σ such that m = m0 + Cσ̄. By Lemma 2, y_P^T C = 0, where y_P is given in that lemma. So we have

    ∑_{Q∈D_P} m[Q] = y_P^T m = y_P^T (m0 + Cσ̄) = y_P^T m0 = ∑_{Q∈D_P} m0[Q].

Moreover, let 𝒫 be the set of all component types of this model; then

    ∑_{Q∈D} m[Q] = ∑_{P∈𝒫} ∑_{Q∈D_P} m[Q] = ∑_{P∈𝒫} ∑_{Q∈D_P} m0[Q] = ∑_{Q∈D} m0[Q]. □

Proposition 1 easily leads to the boundedness of the underlying state space.

Corollary 1 (Boundedness). The state space underlying a PEPA model is bounded: for any state m and any local derivative Q,

    0 ≤ m[Q] ≤ max_{P∈𝒫} { ∑_{Q′∈D_P} m0[Q′] },

where 𝒫 is the component type set, D_P is the local derivative set corresponding to component type P, and m0 is the starting state.
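As a sanity check, the conservation-law invariants of Lemma 2 can be verified mechanically; the sketch below (ours, reusing the Model 1 activity matrix of Table 2) confirms that the indicator vector of each component type's local derivatives is a left null vector of C.

# A sketch verifying Lemma 2 for Model 1.
import numpy as np

C = np.array([[-1,  1,  0],
              [ 1, -1,  0],
              [-1,  0,  1],
              [ 1,  0, -1]])        # Model 1 activity matrix
# Rows 0-1 are the User derivatives, rows 2-3 the Provider derivatives.
y_user = np.array([1, 1, 0, 0])     # y_P for component type User
y_provider = np.array([0, 0, 1, 1]) # y_P for component type Provider
for y in (y_user, y_provider):
    assert np.all(y @ C == 0)       # y_P^T C = 0, so y_P is an invariant
# Hence m[User1] + m[User2] and m[Provider1] + m[Provider2] stay constant,
# which is exactly Proposition 1 for Model 1.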
4.1 Example
As an example, consider Model 2, the PEPA model shown in Figure 2. Here two types of components, namely X and Y, are synchronised through the shared activities action1 and action2. As we will see, this model, unlike Model 1, exhibits invariance beyond the simple conservation law.
    X1 =def (action1, a1).X2
    X2 =def (action2, a2).X1
    Y1 =def (action1, a1).Y3 + (job1, c1).Y2
    Y2 =def (job2, c2).Y1
    Y3 =def (job3, c3).Y4
    Y4 =def (action2, a2).Y2 + (job4, c4).Y3

    (X1[M1] ∥ X2[M2]) ⋈_{action1, action2} (Y1[N1] ∥ Y2[N2] ∥ Y3[N3] ∥ Y4[N4])

Fig. 2. Model 2 and the transition systems of its components. [Figure: alongside the definitions, the component transition systems are drawn: X cycles between X1 and X2 via action1 and action2; Y moves among Y1, Y2, Y3 and Y4 via action1, action2 and job1–job4.]
Let m[Xi ], m[Yj ] (i = 1, 2, j = 1, 2, 3, 4) denote the numbers of the components X and Y in the local derivatives Xi , Yj respectively. For this model we assert the following: the difference between the number of Y in their local derivatives Y3 and Y4 , and the number of X in the local derivative X2 , i.e. m[Y3 ] + m[Y4 ] − m[X2 ], is a constant in any state. This fact can be explained as follows. Note that there is only one way to increase m[Y3 ] + m[Y4 ], i.e. executing the activity action1. Whenever action1 is performed, then a copy of Y enters Y3 from Y1 . Meanwhile, since action1 is shared by X, a corresponding copy of X will go to X2 from X1 . In other words, m[Y3 ]+m[Y4 ] and m[X2 ] increase equally and simultaneously. On the other hand, there is also only one way to decrease m[Y3 ] + m[Y4 ] and m[X2 ], i.e. performing the shared activity action2. This also allows m[Y3 ] + m[Y4 ] and m[X2 ] to decrease equally and simultaneously. So, the difference m[Y3 ] + m[Y4 ] − m[X2 ] will remain constant in any state and thus at any time. This assertion, i.e. “m[Y3 ] + m[Y4 ] − m[X2 ] is a constant”, can be easily verified. That is, y = (0, −1, 0, 0, 1, 1)T is an invariant of Model 2, since yT m = m[Y3 ] + m[Y4 ] − m[X2 ]
(8)
is a constant. Lemma 1 provides a method to find invariants: any solution of y^T C = 0, i.e. C^T y = 0, is an invariant. For Model 2, the activity matrix C is as depicted in Table 3 (the labels of the labelled activities are omitted since there is no confusion). We can solve the linear algebraic equation C^T y = 0. The rank of C is three, so by linear algebra the dimension of the solution space {y : C^T y = 0} is also three (6 − 3 = 3). We can readily find three vectors which form a basis of the solution space: y1 = (1, 1, 0, 0, 0, 0)^T, y2 = (0, 0, 1, 1, 1, 1)^T, y3 = (0, 1, 1, 1, 0, 0)^T.
Table 3. Activity matrix of Model 2

       action1  action2  job1  job2  job3  job4
X1       −1        1       0     0     0     0
X2        1       −1       0     0     0     0
Y1       −1        0      −1     1     0     0
Y2        0        1       1    −1     0     0
Y3        1        0       0     0    −1     1
Y4        0       −1       0     0     1    −1
These vectors are invariants of Model 2, with y1 corresponding to the conservation law for component type X, y2 corresponding to the conservation law for component type Y , and y3 corresponding to the invariant discussed above, although in a slightly different form. 4.2
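This computation is routine to mechanise: such invariants are exactly the null space of C^T. The sketch below (ours, hard-coding the Table 3 matrix and using exact rational arithmetic via sympy) prints a basis of that null space; it spans the same three-dimensional space as y1, y2, y3.

# A sketch computing the invariants of Model 2 as the null space of C^T.
from sympy import Matrix

C = Matrix([[-1,  1,  0,  0,  0,  0],   # X1
            [ 1, -1,  0,  0,  0,  0],   # X2
            [-1,  0, -1,  1,  0,  0],   # Y1
            [ 0,  1,  1, -1,  0,  0],   # Y2
            [ 1,  0,  0,  0, -1,  1],   # Y3
            [ 0, -1,  0,  0,  1, -1]])  # Y4

basis = C.T.nullspace()   # all y with C^T y = 0, i.e. all p-flows
for y in basis:
    print(y.T)            # three independent invariants: 6 - rank(C) = 3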
Liveness in PEPA Models
We should point out that yT C = 0 is not a necessary condition for an invariant y. For example, consider Model 2 with m0 = (100, 0, 0, 0, 0, 3)T . Then the state space of the model in terms of numerical vectors has four elements: m0 and m1 = (100, 0, 0, 0, 1, 2)T ,
m2 = (100, 0, 0, 0, 2, 1)T ,
m3 = (100, 0, 0, 0, 3, 0)T .
y = (0, 1, 1, 0, 0, 0)T is an invariant since yT mk = 0 (k = 0, 1, 2, 3), but yT C = 0. However, for a class of PEPA models, i.e. live PEPA models, the inverse of Lemma 1 is true: y is an invariant can imply yT C = 0. Definition 7 (Liveness for PEPA). Denote by N , m0 the P/T structure underlying a PEPA model. 1. A labelled activity α is live if for any state in the reachability set, there exists a sequence of activities such that the state after performing this sequence can perform an α activity. 2. If all activities are live, then both N , m0 and the PEPA model are said to be live. The liveness defined for PEPA is analogous to that defined for P/T nets (see [6]). For some PEPA models, if they have no deadlocks then they are live, see Lemma 8. The following proposition directly derives from P/T theory (page 319, [24]): for a live P/T net yT C = 0 is equivalent to yT m = yT m0 . Proposition 2 If a PEPA model is live, i.e. the underlying P/T structure
D, Alabel , CPre , CPost , m0 is live, then for any state m in the state space, yT C = 0 ⇐⇒ yT m = yT m0 .
14
J. Ding and J. Hillston
For the class of live PEPA models, finding invariants is much simpler—we only need to solve the activity-matrix-based linear algebraic equation CyT = 0.
5
Linearisation of State Space for PEPA
As discussed earlier using the numerical vector form means that the size of the state space is substantially reduced with respect to the full syntactic representation generated by the structured operational semantics. Nevertheless, this state space can still grow rapidly. For example for Model 1 the state space size is (M + 1) × (N + 1) and when M, N are large, this may exceed what can be handled as Table 4 demonstrates. Table 4. Elapsed time of state space derivation. All experiments were carried out using the PEPA Plug-in (v0.0.19) for Eclipse Platform (v3.4.2), on a 2.66GHz Xeon CPU with 4Gb RAM running Scientific Linux 5. The runtimes here are elapsed times reported by the Eclipse platform. (M, N ) (300,300) (350,300) (400,300) (400,400) time 2879 ms 4236 ms “Java heap space” “GC overhead limit exceeded”
These problems motivated the development of the alternative ordinary differential equation (ODE) semantics for PEPA giving a fluid approximation [13]. However qualitative analysis techniques for ODEs are not yet well-established and we were particularly interested in knowing when models have a deadlock which cannot be distinguished in a transient trace of the ODE simulation from a steady state. Thus we sought to exploit the link to an underlying P/T structure to develop an alternative deadlock checking algorithm which does not rely on state space exploration. 5.1
Linearisation of State Space
As shown in Section 3, the states of a PEPA model can be expressed using the state equations. If the state space can be described using such linear equations, then its storage will be much more compact. In this subsection we present a linearised description of the state space of PEPA models. The full reachability set RS(S) of a given PEPA model with the activity matrix C and starting state m0 is
RS(N , m0 ) = m ∈ |D| | ∃σ ∈ L(S) such that m = m0 + C · σ .
This reachability set is descriptive rather than constructive, so it does not help us to derive and store the entire state space. Moreover, for some σ̄ ∈ ℕ^{|A_label|},
m = m0 + Cσ̄ ∈ ℕ^{|D|} may not be a valid state because there is no valid occurrence sequence corresponding to σ̄. However, such an m is said to belong to a generalisation of the reachability set: the linearised reachability set. Before giving these definitions, we first define notions of flow and semiflow in the context of PEPA. (For the definitions of these concepts in the context of P/T systems, see [6].)
1. A p-flow is a vector y : D → such that yT C = 0. Natural and nonnegative flows are called semiflows: vectors y : D → such that yT C = 0. The model is conservative if there exists a p-semiflow whose support covers D, that is {P ∈ D | y[P ] > 0} = D.
2. A basis ( fundamental set) of p-flows (p-semiflows), B = {y1 , y2 , · · · , yq } (Φ = {y1 , y2 , · · · , yq }) is a minimal subset which will generate any p-flow (p-semiflow) as follows: y = yj ∈Ψ kj yj , kj ∈ .
3. A t-flow is a vector x : Alabel → such that Cx = 0. Natural and nonnegative flows are called semiflows: vectors x : Alabel → such that Cx = 0. The model is consistent if there exists a t-semiflow whose support covers Alabel .
By Proposition 1, any PEPA model is conservative. Obviously, a p-semiflow is a special kind of p-flow while a t-semiflow is a special t-flow. Moreover, according to Lemma 1, a p-flow is an invariant. Let B and Φ be a basis of p-flows and a fundamental set of p-semiflows respectively. Then for any m ∈ RS(N , m0 ), we have Bm = 0 and Φm = 0. However this does not imply that any m ∈ |D| that satisfies Bm = 0 or Φm = 0 is in RS(N , m0 ). But they do belong to generalised reachability sets. We adopt the following definitions from those for linearised reachability sets in P/T systems [24]:
Definition 9 (Linearised Reachability Set). Let S be a P/T structure. 1. Its linearised reachability set using the state equation is defined as
LRSSE (S) = m ∈ |P | | ∃σ ∈ |T | such that m = m0 + C · σ .
2. Its linearised reachability set using the state equation over reals is defined as
LRSSER(S) = m ∈ |P | | ∃σ ≥ 0 such that m = m0 + C · σ .
3. Its linearised reachability set using a basis B of p-flows is defined as
LRSPf (S) = m ∈ |P | | B · m = B · m0 .
16
J. Ding and J. Hillston
4. Its linearised reachability set using a fundamental set of p-semiflows is defined as
LRSPsf (S) = m ∈ |P | | Φ · m = Φ · m0 .
These sets are characterised by linear algebraic equations, which means the job of determining whether a state belongs to a set is reduced to verifying the equations. Clearly, RS(S) ⊆ LRS_SE(S). The difference between the definitions of LRS_SE(S) and LRS_SER(S) is embodied in the different conditions imposed on σ̄. There is no doubt that LRS_SE(S) ⊆ LRS_SER(S). Since for any m ∈ LRS_SER(S), m = m0 + C·σ̄, then B·m = B·m0 + BC·σ̄ = B·m0, so m ∈ LRS_Pf(S) and thus LRS_SER(S) ⊆ LRS_Pf(S). The definitions of LRS_Pf(S) and LRS_Psf(S) are directly related to the invariants in the system. Clearly, LRS_Pf(S) ⊆ LRS_Psf(S). The relationships between these reachability sets are shown in the following lemmas, analogous to those for reachability sets in [24].

Lemma 3. Let S be a P/T structure; then

1. RS(S) ⊆ LRS_SE(S) ⊆ LRS_SER(S) ⊆ LRS_Pf(S) ⊆ LRS_Psf(S).
2. If N is conservative, then LRS_Pf(S) = LRS_Psf(S).
3. If N is consistent, then LRS_SER(S) = LRS_Pf(S).

For the P/T structure S underlying a PEPA model, we will see that LRS_SE(S) = LRS_SER(S).

Lemma 4. Let S be the P/T structure underlying a PEPA model. Then LRS_SE(S) = LRS_SER(S).

Proof. For any m ∈ LRS_SER(S), there exists σ̄ ≥ 0 such that m = m0 + C·σ̄. Recall that m, m0 ∈ ℕ^{|D|}, all elements of C are either −1, 0 or 1, and we assume that each column of C is distinct. So all elements of σ̄ must be integers. Since σ̄ ≥ 0, thus σ̄ ∈ ℕ^{|A_label|}. That is, m ∈ LRS_SE(S). So LRS_SER(S) ⊆ LRS_SE(S). Since LRS_SE(S) ⊆ LRS_SER(S), therefore LRS_SE(S) = LRS_SER(S). □
Note that this proof is based on the assumption of distinct columns in the activity matrix. However, this is not a serious restriction because the activity matrix of any PEPA model can be put into this form by simple pre-processing.
We now introduce the concept of equal conflict (see [6]).

Definition 10 (Equal Conflict). Let N = (P, T, Pre, Post) be a P/T net.

1. The P/T net N is called equal conflict (EQ) if pre(t) ∩ pre(t′) ≠ ∅ implies Pre[·, t] = Pre[·, t′].
2. The P/T net N is ordinary if each entry of Pre and Post is either zero or one.
3. An ordinary EQ net is a free choice (FC) net.
4. A PEPA model is called an EQ model if the P/T structure underlying this model is EQ.

The following proposition relates the EQ condition to conditions on the PEPA model.

Proposition 3. For a PEPA model, we have:

1. A PEPA model is an EQ model if and only if for any two labelled activities α and α′, their pre sets are either equal or disjoint, i.e., either pre(α) ∩ pre(α′) = ∅ or pre(α) = pre(α′).
2. An EQ PEPA model is a FC model.

Proof. 1. Let α and α′ be two arbitrary labelled activities. Suppose pre(α) ∩ pre(α′) ≠ ∅. What we need to prove is: C^Pre[·, α] = C^Pre[·, α′] ⟺ pre(α) = pre(α′). Clearly C^Pre[·, α] = C^Pre[·, α′] implies pre(α) = pre(α′), since each nonzero entry of a column of C^Pre represents a pre local derivative of the corresponding activity. Now we prove "⟸". Assume pre(α) = pre(α′) = {P1, P2, ···, Pk}. By the definition of the pre activity matrix, in the columns corresponding to α and α′, all entries are zeros except the entries corresponding to the local derivatives Pi (i = 1, 2, ···, k), which are ones. So these two columns are the same, i.e. C^Pre[·, α] = C^Pre[·, α′].

2. According to the definition of the pre and post activity matrices, all elements are either zero or one; that is, any PEPA model is ordinary. So an EQ PEPA model is a FC model. □
Remark 2. For an arbitrary PEPA model, if the labelled activities α and α′ are both individual, i.e. #pre(α) = #pre(α′) = 1, it is easy to see that either pre(α) ∩ pre(α′) = ∅ or pre(α) = pre(α′). If α is individual but α′ shared, then #pre(α) = 1 < 2 ≤ #pre(α′) and thus pre(α) ≠ pre(α′). Therefore, if a local derivative can enable both individual and shared activities, then the PEPA model is not an EQ model. For example, in Model 2

    Y1 =def (action1, a1).Y3 + (job1, c1).Y2

where action1 is shared while job1 is individual, so Model 2 is not an EQ model.
Definition 11 ([6]). A P/T system S = ⟨N, m0⟩ is reversible if the starting state is reachable from any state in the reachability set, i.e. for any m ∈ RS(S), there exists m′ ∈ RS(N, m) such that m′ = m0.

A live, bounded, and reversible FC system has a good property with respect to the linearised reachability set.

Lemma 5 (page 225, [6]). If S is a live, bounded, and reversible FC system, then RS(S) = LRS_SE(S).

Based on the above lemma, we show that for a class of P/T structures underlying PEPA models, all the generalised sets are the same.

Theorem 2. If the P/T structure S underlying a PEPA model is live, reversible and EQ, then

    RS(S) = LRS_SE(S) = LRS_SER(S) = LRS_Pf(S) = LRS_Psf(S).    (9)
Proof. By Proposition 4 in Section 6, S is live implies that S is consistent. Since S is conservative by Proposition 1, then according to Lemma 3, LRSSER (S) = LRSPf (S) = LRSPsf (S). By Lemma 4, LRSSE (S) = LRSSER (S) = LRSPf (S) = LRSPsf (S). Since S is a bounded FC system according to Corollary 1 and Proposition 3, therefore by Lemma 5, RS(S) = LRSSE (S) = LRSSER (S) = LRSPf (S) = LRSPsf (S).
For live, reversible and EQ PEPA models, because all the states can be described by a matrix equation, the state space derivation is always possible and straightforward. Moreover, the storage memory can be significantly reduced. The validation of a state, i.e. judging whether a vector belongs to the state space, is reduced to checking whether this vector satisfies the matrix equation, and thus searching the entire state space is avoided.

5.2 Example
Recall Model 1 and suppose that the starting state is m0 = (M, 0, N, 0)^T. This is a live, reversible and EQ PEPA model. It is easy to determine that in numerical form the state space is

    RS(S) = { (x1, M − x1, y1, N − y1)^T | x1, y1 ∈ ℕ, 0 ≤ x1 ≤ M, 0 ≤ y1 ≤ N },
and as stated earlier there are (M + 1) × (N + 1) states in RS(S). Now we determine LRS_Psf(S) based only on the activity matrix and the starting state. The activity matrix of Model 1 is

        ⎛ −1  1  0 ⎞
    C = ⎜  1 −1  0 ⎟        (10)
        ⎜ −1  0  1 ⎟
        ⎝  1  0 −1 ⎠

Solving C^T y = 0, we get a basis of the solution space, which forms the rows of the fundamental set Φ:

    Φ = ( 1 1 0 0 )        (11)
        ( 0 0 1 1 )

Note that
    LRS_Psf(S) = { m ∈ ℕ⁴ | Φm = Φm0 }
               = { m ∈ ℕ⁴ | m[x1] + m[x2] = m0[x1] + m0[x2]; m[y1] + m[y2] = m0[y1] + m0[y2] }
               = { m ∈ ℕ⁴ | m[x1] + m[x2] = M; m[y1] + m[y2] = N }
               = { (x1, M − x1, y1, N − y1)^T | x1, y1 ∈ ℕ, 0 ≤ x1 ≤ M, 0 ≤ y1 ≤ N } = RS(S).
Since RS(S) = LRS_Psf(S), by Lemma 3 we have RS(S) = LRS_SE(S) = LRS_SER(S) = LRS_Pf(S) = LRS_Psf(S). This is consistent with Theorem 2.
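Membership of a state in LRS_Psf(S) is thus a matter of checking two linear equations rather than searching a reachability graph. A small sketch (ours, hard-coding Φ from (11)) illustrates the test for Model 1 with M = N = 300, a size at which Table 4 showed explicit state space derivation already near its limits.

# A sketch of the LRS_Psf membership test for Model 1.
import numpy as np

Phi = np.array([[1, 1, 0, 0],
                [0, 0, 1, 1]])      # fundamental set of p-semiflows, (11)
M, N = 300, 300
m0 = np.array([M, 0, N, 0])

def in_LRS_Psf(m):
    # m is in LRS_Psf(S) iff m >= 0 and Phi m = Phi m0.
    m = np.asarray(m)
    return bool(np.all(m >= 0)) and np.array_equal(Phi @ m, Phi @ m0)

print(in_LRS_Psf([120, 180, 299, 1]))   # True: a valid state
print(in_LRS_Psf([120, 179, 299, 1]))   # False: breaks the User conservation law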
6 Improved Deadlock-Checking Methods for PEPA
A deadlock in a set characterises the existence of a state in this set from which no transition can be enabled. Deadlock-checking is an important topic in qualitative analysis of computer and communication systems, with widespread applications, in particular in protocol validation. The current deadlock-checking algorithm for PEPA relies on exploring the entire state space to find whether a deadlock exists. For large scale PEPA models, deadlock-checking can become impossible due to the state-space explosion problem. This section presents an efficient deadlock-checking method which does not heavily depend on the size of the state space. We start by formally defining deadlock.

Definition 12 (Deadlock-freedom for PEPA). Let the P/T structure underlying any given PEPA model be ⟨N, m0⟩.
1. A deadlock of the model or the underlying P/T structure is a state in the state space which cannot enable any transition, i.e. cannot perform any activity.
2. The model or the structure ⟨N, m0⟩ is deadlock-free if it has no deadlock.

Clearly the numerical form state space of a PEPA model is the same as the reachability set of the P/T structure underlying the model, so it is equivalent to give the deadlock definition in the context of the model or the corresponding P/T structure.

If activity α is disabled in state m, this can be expressed as m ≱ C^Pre[·, α], or equivalently: there exists P ∈ pre(α) such that m[P] < C^Pre[P, α]. If this condition holds for all activities in A_label, i.e. there is no activity that can be enabled at m, then m is a deadlock. The following theorem gives a mathematical characterisation of deadlock-free models.

Theorem 3 (Theorem 30, [24]). Let S be a P/T system. If there is no (integer) solution to

    m − C·σ̄ = m0,
    m, σ̄ ≥ 0,
    ∀α ∈ A_label ∃P ∈ pre(α): m[P] < C^Pre[P, α],

then S is deadlock-free.

According to this theorem, to decide whether there is a deadlock, it is necessary to compare each state in the state space with each column of the pre activity matrix. This is not an efficient way to find a deadlock, especially for large scale PEPA models.

All elements of m are nonnegative integers, and for all P and α, any entry C^Pre[P, α] is 1 or 0. So

    m[P] < C^Pre[P, α] ⟺ m[P] = 0 and C^Pre[P, α] = 1.

Clearly, a state m ≥ 1 cannot be a deadlock, since then m ≥ C^Pre[·, α] for any α. Thus, only a state m which contains zeros can possibly be a deadlock. This observation is very helpful but not enough for deadlock-checking: all states with zeros in some entries have to be found and checked, which is not always feasible. Theorem 2 specifies the structure of the state space, but it requires the condition of "liveness" in advance, which already guarantees deadlock-freedom. In the following subsection, an equivalent deadlock-checking theorem for a class of PEPA models is given, which allows equivalent deadlock-checking in the linearised state space.

6.1 Equivalent Deadlock-Checking
Before stating our main results, we first list several lemmas which are used in the proof of our theorem.
The reachability set of a P/T net can be regarded as a directed graph: each state is a node, and a transition from one state to another is a directed edge between the nodes. A directed graph is strongly connected if it contains a directed path from u to v and a directed path from v to u for every pair of vertices u, v. A graph is called connected if every pair of distinct vertices in the graph can be connected through some path. Obviously, a strongly connected graph is a connected graph. The two definitions have been introduced for nets (see [18]). For a net, the following lemma provides two sufficient conditions for strong connectedness.

Lemma 6 (Property 6.10, page 217, [6]). Let N be a net and C its incidence matrix.

1. If N is connected, consistent and conservative, then it is strongly connected.
2. If N is live and bounded, then it is strongly connected and consistent.

If N is the P/T structure underlying a PEPA model, these conditions can be simplified.

Proposition 4. Suppose S = ⟨N, m0⟩ is a P/T system underlying a PEPA model.

1. If N is consistent, then the state space is strongly connected.
2. If S is live, then N is strongly connected and consistent.

Proof. By Proposition 1 and Corollary 1 the P/T structures underlying PEPA models are conservative and bounded. Note that each state in the state space is reachable from the initial state, i.e. the state space is connected. So according to Lemma 6, Proposition 4 holds. □
Lemma 7 (Theorem 6.19, page 223, [6]). Let S be a bounded strongly connected EQ system. Then S is live if and only if it is deadlock-free.

Lemma 8 (Theorem 6.22, page 225, [6]). If S is a live EQ system, then for any m_a, m_b ∈ LRS_SE(S), RS(N, m_a) ∩ RS(N, m_b) ≠ ∅.

This lemma implies that there are no spurious deadlocks in live EQ systems, i.e. there are no deadlocks in LRS_SE(S).

Theorem 4. If the P/T structure S underlying a PEPA model is a consistent, EQ system, then

1. LRS_SE(S) is deadlock-free ⟺ RS(S) is deadlock-free.
2. LRS_SER(S) is deadlock-free ⟺ RS(S) is deadlock-free.
3. LRS_Pf(S) is deadlock-free ⟺ RS(S) is deadlock-free.
22
J. Ding and J. Hillston
4. LRSPsf (S) is deadlock-free ⇐⇒ RS(S) is deadlock-free. Proof. It is easy to see that each “=⇒” holds because LRSPsf (S) ⊃ LRSPf (S) ⊃ LRSSER (S) ⊃ LRSSE (S) ⊃ RS(S). Now we show that each “⇐=” holds. Note that S is consistent, and by Proposition 4, S is strongly connected. Then according to Lemma 7, RS(S) is deadlockfree implies that S is live. Since S is an EQ system, then by Lemma 8, LRSSE (S) is deadlock-free. Since now we have LRSSE (S) = LRSSER (S) by Lemma 4, so LRSSER (S) is deadlock-free. Note that by Lemma 3, the conservativeness and consistence of the system imply that LRSPf (S) = LRSPsf (S) = LRSSER (S). So LRSPf (S) and LRSPsf (S) are deadlock-free.
6.2
Deadlock-Checking Algorithm in LRSPsf
Theorem 4 allows us to check the corresponding linearised state space to determine whether a consistent and EQ model has deadlocks. Note that “consistent” and “EQ” can be efficiently checked as properties of the activity matrix. As previously mentioned, an activity α is disabled in m if and only if there is a P such that m[P ] < CPre [P, α] which must imply m[P ] = 0 and CPre [P, α] = 1. Thus only states m with zeros in some places can possibly be a deadlock. Based on this idea, we provide a deadlock-checking algorithm, see Algorithm 1. In this algorithm, K(α) is theset of vectors that cannot enable α. The intersected set of all K(α), i.e. K = α∈Alabel K(α), is the deadlock candidate set, in which each vector cannot fire any activity. K LRSPsf is used to check whether the deadlock candidates are in the linearised state space LRSPsf . Algorithm 1. Deadlock-checking in LRSPsf 1: for all α ∈ Alabel do 2: if α is an individual activity then 3: K(α) = {m ∈ |D| | m[P ] = 0, CPre [P, α] = 1}
4: 5:
// where {P } = pre(α)
else if α is a shared activity then K(α) = {m ∈ |D| | m[P ] = 0, CPre [P, α] = 1} P ∈pre(α) 6: end if 7: end for 8: K = K(α)
α∈Alabel
9: If K LRSPsf = ∅, then LRSPsf is deadlock-free. Otherwise, LRSPsf at least has one deadlock.
Since this algorithm depends on the system structure and the equation-based state representation the computational complexity is much reduced compared
Structural Analysis for Stochastic Process Algebra Models
23
with state space exploration. It is important to note that although Theorem 4 requires the conditions of consistent and EQ, Algorithm 1 is free from these restrictions since it deals with the linearised state space. That means, for any general PEPA model with or without the consistency and EQ conditions, if the generalised state space has no deadlocks reported by Algorithm 1, then the model has no deadlocks. However, if it reports deadlocks in the generalised state space, it is not possible to tell whether there is a deadlock in the model, except for a consistent and EQ model. Nevertheless, the algorithm provides a method which can tell when and how a system structure may lead to deadlocks. We should point out that each entry of each numerical state (regardless of whether it is in the state space or the linearised state space) is an integer bounded between zero and the population of the corresponding component type. So all the sets appearing in this algorithm are finite. Thus, this algorithm is computable. 6.3
Examples
Example 1: always deadlock-free. Recall Model 1, with system equation } P rovider1 [N ]. U ser1 [M ] {task 1
The activity matrix C and pre activity CPre of Model 1 are ⎞ ⎛ ⎞ ⎛ −1 1 0 100 ⎜ 1 −1 0 ⎟ ⎜0 1 0⎟ Pre ⎟ ⎟ C=⎜ =⎜ ⎝ −1 0 1 ⎠ , C ⎝1 0 0⎠. 1 0 −1 001
According to Algorithm 1, K(task1) = {m | m[U ser1 ] = 0 or m[P rovider1 ] = 0}, K(task2) = {m | m[U ser2 ] = 0},
K(reset) = {m | m[P rovider2 ] = 0},
K = K(task1) ∩ K(task2) ∩ K(rest) = {m | m[U ser1 ] = 0, m[U ser2 ] = 0, m[P rovider2 ] = 0} ∪ {m | m[P rovider1 ] = 0, m[U ser2 ] = 0, m[P rovider2 ] = 0}. From Section 5.2: LRSPsf (S) = {m ∈ N4 | Φm = Φm0 } = {(x1 , M − x1 , y1 , N − y1 )T | x1 , y1 ∈ N, 0 ≤ x1 ≤ M, 0 ≤ y1 ≤ N }.
So K ∩ LRSPsf = ∅. That is to say, the system has no deadlocks. Example 2: deadlocks in some situations. Now we consider a variant of Model 1 in which all actions are shared, and the model has a consistent and EQ P/T structure.
24
J. Ding and J. Hillston
U ser1 U ser2 P rovider1 P rovider2
task1 task2 −1 1 1 −1 −1 1 1 −1
⎛
⎛
⎞ −1 1 ⎜ 1 −1 ⎟ ⎟, C=⎜ ⎝ −1 1⎠ 1 −1
CPre
1 ⎜0 =⎜ ⎝1 0
⎞ 0 1⎟ ⎟. 0⎠ 1
Fig. 3. Activity matrix and pre activity matrix of Model 3
Model 3 def
P rovider1 = (task1 , 1).P rovider2
def
def
P rovider2 = (task2 , 1).P rovider1
U ser1 = (task1 , 1).U ser2
def
U ser2 = (task2 , 1).U ser1
(U ser1 [M1 ] U ser2 [M2 ]) {task (P rovider1 [N1 ] P rovider2 [N2 ]). ,task } 1
2
Figure 3 gives the activity matrix and pre activity matrix of Model 3. First, let us determine LRSPsf (S). Solving CT y = 0, we get a basis of the solution space which forms the rows of Φ: ⎛
⎞ 1 1 0 0 Φ = ⎝0 1 1 0⎠ . 0 0 1 1 m0 = (M1 , M2 , N1 , N2 )T , so
LRSPsf (S) = {m ∈ N4 | Φm = Φm0 } =
⎧ ⎨ ⎩
m∈N 4
⎫ m[U ser1 ] + m[U ser2 ] = M1 + M2 ; ⎬ m[U ser2 ] + m[P rovider1 ] = M2 + N1 ; ⎭ m[P rovider1 ] + m[P rovider2 ] = N1 + N2
Note that each of the semiflows corresponds to an invariant of the model. The first and third express the conservation laws. The second expresses the coupling between the components, i.e. the cooperations ensure that the numbers of local derivatives in the two components always change together. Secondly, we determine the potential deadlock set K. According to Algorithm 1, K(task1) = {m | m[U ser1 ] = 0 or m[P rovider1 ] = 0}, K(task2) = {m | m[U ser2 ] = 0 or m[P rovider2 ] = 0}, K = K(task1)∩K(task2) = {m | m[U ser1 ] = 0, m[U ser2 ] = 0}∪{m | m[U ser1 ] = 0, m[P rovider2 ] = 0} ∪ {m | m[P rovider1 ] = 0, m[U ser2 ] = 0} ∪ {m | m[P rovider1 ] = 0, m[P rovider2 ] = 0}
Structural Analysis for Stochastic Process Algebra Models
25
Finally, the deadlock set in LRSPsf is ⎧ ⎫ m[U ser1 ] + m[U ser2 ] = M1 + M2 ; ⎨ ⎬ K ∩ LRSPsf = m ∈ 4 m[U ser2 ] + m[P rovider1 ] = M2 + N1 ; ⎩ ⎭ m[P rovider1 ] + m[P rovider2 ] = N1 + N2
{m | (m[U ser1 ] = m[P rovider2 ] = 0) ∨ (m[P rovider1 ] = m[U ser2 ] = 0)} (m = (0, M1 + M2 , N1 + N2 , 0)T ∧ M1 + N2 = 0) 4 = m∈ | ∨ (m = (M1 + M2 , 0, 0, N1 + N2 )T ∧ M2 + N1 = 0) (m = (0, M1 + M2 , N1 + N2 , 0)T ∧ M1 = N2 = 0) 4 = m∈ | . ∨ (m = (M1 + M2 , 0, 0, N1 + N2 )T ∧ M2 = N1 = 0)
In other words, for Model 3 with m0 = (M1 , M2 , N1 , N2 )T , only when M1 = N2 = 0 or M2 = N1 = 0, K ∩ LRSPsf = ∅, i.e. the system has at least one deadlock. Otherwise, the system is deadlock-free as long as M1 + N2 = 0 and M2 + N1 = 0.
7
Related Work
The relationship between PEPA models and Petri nets has been studied previously and in one case led to an attempt to develop structural analysis for PEPA models. However, in all the previous work the PEPA models were subjected to the standard structured operational semantics and interpretation as a CTMC [12]. In [21] Ribaudo defined stochastic Petri net semantics for various stochastic process algebras, including PEPA. As in our work here, her approach associated each local derivative with a place and each activity with a transition. To cope with the difference between action types and transitions, she defined a labelling function that maps transition names into action names. Similarly, our approach is to attach distinct labels to each action name, as indicated by the definition of labelled activity. However, since Ribaudo’s approach does not include aggregation as we do, the mapping semantics in [21] does not help with the state-space explosion problem in structural analysis for large scale PEPA models with repeated components. In fact, since the instances of the same component type are considered as distinct copies, their local derivatives are consequently distinct. So the number of places will increase with the number of repeated components, which is in contrast to the fixed number of places in our approach. Moreover, we are able to define transition rate functions that capture the rate information for each system state and each transition. Therefore, our approach is more convenient for quantitative application, such as simulation and fluid approximation for PEPA models. Ribaudo’s work was motivated by investigation into the relationship between formalisms whereas our work is more application-oriented. A higher level mapping between PEPA and stochastic Petri nets is considered in [14]. This work, carried out primarily as investigation of the relative expressiveness of bounded stochastic Petri nets and PEPA models, does not consider
26
J. Ding and J. Hillston
structural analysis for the purposes of qualitative analysis as in this paper. Instead there a consideration of the relationship between place invariants of the Petri net and components of the PEPA model. The work presented here shows that this is related to the conservation law of the PEPA models. The previous work on structural analysis of PEPA models in [8] has some similarities with our approach. However, the class of PEPA considered in [8] is somewhat restricted; in particular no repeated components are allowed, which is also because no aggregation technique is employed. Moreover, the problem of the difference between actions and transitions is not considered. Furthermore, there is no rate information considered in [8], and therefore their considerations cannot be extended to quantitative analysis.
8
Conclusions
This paper has revealed the P/T structure underlying PEPA models. Based on the techniques developed for P/T systems, we have solved the derivation and storage problems of state space for a class of large scale PEPA models. For any general PEPA models, we demonstrated how to find their invariants. These invariants can be used to reason about systems in practice. Our main contribution in this paper, is the structure-based deadlock-checking method for PEPA. This method can efficiently reduce the computational complexity of deadlockchecking and avoid the state-space explosion problem. The philosophy behind our approach, i.e. structure-based and equation-based considerations, can be applied to other problems such as logical model-checking.
References 1. Bernardo, M., Gorrieri, R.: A tutorial on EMPA: A theory of concurrent processes with nondeterminism, priorities, probabilities and time. Theoretical Computer Science 202, 1–54 (1998) 2. Bohnenkamp, H.C., Haverkort, B.R.: Semi-numerical solution of stochastic process algebra models. In: Katoen, J.-P. (ed.) AMAST-ARTS 1999, ARTS 1999, and AMAST-WS 1999. LNCS, vol. 1601, pp. 228–243. Springer, Heidelberg (1999) 3. Calder, M., Duguid, A., Gilmore, S., Hillston, J.: Stronger computational modelling of signalling pathways using both continuous and discrete-state methods. In: Priami, C. (ed.) CMSB 2006. LNCS (LNBI), vol. 4210, pp. 63–77. Springer, Heidelberg (2006) 4. Calder, M., Gilmore, S., Hillston, J.: Modelling the influence of RKIP on the ERK signalling pathway using the stochastic process algebra PEPA. In: Priami, C., Ing´ olfsd´ ottir, A., Mishra, B., Riis Nielson, H. (eds.) Transactions on Computational Systems Biology VII. LNCS (LNBI), vol. 4230, pp. 1–23. Springer, Heidelberg (2006) 5. Clark, G., Hillston, J.: Product form solution for an insensitive stochastic process algebra structure. Performance Evaluation 50(2-3), 129–151 (2002) 6. Colom, J.M., Teruel, E., Silva, M.: Logical properties of P/T system and their analysis. MATCH Summer School (Spain) (Septemper 1998)
Structural Analysis for Stochastic Process Algebra Models
27
7. Ding, J.: Structural and Fluid Analysis of Large Scale PEPA models — with Applications to Content Adaptation Systems. Ph.D. thesis, The Univeristy of Edinburgh (2010) 8. Gilmore, S., Hillston, J., Recalde, L.: Elementary structural analysis for PEPA. Tech. rep. The University of Edinburgh, UK (December 1997) 9. Giua, A., DiCesare, F.: Petri nets structural analysis for supervisory control. IEEE Transactions on Robotics and Automation 10(2), 185–195 (1994) 10. G¨ otz, N., Herzog, U., Rettelbach, M.: TIPP– a language for timed processes and performance evaluation. Tech. rep., Tech. Rep.4/92, IMMD7, University of Erlangen-N¨ ornberg, Germany ( November 1992) 11. Harrison, P.G.: Turning back time in Markovian process algebra. Theor. Comput. Sci. 290(3), 1947–1986 (2003) 12. Hillston, J.: A Compositional Approach to Performance Modelling (PhD Thesis). Cambridge University Press, Cambridge (1996) 13. Hillston, J.: Fluid flow approximation of PEPA models. In: International Conference on the Quantitative Evaluation of Systems (QEST 2005). IEEE Computer Society Press, Los Alamitos (2005) 14. Hillston, J., Recalde, L., Ribaudo, M., Silva, M.: A comparison of the expressiveness of SPA and bounded SPN models. In: Haverkort, B., German, R. (eds.) Proceedings of the 9th International Workshop on Petri Nets and Performance Models. IEEE Computer Science Press, Aachen (September 2001) 15. Hillston, J., Thomas, N.: Product form solution for a class of PEPA models. Performance Evaluation 35(3-4), 171–192 (1999) 16. Kuttler, C., Niehren, J.: Gene regulation in the π-calculus: Simulating cooperativity at the lambda switch. In: Priami, C., Ing´ olfsd´ ottir, A., Mishra, B., Riis Nielson, H. (eds.) Transactions on Computational Systems Biology VII. LNCS (LNBI), vol. 4230, pp. 24–55. Springer, Heidelberg (2006) 17. Lautenbach, K.: Linear algebraic techniques for place/transition nets. In: Brauer, W., Reisig, W., Rozenberg, G. (eds.) APN 1986. LNCS, vol. 254, pp. 142–167. Springer, Heidelberg (1987) 18. Memmi, G., Roucairol, G.: Linear algebra in net theory. In: Brauer, W. (ed.) Net Theory and Applications. LNCS, vol. 84, pp. 213–223. Springer, Heidelberg (1980) 19. Mertsiotakis, V.: Approximate Analysis Methods for Stochastic Process Algebras. Ph.D. thesis, Universit¨ at Erlangen-N¨ urnberg, Erlangen (1998) 20. Priami, C., Regev, A., Shapiro, E., Silverman, W.: Application of a stochastic name-passing calculus to representation and simulation of molecular processes. Inf. Process. Lett. 80(1), 25–31 (2001) 21. Ribaudo, M.: Stochastic Petri net semantics for stochastic process algebras. In: Proceedings of the Sixth International Workshop on Petri Nets and Performance Models. IEEE Computer Society, Washington (1995) 22. Sereno, M.: Towards a product form solution for stochastic process algebras. The Computer Journal 38(7), 622–632 (1995) 23. Silva, M., Colom, J.M., Campos, J., Gamma, C.: Linear algebraic techniques for the analysis of Petri nets. In: Recent Advances in Mathematical Theory of Systems, Control, Networks, and Signal Processing II, pp. 35–42. Mita Press (1992) 24. Silva, M., Teruel, E., Colom, J.M.: Linear algebraic and linear programming techniques for the analyisis of place/transition net systems. In: Reisig, W., Rozenberg, G. (eds.) APN 1998. LNCS, vol. 1491. Springer, Heidelberg (1996) 25. Tribastone, M., Gilmore, S., Hillston, J.: Scalable Differential Analysis of Process Algebra Models. IEEE Transactions on Software Engineering (to appear, 2010)
Verification of Common Interprocedural Compiler Optimizations Using Visibly Pushdown Kleene Algebra Claude Bolduc and B´echir Ktari D´epartement d’informatique et de g´enie logiciel Universit´e Laval, Qu´ebec, QC, G1K 7P4, Canada
[email protected],
[email protected]
Abstract. Visibly pushdown Kleene algebra is an algebraic system using only propositional reasoning while still being able to represent wellknown constructs of programming languages (sequences, alternatives, loops and code blocks) in a natural way. In this paper, this system is used to verify the following interprocedural compiler optimizations: interprocedural dead code elimination, inlining of functions, tail-recursion elimination, procedure reordering and function cloning. The proofs are equational and machine-verifiable.
1
Introduction
Compilers usually do optimizing transformations on programs, for example to make them faster. Some transformations can be applied by looking at a single function (these are called intraprocedural optimizations), but others need to analyze the entire program (these are called interprocedural optimizations). Writing correct optimizing transformations is a difficult (and thus error-prone) task. Some works in certifying compilers [5,9,10] and translation validation [12] attacked this problem. Of these attempts, the most powerful and flexible system is Proof-Carrying Code (PCC) [10]. In particular, PCC allows any logical system to be used as the basis of the system. Much of the work on PCC has focused on fragments of first-order predicate logic or some of their extensions (see for example [11]). Although first-order predicate logic is expressive and well known, we think it is an heavyweight solution. In this paper, we evaluate a possible lightweight alternative for the verification of compiler optimizations: visibly pushdown Kleene algebra (VPKA) [3]. VPKA is an algebraic system that represents equality between visibly pushdown languages, a subclass of context-free languages. The advantages of using VPKA instead of first-order predicate logic are its use of propositional reasoning while still being able to represent well-known constructs of programming languages (sequences, alternatives, loops and code blocks) in a natural way. Indeed, the generated proofs are formal, equational proofs. Our goal is that these proofs should be understandable by programmers. M. Johnson and D. Pavlovic (Eds.): AMAST 2010, LNCS 6486, pp. 28–43, 2011. c Springer-Verlag Berlin Heidelberg 2011
Verification of Common Interprocedural Compiler Optimizations
29
It has been shown that VPKA can be used for the interprocedural analysis of (mutually) recursive programs [3]. So, VPKA can be used both to do interprocedural analysis of a program1 and to verify its optimization. VPKA is a conservative extension of Kleene algebra (KA) [4], an axiomatic system that axiomatises equality between regular languages. Results concerning the verification of several intraprocedural optimizations in Kleene algebra with tests (an extension of KA) have already been obtained by Kozen and Patron [8]. This paper shows that VPKA extended with tests is suitable to verify some interprocedural optimizations. In the first place, we present the theoretical basis behind VPKA: visibly pushdown regular expressions.
2
Visibly Pushdown Regular Expressions (VPRE)
Visibly pushdown regular expressions are an extension of regular expressions with a restricted set of context-free grammars. They are an alternative way to represent visibly pushdown languages. The definition of VPRE given here is slightly different from the original one [3], but it is equivalent. Visibly pushdown languages (VPL) were introduced by Alur and Madhusudan [1]. A word in a VPL is thought of as a string representing an execution of an interprocedural program. So, the alphabet is divided into three disjoint finite sets Σi , Σc and Σr which represent, respectively, the set of internal actions, the set of calls and the set of returns of a program. The idea behind such a partition is to fix, according to the current input symbol, when a pushdown automaton can push (only for call actions) and pop (only for return actions). The class of languages defined by such pushdown automata is called VPL. This is a strict subclass of deterministic context-free languages and a strict superclass of regular languages and balanced languages [1]. For example, if a ∈ Σi , c ∈ Σc and r ∈ Σr , then {cn rn | n ∈ IN} is a VPL, but not {an rn | n ∈ IN}. Some set operations are now defined. The empty word is denoted by ε and the set of finite words on Σi ∪Σc ∪Σr by (Σi ∪Σc ∪Σr )∗ . Let S, T ⊆ (Σi ∪Σc ∪Σr )∗ . The concatenation operation is defined as usual: S • T := {st | s ∈ S ∧ t ∈ T }. The power S n with respect to • is defined inductively by S 0 := {ε} and S n+1 := S • S n . The Kleene star operator is defined by S ∗ := (∪ n | n ∈ IN : S n ), which is a quantification over ∪ having dummy n, range n ∈ IN and body S n . Using the above operators, any regular language (RL) can be generated from ∅, {ε}, and {a} for a ∈ Σi ∪ Σc ∪ Σr . VPLs differ from RLs mostly for their wellmatched words2 . So, a way is needed to generate any VPL that contains only wellmatched words. These words are better described via context-free grammars. Thus, a restricted set of context-free grammars are introduced. Let Σi , Σc and Σr be disjoint finite sets of atomic elements. Let V be a finite set of symbols containing symbols S and T , and let N (V ) := {P(X,Y ) | X, Y ∈ V }. A well-matched visibly pushdown grammar (WMVPG) over Σi , Σc and Σr 1 2
For example, verifying if a program respects a given file manipulation security policy. Intuitively, a word is said to be well matched if every call action in the word has a matching return action in it and vice versa.
30
C. Bolduc and B. Ktari
is a tuple G := (N (V ), P(S,T ) , →) where N (V ) is the set of nonterminals, P(S,T ) is the starting nonterminal and → is a finite set of explicit rewrite rules of the form – P(X,Y ) → ε, where X, Y ∈ V ; – P(X,Y ) → a, where a ∈ Σi and X, Y ∈ V ; – P(X,Y ) → c P(Z,W ) r, where c ∈ Σc , r ∈ Σr and W, X, Y, Z ∈ V and implicit rewrite rules P(X,Y ) → P(X,Z) P(Z,Y ) for each X, Y, Z ∈ V . The language generated by G is the set of words (well-matched words, in fact) that can be derived by the rewrite rules of G when starting with P(S,T ) . Nonterminals of the form P(X,Y ) are used in WMVPGs to enforce the definition of a “starting element” X and an “ending element” Y for each nonterminal. Note how useful this is when defining the “cut” in implicit rewrite rules. Note also that the implicit rewrite rules allow multiple choices for a derivation of a word. Among those choices, one can force himself to always use an explicit rewrite rule to derive the first nonterminal generated by an implicit rewrite rule. Intuitively, this strategy would be the same as defining the implicit rewrite rules in WMVPGs by P(X,Z) → w P(Y,Z) for each X, Y, Z ∈ V and w ∈ {ε} ∪ Σi ∪ Σc × N (V ) × Σr whenever the explicit rewrite rule P(X,Y ) → w exists. Other choices are possible and we will see that the multiple choices for a derivation is an essential ingredient to ease program manipulations. We want to use WMVPGs in set expressions. For example, for a WMVPG G and elements a1 , a2 ∈ Σi , an expression like {a1 } • G ∪ {a2 } should be possible. Understand that one only need to know the explicit rewrite rules of G along with its starting nonterminal P(S,T ) since the others components of G can be inferred from them. To ease the writing of lenghty grammars, we define a different notation (a “block” notation) for WMVPGs that not only shows the structure of the explicit rewrite rules in the context of a program but also emphasizes the starting nonterminal. Let G := ({P(x,y) | x, y ∈ V }, P(s,t) , →) be a WMVPG. We write y
y
[ ε ],
x
[ a ],
x
w
y
[ c↓↑r ]
x
z
to respectively represent the explicit rewrite rules P(x,y) → ε,
P(x,y) → a,
P(x,y) → c P(z,w) r . y
y
We call unary block each rule of the form [x ε ] or [x a ] . The labels x and y are respectively called the starting label and the ending label of the block. We also call binary block each rule of the form [x c ↓z ↑w r ]y . The labels x, y, z and w are respectively called the starting label, the ending label, the call label and the return label. Let B be the block notation of the explicit rewrite rules of G. Define B 1 as the set of unary blocks and B 2 as the set of binary blocks. The starting nonterminal t t P(s,t) is used to surround B by writing (|s B |) . Thus, G is abbreviated as (|s B |) .
Verification of Common Interprocedural Compiler Optimizations
31
Here are some examples of languages generated by WMVPGs, using the block notation. Let Σi := {a, b}, Σc := {c, d}, Σr := {r, s} and V := {v, w, x, y, z}. Then, xx
y
(| [ a ] |) = {an | n > 0} , xx
v
z
z
y
yy
(| [ c ↓ ↑ r ], [ ε ] |) = {cn rn | n ∈ IN} ,
z
z
xx v
x yz
x
(| [ c ↓ ↑ r ], [ b ], [ b ], [ d ↓ ↑ s ], [ a ] |) = {a(cda)n b(sr)n | n ∈ IN} . xy
w
y
y
w
x
x
Note from the examples that, for convenience, we write the blocks as a list rather than as a set. However, the list works like a set. We are now ready to define visibly pushdown regular expressions (VPRE). Let Σi , Σc and Σr be disjoint finite sets of atomic elements. Define a grammar pattern of a WMVPG G on Σi , Σc and Σr to be a partial operator obtained by – replacing the terminals in G by variables (placeholders) of the same type (Σi , Σc or Σr ); y y – writing [x 1 ] instead of [x ε ] ; – adding, for convenience, blocks of the form [x 0 ]y . The arity of a grammar pattern is the number of variables it contains. Define G as the set of all grammar patterns on Σi , Σc and Σr . A VPRE is any well-formed expression that can be generated from the base elements 0, 1, {a | a ∈ Σi ∪ Σc ∪ Σr }, the unary operator ∗ , the binary operators · and +, and the operators in G. The language denoted by a VPRE p is noted L(p) and is defined by L(0) := ∅,
L(1) := {ε},
L(a) := {a} for any a ∈ Σi ∪ Σc ∪ Σr ,
and extends over the structure of VPRE where · becomes •, + becomes ∪, and ∗ becomes the set operator ∗ . An operator of G along with its operands become a WMVPG G, and so L(G) is the language generated by G3 .
3
Visibly Pushdown Kleene Algebra (VPKA)
VPKA characterizes the equality of the languages denoted by two VPREs. Before showing the axioms, we recall Kozen’s definition of Kleene algebra [4]. Definition 1 (Kleene algebra). A Kleene algebra (KA) is an algebraic structure (K, +, ·, ∗ , 0, 1) satisfying the following axioms4 . p + (q + r) = (p + q) + r p(q + r) = pq + pr (p + q)r = pr + qr qp + r p → q ∗ r p pq + r p → rq ∗ p 3 4
p(qr) = (pq)r p+q =q+p p+p= p 1 + p∗ p p∗ 1 + pp∗ p∗
p+0=p p0 = 0 = 0p p1 = p = 1p pq ↔p+q =q
Blocks of the form [x 0 ]y do not translate to a rule. They are just omitted in the grammar. Those blocks are a way to identify the fail instruction in programs. In the sequel, we write pq instead of p · q. The increasing precedence of the operators is +, · and ∗ .
32
C. Bolduc and B. Ktari
The axiomatization of VPKA proposed below adds five axioms to KA. It is not the complete axiomatization of [3], but a weaker version. Two axioms are omitted because they are not used in this paper. Note that the complexity of the equational theory of the full axiomatization is EXPTIME-complete [3]. The first two axioms of VPKA represent the explicit rewrite rules of the gramy y mar. Axiom (1) also states that if [x 0 ] ∈ B 1 , then 0 (|x B |) . This adds nothing to the axiomatization since, by Kleene algebra, 0 is the least element of the algebra. The third axiom represents the implicit rewrite rules of the grammar. Axioms (4) and (5) are induction axioms that define an expression (|x B |)y as the least solution for the nonterminal P(x,y) of the inequational system described by the hypotheses of axioms (4) and (5). The inequational system described in axiom (4) (respectively, axiom (5)) is essentially the explicit rewrite rules of the grammar along with the strategy of always using an explicit rewrite rule to derive the first (respectively, second ) nonterminal generated by an implicit rewrite rule. Note that, as solutions for the inequational system, axioms (4) and (5) use expressions of the form s(u,u ) , for u, u ∈ V . Notice the use of the functions F∗B and B∗B in axioms (4) and (5) to restrict the inequational system to solve. They are used because it happens that some rules of B are never used for any valid derivation of P(x,y). So, F∗B (the “forward strategy”) and B∗B (the “backward strategy”) approximate the needed blocks. They are the least fixed points of the monotone functions F1B : 2V ×V → 2V ×V and B1B : 2V ×V → 2V ×V defined for any T ⊆ V × V by: y
F1B (T ) := T ∪ {(y, y ) | (∃ z, m | (z, y ) ∈ T : [z m ] ∈ B 1 )} y ∪ {(y, y ), (w, w ) | (∃ z, c, r | (z, y ) ∈ T : [z c ↓w ↑w r ] ∈ B 2 )} , z 1 1 BB (T ) := T ∪ {(y, y ) | (∃ z, m | (y, z) ∈ T : [y m ] ∈ B )} ∪ {(y, y ), (w, w ) | (∃ z, c, r | (y, z) ∈ T : [y c ↓w ↑w r ]z ∈ B 2 )} . Definition 2 (Visibly pushdown Kleene algebra). Let Σi , Σc and Σr be disjoint finite sets of atomic elements. A VPKA is a structure (K, +, ·, ∗ , 0, 1, G) generated by Σi , Σc and Σr under the axioms of KA and such that the following y additional laws hold for each expression (|x B |) , on a finite set of symbols V , representing an operator of G along with its operands, and for all x, y, y , u, u ∈ V and s(u,u ) ∈ K: y
y
for [ m ] ∈ B1 ,
m (| B |), x
y
w
w
x
y
y
x
y
z
(2)
y
(| B |) · (| B |) (| B |) , x
y
for [ c ↓ ↑ r ] ∈ B2 ,
c · (| B |) · r (| B |), z
(1)
x
x
(3)
Verification of Common Interprocedural Compiler Optimizations
33
∧ u, u | (u, u ) ∈ F∗B ({(x, y)}) : u
(∧ m | [ m ] ∈ B1 : m s(u,u ) ) u
v
∧ (∧ m, v | [ m ] ∈ B1 : m · s(v,u ) s(u,u ) ) u
w
(4)
u
∧ (∧ c, z, r, w | [ c ↓ ↑ r ] ∈ B2 : c · s(z,w) · r s(u,u ) ) u
z
w
v
∧ (∧ c, z, r, w, v | [ c ↓ ↑ r ] ∈ B2 : c · s(z,w) · r · s(v,u ) s(u,u ) ) z
u
y
→ (| B |) s(x,y) ,
x
∧ u, u | (u, u ) ∈ B∗B ({(x, y)}) : u
(∧ m | [ m ] ∈ B1 : m s(u,u ) ) u
u
∧ (∧ m, v | [ m ] ∈ B1 : s(u,v) · m s(u,u ) ) v
w
(5)
u
∧ (∧ c, z, r, w | [ c ↓ ↑ r ] ∈ B2 : c · s(z,w) · r s(u,u ) ) u
z
w
u
∧ (∧ c, z, r, w, v | [ c ↓ ↑ r ] ∈ B2 : s(u,v) · c · s(z,w) · r s(u,u ) ) z
v
y
→ (| B |) s(x,y) . x
Using axioms (1) to (4), this helpful theorem can be proved [3]: y y (| B |) = ( m | [ m ] ∈ B 1 : m) x
x
y v + ( m, v | [ m ] ∈ B 1 : m · (| B |)) x
v
(6)
w w y + ( c, z, r, w | [ c ↓ ↑ r ] ∈ B 2 : c · (| B |) · r) x
z
z
y w w v + ( c, z, r, w, v | [ c ↓ ↑ r ] ∈ B 2 : c · (| B |) · r · (| B |)) . x
z
z
v
Of course, a similar theorem holds for the “backward strategy” version of (6). 3.1
Visibly Pushdown Kleene Algebra with Tests
Tests are an essential ingredient to analyze imperative programs. We add them in a way similar to Kleene algebra with tests [6]: a Boolean algebra (B, +, ·, , 0, 1) generated by atomic tests B is added to VPKA, where B ⊆ K. Let TestsB be the set of all test expressions that can be generated from B and the operators of Boolean algebra. For example, if a, b ∈ B, then b is a test expression of TestsB and so is b · a + a. Note that 0 ∈ TestsB and 1 ∈ TestsB . We would like to use tests in WMVPGs and operands of grammar patterns. How can we interpret a test? It seems natural to think of tests as a subset of
34
C. Bolduc and B. Ktari
internal actions. So, we extend the definition of explicit rewrite rules to allow tests. We allow additional explicit rewrite rules of the form P(x,y) → b where b ∈ y TestsB (its unary block notation is [x b ] ). This may seem disturbing at first since grammar patterns do not allow complex expressions in unary blocks. However, tests enjoy an interesting property when B is finite: they can be represented by a disjoint sum of Boolean atoms. So, they are easy to deal with in unary blocks. The axioms of VPKA with tests are the axioms of Kleene algebra, Boolean algebra and axioms (1) to (5) adapted in a natural way to allow unary blocks with tests. Furthermore, to help program analysis, we can also add some axioms to VPKA with tests (this is not done here, but it is useful in [2]). 3.2
Metablocks
The forms of explicit rewrite rules (the blocks in (| |)-expressions) for a WMVPG are simple, but it can be tedious to write such rewrite rules for a large expression. To simplify this process, we define metablocks which are abbreviations (similar to regular expressions) of a list of blocks. Thus, metablocks allow us to write more complex explicit rewrite rules to ease program manipulation. Let Σi , Σc , Σr and B be finite sets. Let V be a finite set of symbols. A metablock is an expression [x e ] y where x, y ∈ V and e is an expression of the set MBexp that is defined as the smallest set containing a for each a ∈ Σi ∪TestsB , (c ↓z ↑w r) for each c ∈ Σc , r ∈ Σr and z, w ∈ V , and closed under “operators” ·, + and ∗ . Note that metablocks are written with bigger square brackets than blocks. A metablock is reduced to a list of unary and binary blocks by the function mb defined inductively by: y
y
– mb([[x a ] ) := [x a ] for a ∈ Σi ∪ TestsB and x, y ∈ V ; y y – mb([[x (c ↓z ↑w r) ] ) := [x c ↓z ↑w r ] for c ∈ Σc , r ∈ Σr and x, y, z, w ∈ V ; y z y – mb([[x p · q ] ) := mb([[x p ] ), mb([[z q ] ) for x, y ∈ V and a fresh label z (a label not in V ); y y y – mb([[x p + q ] ) := mb([[x p ] ), mb([[x q ] ) for x, y ∈ V ; y y z z y ∗ y – mb([[x p ] ) := [x 1 ] , mb([[x p ] ), mb([[x p ] ), mb([[z p ] ), mb([[z p ] ) for x, y ∈ V and a fresh label z. It is possible to extend the laws of (| |)-expressions to (| |)-expressions containing metablocks. See [2] for more details.
4
Verification of Common Interprocedural Compiler Optimizations
Let us examine how some interprocedural compiler optimizations can be represented with VPKA. First, an encoding of unoptimized and optimized programs in VPKA is necessary. This is done in three steps (as in [3]): 1. Define the desired abstraction for atomic program instructions and variables through the sets Σi , Σc , Σr and B;
Verification of Common Interprocedural Compiler Optimizations
35
2. Encode the control flow by expressions p and q respectively for the unoptimized program and the optimized one; 3. Encode some optimization assumptions5 and the desired semantics of atomic program instructions, variables and variable passing mechanism by a set of equational hypotheses H. These steps are semiautomatic. In particular, the encoding in step 2 of the standard programming constructs gives an expression of MBexp: s; t := s · t,
if b then s else t := b · s + b · t,
while b do s := (b · s)∗ · b,
where b is a test and s and t are programs; whereas any function f gives a τ metablock [ f s ] where s is the body of the function and τ is a label used to indicate the end of the body of any function. For more flexibility, the encoding of atomic instructions, call instructions and variables is left at the user’s discretion. To ease the use of functions in metablocks, the following abbreviation is used when “calling” functions in VPKA: z := ( z ↓z ↑τ z ) for a label z, a call action z ∈ Σc and a return action z ∈ Σr . The encoding steps 1 to 3 will become clear when dealing with the program of Fig. 1a. The verification of a sequence of interprocedural optimizing transformations is done by verifying if H → p = q is a theorem of VPKA. The optimization is considered safe if and only if the formula is a theorem of VPKA (after all, the formula expresses that the two programs are “semantically” equivalent under hypotheses H). Our verification deals only with halting programs, since nonhalting programs may lead to incorrect results. Halting programs are obtained by restricting recursive procedures and loops to simple cases. We now present some interprocedural compiler optimizations and we show how they can be verified using VPKA. 4.1
Interprocedural Dead Code Elimination
Dead code elimination is the removal of unreachable instructions. When dealing with procedures, it is possible that all actual calls of a procedure in a program will not allow it to reach a set of instructions. There is a common case when this situation happens: the case where the preconditions of a procedure are not respected. Simple Non Recursive Example. Take the C program of Fig. 1a. In this program, the pointer n cannot be null. So, the compiler can remove the test n == NULL without modifying the behaviour of the code. This will speed up the application (not too much in this example, but it can be very handy in real situations). However, note that it is useful for a programmer to write this test to have a bullet-proof code and ease maintenance over time. So, it is not the task of the programmer to remove this test, but the task of the compiler! 5
Optimization assumptions will be used in Sect. 4.2 and Sect. 4.3.
36
C. Bolduc and B. Ktari
int main(void) { int x = 2;
int main(void) { int x = 2;
increment(&x); increment(&x);
increment(&x); increment(&x);
}
return 0; /* Success. */
void increment(int∗ n) { if (n == NULL) printf ( stderr , ”Error”); else ∗n += 1; } (a) Unoptimized program
}
return 0; /* Success. */
void increment(int∗ n) {
}
∗n += 1; (b) Optimized program
Fig. 1. Simple non-recursive program example
Figure 1b is the C program without the test6 . Using visibly pushdown Kleene algebra, it is possible to prove that the optimized program is equivalent to the unoptimized one. Let b be the representation of the test n != NULL. Also, let a represent the test that the address of the variable x exists. Let – – – –
p be the internal action printf ( stderr , ”Error”); q be the internal action ∗n += 1; s be the internal action int x = 2; t be the internal action return 0. Of course, it is a return action in the code, but we treat it as an internal action followed by a return action (going out of context). This will be clear in the encoding of the program’s control flow.
Since s creates the local variable x, we have the hypothesis s = s · a. We also use the hypothesis a · t = t since the memory for x is still allocated just before the return of the context (it is a local variable). Moreover, the action ∗n += 1 modifies neither the test n != NULL nor the existence of x. So, the hypothesis a · b · q = q · a · b is correct in this program. For the representation of the function calls, let m and m be respectively the call of the main function and its return. Also, let f and f be respectively the call of the increment function and its return. Since the variable x is passed to the function increment by a pointer, we have the following valid hypotheses: a · f = f · a · b and a · b · f = f · a. 6
Of course, in this code the remaining increment function should be inlined by the compiler to make it more efficient. We will talk about this issue in Sect. 4.2.
Verification of Common Interprocedural Compiler Optimizations
37
With these actions and hypotheses, the control flow of the two programs can easily be encoded in visibly pushdown Kleene algebra. The first program gives τ
τ
ττ
(| [ m ] , [ s · f · f · t ] , [ b · p + b · q ] |) . m
m m
f
So, we must show that, under the preceding hypotheses, τ
τ
ττ
τ
τ
ττ
(| [ m ] , [ s· f · f · t ] , [ b ·p +b·q ] |) = (| [ m ] , [ s· f · f ·t ] , [ q ] |) . (7)
m m
m
m
m m
f
f
First, note that no valid derivation can start with τ . So, by (6), τ
τ
ττ
(| [ m ] , [ s · f · f · t ] , [ b · p + b · q ] |) = 0 . τ m
m
(8)
f
Now, let us prove (7). τ
= =
= = = = = = = =
= = =
τ
τ
τ
(|m [ m m ] , [ m s · f · f · t ] , [ f b · p + b · q ] |) {{ Metablock & Equations (6) and (8) }} m · (|m [ m m ] τ , [ m s · f · f · t ] τ , [ f b · p + b · q ] τ |)τ · m {{ Metablocks & Equations (6) and (8), multiple times }} τ τ τ τ m · s · f · (|f [ m m ] , [ m s · f · f · t ] , [ f b · p + b · q ] |) · f τ τ τ τ · f · (|f [ m m ] , [ m s · f · f · t ] , [ f b · p + b · q ] |) · f · t · m {{ Metablocks & Equations (6) and (8), multiple times }} m · s · f · (b · p + b · q) · f · f · (b · p + b · q) · f · t · m {{ Hypothesis: s = s · a }} m · s · a · f · (b · p + b · q) · f · f · (b · p + b · q) · f · t · m {{ Hypothesis: a · f = f · a · b }} m · s · f · a · b · (b · p + b · q) · f · f · (b · p + b · q) · f · t · m {{ Distributivity of · over + }} m · s · f · (a · b · b · p + a · b · b · q) · f · f · (b · p + b · q) · f · t · m {{ Contradiction of tests & Idempotency of tests }} m · s · f · (a · 0 · p + a · b · q) · f · f · (b · p + b · q) · f · t · m {{ Zero of · & Identity of + }} m · s · f · a · b · q · f · f · (b · p + b · q) · f · t · m {{ Hypotheses: a · b · q = q · a · b, a · b · f = f · a and a · f = f · a · b }} m · s · f · q · f · f · a · b · (b · p + b · q) · f · t · m {{ Distributivity of · over + & Contradiction of tests & Idempotency of tests }} m · s · f · q · f · f · (a · 0 · p + a · b · q) · f · t · m {{ Zero of · & Identity of + }} m · s · f · q · f · f · a · b · q · f · t · m {{ Hypotheses: a · b · q = q · a · b, a · b · f = f · a and a · t = t }} m · s · f · q · f · f · q · f · t · m {{ Equations (6) and (8) }} m · s · f · (|f [ m m ] τ , [ m s · f · f · t ] τ , [f q ]τ |)τ · f τ τ τ τ · f · (|f [ m m ] , [ m s · f · f · t ] , [f q ] |) · f · t · m
38
C. Bolduc and B. Ktari
int main(void) { int x, result ;
}
int main(void) { int x, result ;
do { /* Ask a number. */ scanf(”%d”, &x); } while(x < 0)
do { /* Ask a number. */ scanf (”%d”, &x); } while(x < 0)
result = fact(x ); printf (”%d”, result );
result = fact(x ); printf (”%d”, result );
return 0; /* Success. */
int fact ( int n) { int prec fact ; if (n < 0) return −1; /*Error. */ else { if (n == 0) return 1; else { prec fact = fact(n−1); return n ∗ prec fact ; } } } (a) Unoptimized program
}
return 0; /* Success. */
int fact ( int n) { int prec fact ;
if (n == 0) return 1; else { prec fact = fact(n−1); return n ∗ prec fact ; } } (b) Optimized program
Fig. 2. Complex recursive program example
=
{{ Metablocks & Equations (6) and (8), multiple times }} τ τ τ τ m · (|m [ m m ] , [ m s · f · f · t ] , [f q ] |) · m {{ Metablock & Equations (6) and (8) }} (|m [ m m ] τ , [ m s · f · f · t ] τ , [f q ]τ |)τ
=
Complex Recursive Example. The verification of interprocedural dead code elimination works also in presence of recursive functions and user’s inputs when considering only halting cases. For example, take the C program of Fig. 2a that calculates the factorial of a number. In that program, the test n < 0 in the function fact cannot be true. So, it can be removed as in Fig. 2b. We can prove that these two programs are equivalent using visibly pushdown Kleene algebra. If we are only interested in a bounded value for n, then the proof follows a pattern similar to the proof of page 37. However, the unbounded case for n is also provable if we add some axioms to VPKA. See [2] for the proof.
Verification of Common Interprocedural Compiler Optimizations
39
Dead Function Elimination. A special case of interprocedural dead code elimination is the elimination of unused functions7 in the source code. This elimination is trivial using VPKA since it is an easy theorem of this algebraic system: any unreachable block can be removed from the list of blocks. One can see intuitively why by recalling the function F∗B of axiom (4) of page 33. 4.2
Inlining of Functions
The inlining of functions is an optimization that replaces a function call site with the body of the callee. This may improve the time performance of the program, but may also increase the size of the resulting program. Note that not every function can be inlined. For example, recursive functions may not always be inlined. The function increment of the program of Fig. 1b can be inlined. There are different ways to handle the arguments of a function while inlining it. One easy way is to add an explicit assignment instruction before the body of the function. For the example program of Fig. 1b, inlining would give the program of Fig. 3.
int main(void) { int x = 2; int∗ n; /* Inlining the first call to increment. */ n = &x; ∗n += 1; /* Inlining the second call to increment. */ n = &x; ∗n += 1; }
return 0; /* Success. */
Fig. 3. Inlining of increment in the simple non-recursive program example
Let u be the internal action n = &x. Using the alphabets defined on page 36, the control flow of the previous program can be represented by the VPKA expression: τ
ττ
(| [ m ] , [ s · u · q · u · q · t ] |) .
m m
m
The formalization must rely on a compiler’s trusted functionality that tells if a particular function can be inlined. When knowing if a function can be inlined, it 7
An unused function is a function that is never called by the program when started in the main function.
40
C. Bolduc and B. Ktari
becomes an assumption that can be expressed in VPKA for this particular case. Then, the verification can be done for the entire program. So, the assumption that the function increment can be inlined in the program of Fig. 1b is just the equational hypothesis:
f · q · f = u · q .
So, using the preceding hypothesis, we can prove that τ
τ
ττ
τ
ττ
(| [ m ] , [ s · f · f · t ] , [ q ] |) = (| [ m ] , [ s · u · q · u · q · t ] |)
m m
m
m m
f
m
by doing τ
τ
τ
τ
(|m [ m m ] , [ m s · f · f · t ] , [f q ] |) {{ Metablock & Equation (6) & Same property as (8) m · (|m [ m m ] τ , [ m s · f · f · t ] τ , [f q ]τ |)τ · m {{ Metablock & Equation (6) & Same property as (8) τ τ τ τ m · s · f · (|f [ m m ] , [ m s · f · f · t ] , [f q ] |) · f τ τ τ τ · f · (|f [ m m ] , [ m s · f · f · t ] , [f q ] |) · f · t · m {{ Metablock & Equation (6) & Same property as (8) m · s · f · q · f · f · q · f · t · m {{ Inlining hypothesis }} m · s · u · q · u · q · t · m {{ Metablock & Equation (6) & Same property as (8) τ τ τ m · (|m [ m m ] , [ m s · u · q · u · q · t ] |) · m {{ Metablock & Equation (6) & Same property as (8) τ τ τ (|m [ m m ] , [ m s · u · q · u · q · t ] |) .
= =
= = = =
4.3
}} }} }}
}} }}
Tail-Recursion Elimination
Tail-recursion elimination is an optimization that allows a compiler to rewrite a tail-recursive function8 as a non-recursive function having an iterative form. Like for inlining of functions, the formalization of a program in this situation relies on a compiler’s trusted functionality that tells if a particular function is in tail-recursive form. When knowing it, it becomes an assumption for this particular case. The assumption is simply an equality between the body of the metablock representing the tail-recursive function and the body of its iterative form. Then, the verification can be done for the entire program. 4.4
Procedure Reordering
A common and easy interprocedural optimization is to reorder functions based on their call relationship. The central idea of this optimization is to minimize 8
A function is tail-recursive if the only recursive calls it contains are tail-recursive. A function call is tail-recursive if there is nothing to do after the function returns except returning its value.
Verification of Common Interprocedural Compiler Optimizations
41
instruction cache thrashing by having the caller and the callee near each other in the program file. Procedure reordering is easy to deal with VPKA since it is only an application of the commutativity of blocks. We gave hints of this law, in the examples of page 31, when saying that the list of blocks works like a set. 4.5
Function Cloning
At first, cloning a function may seem a waste of memory. However, cloning a function allows a compiler to specialize a function (and thus optimize it) for a specific subset of the call sites of this function. For example, it can be useful to remove unnecessary tests in a function for certain call sites (recall the interprocedural dead code elimination of Sect. 4.1). So, function cloning is useful in presence of other optimizations. Correctness of function cloning is easy to show in VPKA. See [2] for the proof.
5
Discussion
In this paper, each verification is made only for a specified optimizing transformation. However, our reasoning can be used through a bigger verification (involving several optimizing transformations) to make sure the optimized program still behaves the same as the unoptimized program. It allows, for example, to verify that the program of Fig. 1a can be optimized to the program of Fig. 4. This is done by using the verifications of Sect. 4.1 and Sect. 4.2, and the intraprocedural verification of copy propagation of [8].
int main(void) { int x = 2; x += 1; x += 1; }
return 0; /* Success. */
Fig. 4. Optimized version of the simple non-recursive program example of Fig. 1a
Also, we presented how to verify several interprocedural optimizations in VPKA. However, sometimes, we had to rely on a compiler’s trusted functionality to generate equational hypotheses apart from the standard hypotheses coming from the semantics of the programming language. Such dependence on some compiler’s trusted functionalities may seem a flaw at first, but it appears to us more like a winning situation: VPKA does not require to reinvent the wheel; VPKA may benefit from already trusted analyzers that can merge well in our
42
C. Bolduc and B. Ktari
verification process. Verifications in VPKA are done only in situations where it is useful (where other analyzers cannot succeed). Note that the verification of compiler optimizations in pure Kleene algebra has the same dependence [8]. One may ask why VPKA must rely on equational hypotheses to encode inlining assumptions or tail-recursion elimination. The reason is linked with the notion of equality in VPKA: it characterizes the equality of the languages denoted by two visibly pushdown regular expressions. In other words, equality represents trace equivalence and, in this equality, each call action and return action is important. Those actions cannot be “deleted” or “forgotten” from the expression. So, equational hypotheses are necessary in VPKA to extend the notion of equality around specified call actions and return actions. Also, one may ask if some of our reasonings in this paper may lead to the dead variable paradox of Kozen and Patron [8]. In particular, for the reasoning of Sect. 4.1, the test a seems a dangerous one since it is linked to the existence of a variable. However, we do not have such a paradox here. The test that verifies if the address of the variable x exists is a property of the local state of the computation unlike the proposition “x is a dead variable”. In particular, the test a commutes with any test involving x that is a property of the local state of the computation because its existence cannot be changed by such a test.
6
Conclusion
We showed how to verify several interprocedural compiler optimizations using an algebraic formalism called visibly pushdown Kleene algebra extended with tests. This formalism is lightweight compared to first-order predicate logic: it uses only propositional reasoning while still being able to represent well-known constructs of programming languages (sequences, alternatives, loops and code blocks) in a natural way. Indeed, the generated proofs are equational and machine-verifiable. Our goal is that these proofs should be understandable by programmers. Visibly pushdown Kleene algebra is a versatile formalism. For example, it has been shown that VPKA can be used for the interprocedural analysis of (mutually) recursive programs [3]. So, it would be possible to use, within a compiler, the same formalism (VPKA) to do the interprocedural analysis of a program and the verification of its optimization. For ongoing work, one idea is to extend VPKA with “infinite” operators to represent visibly pushdown ω-languages as defined in [1]. This would allow us to handle non-halting programs. Moreover, other applications for VPKA need to be investigated, like the representation of programs having nonlocal transfer of control (for example, goto statements). Kozen already did it for modular programs [7], but the proof is very involved. We think that using continuation (the ending label of a block is like a continuation) will help to reduce the complexity of the representation and will be easier to read. Acknowledgements. We are grateful to Jules Desharnais and the anonymous referees for detailed comments. Special thanks go to one of the anonymous referees
Verification of Common Interprocedural Compiler Optimizations
43
who defined well-matched visibly pushdown grammars and suggested to use them instead of the original definition of (| |)-expressions [3]. This research is supported by NSERC (Natural Sciences and Engineering Research Council of Canada) and FQRNT (Fond Qu´eb´ecois de la Recherche sur la Nature et les Technologies).
References 1. Alur, R., Madhusudan, P.: Visibly pushdown languages. In: Proc. of the 36th ACM symp. on Theory of computing, New York, USA, pp. 202–211 (2004) 2. Bolduc, C., Ktari, B.: Verification of common interprocedural compiler optimizations using visibly pushdown Kleene algebra (extended version). Technical Report DIUL-RR-1001, Universit´e Laval, QC, Canada (2010) 3. Bolduc, C., Ktari, B.: Visibly pushdown Kleene algebra and its use in interprocedural analysis of (mutually) recursive programs. In: Berghammer, R., Jaoua, A.M., M¨ oller, B. (eds.) RelMiCS 2009. LNCS, vol. 5827, pp. 44–58. Springer, Heidelberg (2009) 4. Kozen, D.: A completeness theorem for Kleene algebras and the algebra of regular events. Information and Computation 110(2), 366–390 (1994) 5. Kozen, D.: Efficient code certification. Technical Report 98-1661, Cornell University, Ithaca, USA (1998) 6. Kozen, D.: Kleene algebra with tests. Transactions on Programming Languages and Systems 19(3), 427–443 (1997) 7. Kozen, D.: Nonlocal flow of control and Kleene algebra with tests. In: Proc. 23rd IEEE Symp. Logic in Computer Science, pp. 105–117 (June 2008) 8. Kozen, D., Patron, M.C.: Certification of compiler optimizations using Kleene algebra with tests. In: Palamidessi, C., et al. (eds.) CL 2000. LNCS (LNAI), vol. 1861, pp. 568–582. Springer, Heidelberg (2000) 9. Morrisett, G., Walker, D., Crary, K., Glew, N.: From system F to typed assembly language. ACM Transactions on Programming Languages and Systems 21(3), 527–568 (1999) 10. Necula, G.C.: Proof-carrying code. In: Proc. of the 24th ACM SIGPLAN-SIGACT symp. on Principles of programming languages, New York, USA, pp. 106–119 (1997) 11. Necula, G.C., Lee, P.: Proof generation in the Touchstone theorem prover. In: McAllester, D. (ed.) CADE 2000. LNCS, vol. 1831, pp. 25–44. Springer, Heidelberg (2000) 12. Zaks, A., Pnueli, A.: Program analysis for compiler validation. In: Proc. of the 8th ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools and Engineering, New York, USA, pp. 1–7 (2008)
On the Expressiveness of the π-Calculus and the Mobile Ambients Linda Brodo Dipartimento di Scienze dei Linguaggi, via Tempio, 9 - 07100 - Sassari University of Sassari, Italy
Abstract. We investigate the expressivity of two classical distributed paradigms by defining an encoding of the pure mobile ambient calculus into the synchronous π-calculus. We show that the encoding is complete and ‘weakly’ sound, since it may introduce loops. For this purpose we introduce the notions of simulating trace and of aborting trace. Keywords: mobile ambients, π-calculus, encoding, labeled transition systems.
1
Introduction
The Mobile Ambients [4] were introduced to model distributed systems where complete computing environments (i.e. programs being executed) may change their location. In contrast, the π-calculus [12] allows only names or, in its higher order version, pieces of code to be sent along communication channels. The two languages use different paradigms for distributed computing, with a different expressive power, although both of them are Turing-equivalent. Our work may help in understanding the different levels of abstraction offered by the two languages, shedding new light on the expressiveness of the π-calculus. The mobile ambient calculus is characterized by the ambient construct, n[. . . ], that defines computing environments. Actions on ambients, called capabilities, can destroy an ambient (open n), or make an ambient to enter a second one (in n), or make an ambient to leave the ambient that contains it (out n). We define an encoding that allows π-processes to mimic the execution of a capability (corresponding to a mobile ambient transition) by performing a number of π-calculus transitions. These series of transitions are required to guarantee that all the conditions for a correct capability simulation are satisfied. Since we do not have a one to one operational correspondence between the mobile ambient and the π-calculus transitions, we collect all the π processes that have a corresponding mobile ambient process in a set Aπop . We will show that given P , a mobile ambient process, its π-calculus encoding Q is in Aπop , and also if P exists such that P → P , then there exist Q ∈ Aπop and Q1 , . . . , Qn ∈ / Aπop such that Q → Q1 · · · → Qn → Q .
This work has been partially supported by the Italian MUR under grant RBIN04M8S8, FIRB project, internazionalization 2004.
M. Johnson and D. Pavlovic (Eds.): AMAST 2010, LNCS 6486, pp. 44–59, 2011. c Springer-Verlag Berlin Heidelberg 2011
On the Expressiveness of the π-Calculus and the Mobile Ambients
45
We prove that our encoding is complete with respect to the set of capabilities that mobile ambient processes may execute, i.e. all the capabilities of the source program can be simulated in the target one. Our notion of soundness states that whenever a translating π-calculus process performs a transition, this is part of a computation trying to simulate a capability. When the capability simulation is not succeeding, our encoding introduces loops, thus we prove this somewhat ‘weak’ soundness. For proving the main properties of our encoding we introduce the notions of simulating traces and of aborting traces. A simulating trace corresponds to a series of transitions simulating a capability execution. When the conditions required for the execution of a capability do not hold, we have an aborting trace that leaves no side-effects of its execution. Thus, the final state of a simulating trace records the effects of a capability execution, while the final state of an aborting trace is congruent to the initial one. We prove that our encoding is compositional with respect to the parallel operator. Section 2 introduces the mobile ambient calculus, Section 3 presents the πcalculus. In Section 4, we formally define our encoding. In Section 5 we present our conclusions. The Appendix A shows the code of the auxiliary encoding functions. Related work. In [4], an encoding of the π-calculus into the pure Mobile Ambients is presented, and two encodings of the π-calculus into Safe Ambients are presented in [11,15]. We recall that some implementations of mobile ambients in other formalisms for distributed computations have been proposed. There are some works presenting graphical implementations, e.g. [5,9], that differ from the one we propose, due to the different target formalisms. The only similarity is that we generate different subprocesses for ambients and their contents, that will be linked by a pair of channels (that we call coordinates), in a compositional style similar to the one adopted in the graphical encoding. The work in [8] presents an encoding of mobile ambients into the Join calculus which provides constructs for defining local environments, possibly nested, equipped by local rules. This kind of constructs allows the encoding of the ambient nesting by means of upper and down links between nested environments. The execution of a capability is then simulated by changing the link structure between environments. Our target calculus does not have constructs for environments, thus our encoding is based on a different notion of links (coordinates) from children to parent ambients. To the best of our knowledge, the only encoding of mobile ambients into the π-calculus is proposed in [6]. Here, the basic idea is that each ambient, and its content, is encoded in a π-calculus subprocess; the simulation of capabilities is ruled by a distinguished π-calculus process that randomly chooses two subprocesses, checks the necessary conditions for a capability to be executed and, if they hold, the two subprocesses reproduce the execution of a capability. Our encoding does not rely on a process acting as a capability execution controller. An encoding of mobile ambients in P Systems [14] is presented in [1].
In [3], we proposed an encoding of the pure Mobile Ambient calculus into a version of the π-calculus where each inference rule is equipped with side conditions. Roughly, the idea is to use the congruence rules to reproduce the effects of capability executions. The main result in [3] was then to prove that the mobile ambient transition system is a proper subset of the π-calculus one. The work in [13] shows that the Mobile Ambients without communication, restriction, and the open capability can solve the leader election problem; a consequence is that the Mobile Ambients cannot be encoded into the π-calculus with separated choice (i.e. where input and output cannot be mixed in the same choice). In our encoding we use separated choice. Gorla in [10] gives a set of requirements for an encoding from Mobile Ambients to the π-calculus. He requires that the encoding be sound and complete (namely, that the computations of a source term must correspond to the computations of the encoded term, and vice versa), compositional (the encoding of a compound term must be expressed in terms of the encodings of its components), and name invariant, and that it does not introduce divergence. Our encoding is compositional with respect to the parallel operator. It may also become name invariant, by a few suitable modifications. However, our translation is complete, but not sound in the sense of a precise computational correspondence. As we will see in the following, we consider a notion of soundness which is 'weaker'. One of the main results in [10] states that there is no encoding of Mobile Ambients into the asynchronous π-calculus satisfying the above mentioned requirements. Our work does not contradict this result. It rather represents a step in the direction of defining the most precise translation possible.
2
The Mobile Ambients
We briefly recall the syntax and the semantics of the pure Mobile Ambients (MA for short), i.e. the version without communication primitives and variables. Following [2], we modify the classical syntax by introducing recursive expressions of the form μX.P in place of the replication operator !P.

Definition 1. Let n range over a denumerable set of names N. Let VMA (with metavariable X) be the set of process variables. The set of mobile ambient processes PMA (with metavariables P, P′, . . . ) and the set of capabilities Cap (with metavariable M) are defined below:

P ::= 0 | X | M.P | (ν n)P | P | P′ | μX.P | n[P]
M ::= in n | out n | open n

Intuitively, the null process 0 does nothing. X is the process variable. The process M.P executes the capability M and then behaves as P; (ν n)P defines P to be the scope of the name n; P | P′ may alternatively behave as P or as P′, and the two subprocesses may also interact; the recursive expression μX.P behaves as P where all the occurrences of the process variable X have been substituted with μX.P, up to α-conversion. n[P] denotes the ambient n containing process
P. The capability in n allows an ambient to enter ambient n; out n allows an ambient to exit ambient n; open n destroys ambient n. The semantics of the Mobile Ambients is given by the reduction relation → defined by the rules in Table 1, and by the smallest congruence relation ≡ satisfying the rules in Table 2, up to α-congruence, identifying processes that differ only in the choice of bound names. Please note that we do not provide a reduction rule for recursive expressions, as we introduce the rule μX.P ≡ P[μX.P/X] in Table 2. The relation →∗ is the reflexive and transitive closure of →. We adopt the classical notions of free and bound names of a process P, denoted fn(P) and bn(P), respectively.

Table 1. Semantics rules for the Mobile Ambients

In:   n[in m.P | P′] | m[R] → m[n[P | P′] | R]
Out:  m[n[out m.P | P′] | R] → n[P | P′] | m[R]
Open: open n.P | n[P′] → P | P′
Res:  P → P′ ⇒ (ν n)P → (ν n)P′
Amb:  P → P′ ⇒ n[P] → n[P′]
Par:  P → P′ ⇒ P | P″ → P′ | P″
≡:    P′ ≡ P, P → R, R ≡ R′ ⇒ P′ → R′
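To illustrate how the rules combine, consider the following small worked reduction (our example, not taken from the paper): an ambient n enters m via its in capability, and the process inside m then dissolves it.

n[in m.P | P′] | m[open n.R]  →  m[n[P | P′] | open n.R]   (by In)
                              →  m[P | P′ | R]             (by Open and Amb)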
Table 2. Structural congruence rules for the Mobile Ambients P ≡P P |P ≡ P | P (ν n)0 ≡ 0 P |0 ≡ P (P | P ) | R ≡ P | (P | R)
3
P ≡ P ⇒ P ≡ P (ν n)(ν m)P ≡ (ν m)(ν n)P (ν n)(P | Q) ≡ P | (ν n)Q, if n ∈ / f n(P ) (ν n)(m[P ]) ≡ m[(ν n)P ], if n = m μX.P ≡ P [μX.P/X], up to α-conversion
P P P P P
≡ P , P ≡ R ⇒ P ≡ R ≡ P ⇒ P |R ≡ P |R ≡ P ⇒ (ν n)P ≡ (ν n)P ≡ P ⇒ n[P ] ≡ n[P ] ≡ P ⇒ M.P ≡ M.P
The π-Calculus
We consider the polyadic version of the π-calculus (π for short), where processes have no free names.

Definition 2. Let a, b range over a denumerable set of names N, and let x, y range over the set of variables V. The set of processes Pπ (with metavariables Q, Q′, . . .) and the set of prefixes A (with metavariable π) are defined below:

Q ::= 0 | π.Q | (ν a)Q | Q | Q′ | Q + Q′ | [x = a]Q | [x ≠ a]Q | Ide(x̃), with Ide(x̃) = Q where fn(Q) = x̃
π ::= a(x̃) | a⟨b̃⟩

where b̃ and x̃ stand for tuples of names and of variables, respectively, pairwise different.
Roughly, the null process 0 cannot perform any action. The process π.Q executes action π and then behaves as Q. Possible actions are the input a(x̃), by which a tuple of names is received along channel a and orderly substituted for the tuple of placeholders x̃, and the output a⟨b̃⟩, by which the name tuple b̃ is sent along channel a. Process (ν a)Q behaves as Q, where name a is local. Process Q | Q′ independently executes Q and Q′, and the two processes may also communicate. Process Q + Q′ nondeterministically behaves as Q or as Q′. The match process [x = a]Q behaves as Q only if placeholder x is substituted by name a, whereas the mismatch [x ≠ a]Q behaves as Q only if placeholder x is substituted by a name different from a. The identifier definition Ide(x̃) = Q defines a process Ide(ã), whose parameters x̃ will be substituted by names: it behaves as Q where all the occurrences of the placeholders in x̃ have been substituted by the names in ã. We use a labelled version of the π-calculus semantics, inspired by the proved π-calculus [7], to record the prefixes involved in a transition.

Definition 3. The set Θ of enhanced labels, with metavariable θ, is defined as:

θ ::= π | (π0, π1), with π0 = a(x̃) and π1 = a⟨b̃⟩, or vice versa.

The function ℓ mapping enhanced labels to standard actions is defined by cases: ℓ(π) = π; ℓ(π0, π1) = τ.

The operational semantics is defined by the transition rules in Table 3, and by the smallest congruence relation satisfying the rules in Table 4, which also identifies α-equivalent processes, i.e. processes that differ only in the choice of bound names. We will write θ̃ to refer to an ordered sequence of transition labels. The relation −θ̃→∗ is the reflexive and transitive closure of −θ→. The labelled semantics allows us to simplify the formalization of the notion of traces that we introduce later. The functions fn() and bn(), computing free and bound names, are defined as usual, by letting them work on the results of the function ℓ, see rule (res).

Table 3. Labelled semantics for the π-calculus

(prefix)     π.Q −π→ Q
(par)        if Q −θ→ Q′ and bn(ℓ(θ)) ∩ fn(R) = ∅, then Q | R −θ→ Q′ | R
(res)        if Q −θ→ Q′ and a ∉ fn(ℓ(θ)), then (ν a)Q −θ→ (ν a)Q′
(sum)        if Q −θ→ Q′, then Q + R −θ→ Q′
(bound out)  if Q −a⟨b̃⟩→ Q′ and {b̃} ∩ fn(Q) = ∅, then (ν b̃)Q −a⟨b̃⟩→ Q′
(cong)       if Q ≡ Q′ −θ→ R′ ≡ R, then Q −θ→ R
(bound com)  if R −a⟨b̃⟩→ R′ and Q −a(x̃)→ Q′, then R | Q −(a⟨b̃⟩, a(x̃))→ (ν b̃)(R′ | Q′[b̃/x̃])
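As a small illustration of enhanced labels (our example): a communication between an output and an input, here by rule (bound com) with no restricted names, is labelled by the pair of the two prefixes involved, whose standard action is τ.

a⟨b⟩.0 | a(x).x⟨c⟩.0  −(a⟨b⟩, a(x))→  0 | b⟨c⟩.0,   with ℓ(a⟨b⟩, a(x)) = τ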
4
The Encoding
The translation is preceded by a preprocessing phase in order to assign a unique name to each ambient. This helps to easily denote the position of an ambient
Table 4. Congruence rules for the π-calculus

Q | 0 ≡ Q        Q1 | Q2 ≡ Q2 | Q1        (Q1 | Q2) | Q3 ≡ Q1 | (Q2 | Q3)
Q + 0 ≡ Q        Q1 + Q2 ≡ Q2 + Q1        (Q1 + Q2) + Q3 ≡ Q1 + (Q2 + Q3)
[x = x]Q ≡ Q     [x ≠ y]Q ≡ 0, if x = y
(ν n)(ν m)Q ≡ (ν m)(ν n)Q                 (ν n)Q ≡ Q, if n ∉ fn(Q)
(ν n)(Q1 | Q2) ≡ (ν n)Q1 | Q2, if n ∉ fn(Q2)
(ν n)(Q1 + Q2) ≡ (ν n)Q1 + Q2, if n ∉ fn(Q2)
Q ≡ Q            Q1 ≡ Q2 ⇒ Q2 ≡ Q1        Q1 ≡ Q2, Q2 ≡ Q3 ⇒ Q1 ≡ Q3
Ide(ã) ≡ R[ã/x̃], if Ide(x̃) = R
within the ambient hierarchy. For example, in the configuration m[. . . | n[. . . ]], we assign the name "a" (for ambient) to the ambient n and the name "f" (for father) to the ambient m. The pair (a, f) associated with ambient n says that it is univocally identified by a, and that it lies within the ambient m, univocally identified by f. We will assign to each ambient a pair (a, f) that we will call the coordinates of the ambient: the first component records the information who I am, the second one records the information where I am. Our translating function also introduces a special name, top, to identify the topmost ambient that contains all the other ones. To give a graphical idea of our encoding, let us consider process A ∈ PMA in Table 5 (1), where we also depict the tree structure of the nested ambients: we assign the unique names a, b, c, d to the four ambients in process A, so that their complete coordinates are (a, top), (b, top), (c, top), (d, a), respectively. Note that all the subprocesses lying within an ambient share the same coordinates. We then let process A evolve to B by executing the in n capability, see Table 5 (2), where the coordinates of the components involved in the capability execution have been modified accordingly. When the capability out n fires, B evolves to C and, correspondingly, the coordinates of our encoding change as in Table 5 (3). Process C evolves to D by executing the open m capability, which dissolves ambient m, and the coordinates change as in Table 5 (4).

Definition 4. Given P ∈ PMA, the encoding function T : PMA → Pπ is defined as T(P) = (ν m̃)(Ta(P, top, top) | Amb(top)), where m̃ = fn(Ta(P, top, top)), and Ta : PMA × N × N → Pπ is defined inductively as follows:

1. Ta(in n.P, a, f) = Inn(a, f) @ Ta(P, a, f)
2. Ta(out n.P, a, f) = Outn(a, f) @ Ta(P, a, f)
3. Ta(open n.P, a, f) = Openn(a, f) @ Ta(P, a, f)
4. Ta((ν n)P, a, f) = (ν inn, outn, openn) Ta(P, a, f)
5. Ta(0, a, f) = 0
6. Ta(P | Q, a, f) = Ta(P, a, f) | Ta(Q, a, f)
7. Ta(μX.P, a, f) = Ide(ñ), where Ide(ñ) = Ta(P[Ide(ñ)/X], a, f) ≡ Q ∧ ñ = fn(Q) ∧ Ide is a new name
8. Ta(n[P], a, f) = (ν b)(Ambn(b, a) | Ta(P, b, a))
9. Ta(Ide(ñ), a, f) = Ide(ñ).
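For instance (our worked instance of Rules 8, 1 and 5 of Definition 4), encoding an ambient n whose content wants to enter an ambient m, at coordinates (a, f), unfolds as:

Ta(n[in m.0], a, f) = (ν b)(Ambn(b, a) | Ta(in m.0, b, a))
                    = (ν b)(Ambn(b, a) | Inm(b, a) @ 0)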
Table 5. Graphical representation of the encoding (tree diagrams of the nested ambients, annotated with the coordinates assigned by the encoding; diagrams omitted here). The four depicted configurations are:

(1) A = n[in n.P1 | m[out n.P2]] | n[open m.P3] | n[P4]
(2) B = n[n[P1 | m[out n.P2]] | open m.P3] | n[P4]
(3) C = n[n[P1] | m[P2] | open m.P3] | n[P4]
(4) D = n[n[P1] | P2 | P3] | n[P4]
The definitions of the process constants Inn(a, f), Outn(a, f), Openn(a, f), Ambn(n, a), and Amb(top) are given in Table 7 and in Table 8. The definition of the operator @ : Pπ × Pπ → Pπ is in Table 6. For the sake of readability we prefer not to show the complete list of parameters of the process identifiers Inn(a, f), Outn(a, f), Openn(a, f), Ambn(n, a), and we only show their coordinates. The full parameter list is given by adding the free names of the process Q appearing in the process identifier definition Ide(x̃) = Q. We also use the bound output notation to make the list of sent names explicit. It is easy to show that our encoding is compositional with respect to the parallel operator. The @ operator takes Q1, Q2 ∈ Pπ as arguments and returns a process that behaves as Q1 and, when Q1 terminates, behaves as Q2; in other words, @ substitutes all the occurrences of 0 in Q1 with Q2.

Table 6. Definition of the @ operator

0 @ Q = Q
π.R @ Q = π.(R @ Q)
(ν ã)R @ Q = (ν ã)(R @ Q)
R | R′ @ Q = R @ Q | R′ @ Q
R + R′ @ Q = R @ Q + R′ @ Q
Ide(ñ) @ Q = Ide(ñ)
[a = x]R @ Q = [a = x](R @ Q)
[a ≠ x]R @ Q = [a ≠ x](R @ Q)
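A small instance of the @ operator (our example), using the clauses for parallel composition, prefix, and 0:

(a(x).0 | b⟨c⟩.0) @ Q = a(x).Q | b⟨c⟩.Q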
In MA processes, ambient names play two different roles: they denote delimited spaces, the ambients, and they allow capabilities to act on them. Our encoding keeps the two roles separate: the original ambient name allows interactions to happen, and we introduce a new name to univocally identify each ambient; in Definition 4, Rule 8 introduces the unique name b and keeps the original name n as the suffix of the identifier Ambn(b, a). The construction of the names of the identifiers encoding the capabilities follows the same idea, see Rules 1, 2, 3 in Definition 4. Hereafter, when we want to refer to a generic ambient name 'x' we will write Inx(), Openx(), Outx(), Ambx(). We will prove that a translated π-process can simulate a capability execution in a finite number of steps (completeness), and that all the steps that a translated π-process may execute belong either to the simulation of some MA capability (what we will call a simulating trace) or to a series of transitions returning to an existing state (what we will call an aborting trace) (soundness). Note that our notion of soundness is somewhat 'weaker' than the classical one, since we may introduce loops.

Example 1. Let P = open n.0 | n[out n.0] | m[open n.0] be a MA process that may execute one of the two open capabilities. We assign unique identifiers to the ambients: "a" to ambient "n" and "b" to "m". The translation works as follows: (1) for each ambient x a process Ambx(. . . ) is generated; (2) for each sequential MA process a process identifier encoding all its sequential capabilities is generated. Applying our encoding we get:

T(P, top) ≡ Q = (ν top, a, b, . . . )
    Amb(top) | Openn(top, top)       (ambient top and top-level capability)
  | Ambn(a, top) | Outn(a, top)      (ambient n and its content)
  | Ambm(b, top) | Openn(b, top)     (ambient m and its content)
Graphically, the MA configuration (top containing open n.0, the ambient n containing out n.0, and the ambient m containing open n.0) corresponds, on the π-calculus side, to the processes Amb(top), Openn(top, top), Ambn(a, top), Outn(a, top), Ambm(b, top), and Openn(b, top), linked through their coordinates (diagram omitted).
In Table 7, the main routines of our encoding are highlighted. The simulation of a capability begins with the Start() routine, which contacts the process Ambx() over which the capability acts. Then, processes LinkN() (with N ∈ {1, 2, 3}) contact the other subprocesses involved in the capability simulation, and processes CheckN() (with N ∈ {1, 2, 3}) check whether the actual conditions are compatible with the capability simulation. If this is not the case, a recovery routine ReleaseN() (with N ∈ {1, 2, 3}) is executed; otherwise the computation ends with the final steps of the simulation in SimulateInx(), SimulateOutx(), or SimulateOpenx(). The definitions of the auxiliary process identifiers are in Appendix A.
Table 7. Definitions of Inn(a, f), Outn(a, f), and Openn(a, f)
Inn(a, f, inn) = Start(a, f, inn) @ Link1(a, f, pc, ax, fx) @
    ( [ax1 = busy]Release1(a, f, pc)@Inn(a, f)
    + [ax1 ≠ busy]Check1(a, f, pc, ax, fx)@Link2(a, f, pc, ax, fx, pc1)) @
    ( [ax2 = busy]Release2(a, f, pc, pc1)@Inn(a, f)
    + [ax2 ≠ busy]Check2(a, f, pc, ax, fx, pc1)@Link3(a, f, pc, ax, fx, pc1, pc2)) @
    ( [ax3 = busy]Release3(a, f, pc, pc1, pc2)@Inn(a, f)
    + [ax3 ≠ busy]Check3(a, f, pc, ax, fx, pc1, pc2, pc3, ax3, fx3)) @
    SimulateInn(a, f, pc, ax, fx, pc1, pc3, ax3, fx3)

Outn(a, f, outn) = Start(a, f, outn) @ Link1(a, f, pc, ax, fx) @
    ( [ax1 = busy]Release1(a, f, pc)@Outn(a, f)
    + [ax1 ≠ busy]Check1(a, f, pc, ax, fx)@Link3(a, f, pc, ax, fx, pc1, pc1)) @
    ( [ax3 = busy]Release3(a, f, pc, pc1, pc1)@Outn(a, f)
    + [ax3 ≠ busy]Check3(a, f, pc, ax, fx, pc1, pc2, pc2)) @
    SimulateOutn(a, f, pc, pc1, ax3, fx3)

Openn(a, f, openn) = Start(a, f, openn) @ Link1(a, f, pc, ax, fx) @
    ( [ax1 = busy]Release1(a, f, pc)@Openn(a, f)
    + [ax1 ≠ busy]Check1(a, f, pc, ax, fx)@Link3(a, f, pc, ax, fx, pc1, pc1)) @
    ( [ax3 = busy]Release3(a, f, pc1, pc1)@Openn(a, f)
    + [ax3 ≠ busy]0) @
    ( [ax3 = nomatch]Release3(a, f, pc1, pc1)@Openn(a, f)
    + [ax3 ≠ nomatch]Check3(a, f, pc, ax, fx, pc1, pc2, pc2, ax3, fx3)) @
    SimulateOpenn(a, f, pc, pc1)

Start(a, f, chcap) = (ν pc)chcap⟨pc⟩.pc(ax, fx).0

SimulateInn(a, f, pc, ax, fx, pc1, pc3, ax3, fx3) = pc⟨ax, fx⟩.pc1⟨a, ax⟩.pc3⟨ax3, fx3⟩.0
SimulateOutn(a, f, pc, pc1, ax3, fx3) = pc⟨ax3, fx3⟩.pc1⟨a, fx3⟩.0
SimulateOpenn(a, f, pc, pc1) = pc⟨a, f⟩.pc1⟨a, f⟩.0

We say that our encoding processes are in standard form if they are congruent to a parallel composition of the process identifiers Ambx(), Inx(), Outx(), Openx(), Amb(top), and Opened(). It is easy to show that processes directly generated by our encoding function do not contain Opened() identifiers, whereas their derivatives in standard form may (see the definition of the Simulationn identifier in Table 8). Please note that the first argument of Opened(a, a′, f) is the unique identifier of the dissolved ambient. We identify two subsets of the π encoding processes that we will use for defining the notion of traces.

Table 8. Definitions of Ambn(a, f), Busyn(), Simulationn(), and Opened(a, a′, f)

Ambn(a, f) = inn(pcx).pcx⟨a, f⟩.Busyn(a, f, pcx, in)
    + outn(pcx).pcx⟨a, f⟩.Busyn(a, f, pcx, out)
    + openn(pcx).pcx⟨a, f⟩.Busyn(a, f, pcx, open)
    + a(pcx).pcx⟨a, f⟩.Busyn(a, f, pcx, nocap)
    + a(pcx, pcx′).pcx′⟨nomatch, nomatch⟩.Ambn(a, f)

Opened(a, a′, f) = a(pcx).pcx⟨a′, f⟩.Opened(a, a′, f)
    + a(pcx, pcx′).pcx′⟨a′, f⟩.Opened(a, a′, f)

Busyn(a, f, pc, cap) = pc(anew, fnew).( [anew = release]Ambn(a, f)
    + [anew ≠ release]Simulationn(a, anew, fnew, cap) )
    + a(pcx).pcx⟨busy, busy⟩.Busyn(a, f, pc, cap)
    + a(pcx, pcx′).( [pc = pcx′]pcx⟨a, f⟩.Busyn(a, f, pc, cap)
    + [pc ≠ pcx′]pcx⟨nomatch, nomatch⟩.Busyn(a, f, pc, cap) )

Simulationn(a, anew, fnew, cap) = [cap = open]Opened(a, anew, fnew)
    + [cap ≠ open]Ambn(anew, fnew)

Definition 5. We define the set of processes directly generated by our encoding as Aπ = {Q | ∃P ∈ PMA s.t. Q ≡ T(P, top) ∧ Q is in standard form}. Then, we identify the set of their derivatives in standard form: Aπop = {Q′ | Q →∗ Q′ with Q ∈ Aπ ∧ Q′ is in standard form}.

It follows that if Q ∈ Aπ then it has the following syntactic structure:

(ν ñ)(Amb(top) | Π_{i=1..M1} Inxi(ai, bi) | Π_{j=1..M2} Outxj(cj, dj) | Π_{h=1..M3} Openxh(eh, fh) | Π_{l=1..N} Ambxl(gl, tl))

where M1, M2, M3, N ≥ 0 and {ai, bi, cj, dj, eh, fh, gl, tl} ⊆ {ñ}, for i ≤ M1, j ≤ M2, h ≤ M3, l ≤ N. Processes in Aπop have a similar syntactic structure, but they may also contain Opened() subprocesses, generated by mimicking an open capability. As an example, let P ∈ PMA execute an open n capability, P → P′, and let Q1, Q2 ∈ Aπ be the respective translations, i.e. T(P, top) ≡ Q1 and T(P′, top) ≡ Q2. Now, Q1 may mimic the same open n capability executed by P: Q1 →∗ Q2′, with Q2′ ∈ Aπop, as it contains an Opened() subprocess; nevertheless, Q2 and Q2′ can simulate the same set of capabilities. For technical reasons, we order the parallel composition of Opened() subprocesses by following the order of their temporal creation: the lower the index i, the sooner the respective open capability has been executed. Then, given Π_{j=1..M} Openedj(aj, aj′, fj), if there exist k, h ∈ [1, . . . , M] s.t. ak = ah′ or ak = fh, then k > h, i.e. ambient ah has been dissolved before ambient ak, and at that moment
ak replaced it (please recall that the first argument of Opened() is the unique name of the dissolved ambient). Now, we need to define an equivalence relation to easily manage the presence of the Opened() identifiers, relating processes in Aπ and in Aπop that mimic the same MA process, despite the presence or absence of the Opened() processes. To obtain our goal we proceed in two steps: (1) in Definition 7 we define a purely syntactical relation ≅; (2) we use it to define our equivalence relation ≍ in Definition 8. We will prove the completeness and a weak notion of soundness of our encoding with respect to ≍. Before defining the relation ≅ we introduce the pair replacement operator ϕ, replacing a pair of names in Aπ processes, which corresponds to an updating of the coordinates.

Definition 6. The pair replacement operator ϕ : Aπ × (N × N) × (N × N) → Aπ, written as ϕ = [(a2, f2)/(a1, f1)], is defined as:

(Q1 | Q2)ϕ = Q1ϕ | Q2ϕ
((ν n)Q)ϕ = (ν n)(Qϕ)
Amb(top)ϕ = Amb(top)
Ambx(a, f)[(a2, f2)/(a1, f1)] = Ambx(a, f2) if f1 = f, and Ambx(a, f) otherwise
Capx(a, f, capx)[(a2, f2)/(a1, f1)] = Capx(a2, f2, capx) if a1 = a, and Capx(a, f, capx) otherwise

where Capx ∈ {Inx, Outx, Openx}. Intuitively, ϕ forces an updating of the coordinates of those subprocesses lying in an ambient that has been dissolved: it modifies the father component of the Ambx() processes and the coordinates of the Capx() processes lying in the dissolved ambient. By Qϕϕ′ we mean (Qϕ)ϕ′. Now we can give the definition of the relation ≅ ⊆ Aπ × Aπop, which intuitively relates those processes that differ only in the presence of Opened() subprocesses and can still simulate the same capabilities. As said before, an Opened() process is generated while mimicking an open capability. When this happens, roughly, all the processes that were lying within the dissolved ambient come to lie within the father ambient of the dissolved one. The role of the Opened() process is to create this new link. If we force the updating of the coordinates of the processes that were lying within the dissolved ambient by the pair replacements ϕ, then Opened() is no longer necessary and may be removed.

Definition 7 (relation ≅). Let R = (ν ñ)(Amb(top) | OPEN | CAP | AMB) ∈ Aπop, with fn(AMB) ∪ fn(CAP) ∪ fn(OPEN) ⊆ {ñ}, where AMB and CAP stand for the parallel composition of Ambx() processes, and of Inx(), Outx(), Openx() processes, respectively, and OPEN = Π_{i=1..N} Openedi(ai, ai′, fi), with N ≥ 0. For all i ∈ [1 . . . N] we define ϕi = [(ai′, fi)/(ai, f)] and ϕi′ = [(a, ai′)/(a, ai)]. If Q = (ν ñ)(Amb(top) | AMB | CAP)ϕ1ϕ1′ . . . ϕNϕN′, then Q ≅ R.

We clarify the above definition with an example.
Example 2. Continuing Example 1, let P perform an 'open n': P → P′ ≡ out n.0 | m[open n.0]. As we will prove later, the translation of P may mimic the capability:

T(P, top) −θ̃→∗ Q′ ≡ (ν top, a, m′)( Amb(top)
    | Opened(a, top, top) | Outn(a, top)
    | Ambm(m′, top) | Openn(m′, top) )

Process Opened(a, top, top) has substituted the dissolved Ambn(a, top); Outn(a, top) still has the old coordinates (which will be updated as soon as it tries to mimic the 'out n' capability). Following Definition 7, we define ϕ = [(top, top)/(a, f)] and ϕ′ = [(a1, top)/(a2, a)], and we build S = ((ν ñ)(Amb(top) | Ambm(m′, top) | Openn(m′, top) | Outn(a, top)))ϕϕ′, obtaining S ≅ Q′.

In the next proposition we prove that if Q ∈ Aπ and R ∈ Aπop are such that Q ≅ R, then whenever R simulates a capability, R −θ̃r→∗ R′, Q may simulate the same capability, Q −θ̃q→∗ Q′ (i.e. they may simulate the capability of the same MA process), and Q′ and R′ may still simulate the same set of capabilities. Please note that, due to the use of restricted names, the labels in θ̃r and in θ̃q are always different.

Proposition 1. Let Q ∈ Aπ and R ∈ Aπop be such that Q ≅ R. If there exists R′ ∈ Aπop such that R −θ̃→∗ R′, then there exist S ∈ Aπ and Q′ ∈ Aπop such that Q −θ̃′→∗ Q′, with S ≅ R′ and S ≅ Q′.

Now we introduce the equivalence relation ≍ to easily manage the presence of the identifiers Opened() in the derivatives of translated processes.

Definition 8. If there exists a process S such that S ≅ R and S ≅ Q, then we write R ≍ Q.

Example 3. Continuing Example 2, we apply the encoding function to process P′, and we get

T(P′, top) = Q″ ≡ (ν top, b, outn, openn)( Amb(top)
    | Outn(top, top)
    | Ambm(b, top) | Openn(b, top) )

Q″ differs from Q′, as the latter contains the subprocess Opened(a, top, top) and different coordinates for Outn(a, top). Considering S, we have that S ≡ Q″, from which we may derive S ≅ Q″; then by Definition 8 we have Q′ ≍ Q″.

In the following we prove subject reduction up to ≍.

Proposition 2. Let R and Q be two π-processes such that Q ≍ R. If Q −θ̃q→∗ Q′, then there exists R′ ∈ Aπop such that R −θ̃r→∗ R′ and Q′ ≍ R′.
We will use the classical notion of context: C{·} is any process with a 'hole' that will be filled by process P when we write C{P}. We write C{·} both for MA and for π processes; it will be clear from the context whether we refer to a MA or a π process. To prove our notion of soundness up to ≍, we introduce the notion of simulating trace. Given a MA process P and its translation Q = T(P, top), a simulating trace of Q is a series of transitions simulating the execution of a capability of P. The formal definition follows.

Definition 9 (simulating trace). Let P ∈ PMA and let Q = T(P, top) be its encoding. Let Q −θ1→ Q1 −θ2→ Q2 . . . −θn→ Qn be a computation; then we say that ξ = θ1, θ2, . . . , θn is a simulating trace of Q if there exists P′ such that P → P′, T(P′, top) ≍ Qn, and for all i < n, T(P′, top) ≍ Qi does not hold.

When the conditions to complete the capability simulation do not hold, the trace aborts and we have an aborting trace. The final state of an aborting trace is congruent to its initial one. The formal definition follows.

Definition 10 (aborting trace). Let P ∈ PMA and let Q = T(P, top) be its encoding. Let Q −θ1→ Q1 −θ2→ Q2 −θ3→ . . . Qn be a computation; then we say that ξ = θ1, θ2, . . . , θn is an aborting trace of Q if Q ≡ Qn and, for all i < n, Q ≍ Qi does not hold.

We refer to both simulating traces and aborting traces as traces. We will write Q −θ̃→ξ∗ Q′ to denote the execution of all and only the transitions corresponding to trace ξ; with Q −θ→ξn Q′ we identify the n-th transition of the trace ξ, and with {ξ} we denote the set of the labels of the trace ξ.

Proposition 3. Let P ∈ PMA and let T(P, top) = Q be its encoding. Let us assume that there are two traces ξ1 and ξ2 such that Q −θ̃1→ξ1∗ and Q −θ̃2→ξ2∗. If Q −θ̃→∗ Q′ is a computation where all the transitions of the two traces, and only them, are executed, in any order, then Q′ ∈ Aπop.
The above property can easily be generalized to any number of traces greater than two: when more traces interleave their transitions no deadlock occurs, and the system can always reach a configuration in Aπop. Now we prove that whenever a derivative of a process Q ∈ Aπ executes a transition, that transition is part of a trace, either a simulating trace or an aborting trace.

Proposition 4. Let P ∈ PMA and let T(P, top) = Q be its encoding. If Q −θ̃a→∗ Q1 −θ→ Q2, then there exist n traces ξ1, . . . , ξn and Q′ ∈ Aπop such that Q2 −θ̃b→∗ Q′, and ξ1 = θ̃a1 θ̃b1, . . . , ξj = θ̃aj θ θ̃bj, . . . , ξn = θ̃an θ̃bn, with ∪_{i=1..n} {θ̃ai} = {θ̃a} and ∪_{i=1..n} {θ̃bi} = {θ̃b}.

Proposition 3 and Proposition 4 help us to prove the next theorem, which states a weak soundness up to ≍.
Theorem 1 (weak soundness). Let P be a MA process. If T(P, top) = Q −θ̃a→∗ Q′, then there exist P′ and Q″ such that Q′ −θ̃b→∗ Q″ and Q″ ≍ T(P′, top), with P →∗ P′.

The next theorem proves that our encoding is complete up to ≍.

Theorem 2 (completeness). Let P be a MA process. If P →∗ P′, then there exists a process Q′ such that T(P, top) −θ̃→∗ Q′ and T(P′, top) ≍ Q′.
5
Conclusion and Future Work
We have proposed an encoding of the pure Mobile Ambients into the polyadic π-calculus with match and mismatch. We have encoded each ambient with a process identifier, which shares a private channel with the π-processes encoding its content. Each π-process encoding an ambient also keeps track of its position within the original ambient hierarchy by sharing a second private name with its father ambient. The two names keep separate the two basic notions of an ambient: who I am and where I am. The simulation of a capability execution involves three processes: the one encoding the capability, the one encoding the ambient over which the capability acts, and the one encoding the ambient within which the capability lies. If all the conditions required for the execution hold, the simulation ends successfully; otherwise the simulation aborts and no side effects are produced. In the latter case, our encoding introduces loops. Our proposal represents an attempt to define an encoding of MA into the π-calculus that is as precise as possible. The main limitation is the possible divergence it introduces. Nevertheless, the divergence is given by loops, and if we assume that one complete trace (simulating or aborting) is executed at a time, then it is easy to see that our encoding does not introduce infinitely many states. As future work we aim at studying a notion of behavioral correspondence. Also, the deep investigation of the capability mechanisms may be exploited to define a suitable labelled semantics for MA.

Acknowledgments. We thank Pierpaolo Degano for the useful comments on a preliminary draft of this work. We also thank the anonymous referees for their suggestions.
References

1. Aman, B., Ciobanu, G.: Translating Mobile Ambients into P Systems. ENTCS 171(2), 11–23 (2007)
2. Aranda, J., Di Giusto, C., Palamidessi, C., Valencia, F.D.: On Recursion, Replication and Scope Mechanisms in Process Calculi. In: de Boer, F.S., Bonsangue, M.M., Graf, S., de Roever, W.-P. (eds.) FMCO 2006. LNCS, vol. 4709, pp. 185–206. Springer, Heidelberg (2007)
3. Brodo, L., Degano, P., Priami, C.: Reflecting Mobile Ambients into the π-Calculus. In: Priami, C. (ed.) GC 2003. LNCS, vol. 2874, pp. 25–56. Springer, Heidelberg (2003)
4. Cardelli, L., Gordon, A.: Mobile Ambients. Theoretical Computer Science 240(1), 177–213 (2000)
5. Cenciarelli, P., Talamo, I., Tiberi, A.: Ambient Graph Rewriting. ENTCS 117, 335–351 (2005)
6. Ciobanu, G., Zakharov, V.A.: Encoding Mobile Ambients into the π-Calculus. In: Virbitskaite, I., Voronkov, A. (eds.) PSI 2006. LNCS, vol. 4378, pp. 148–165. Springer, Heidelberg (2007)
7. Degano, P., Priami, C.: Enhanced Operational Semantics: A Tool for Describing and Analysing Concurrent Systems. ACM Computing Surveys 33(2), 135–176 (2001)
8. Fournet, C., Lévy, J.-J., Schmitt, A.: An Asynchronous, Distributed Implementation of Mobile Ambients. In: Watanabe, O., et al. (eds.) TCS 2000. LNCS, vol. 1872, pp. 348–364. Springer, Heidelberg (2000)
9. Gadducci, F., Monreale, G.V.: A Decentralized Implementation of Mobile Ambients. In: Ehrig, H., et al. (eds.) ICGT 2008. LNCS, vol. 5214, pp. 115–130. Springer, Heidelberg (2008)
10. Gorla, D.: On the Relative Expressive Power of Calculi for Mobility. ENTCS 249, 269–286 (2009)
11. Levi, F., Sangiorgi, D.: Controlling Interference in Ambients. In: Proceedings of the Symposium on Principles of Programming Languages (POPL 2000), pp. 352–364. ACM Press, New York (2000)
12. Milner, R., Parrow, J., Walker, D.: A Calculus of Mobile Processes, Parts 1–2. Information and Computation 100(1), 1–77 (1992)
13. Phillips, I., Vigliotti, M.G.: Symmetric Electoral Systems for Ambient Calculi. Information and Computation 206(1), 34–72 (2008)
14. Păun, G.: Computing with Membranes. Journal of Computer and System Sciences 61(1), 108–143 (2000)
15. Zimmer, P.: On the Expressiveness of the Pure Safe Ambients. Mathematical Structures in Computer Science 13(5), 721–770 (2003)
A
Auxiliary Encoding Functions
Table 9. Definitions of Link1(), Check1(), Link2(), Check2(), Link3(), Check3()

Link1(a, f, pc, ax, fx) = (ν pc1)a⟨pc1⟩.pc1(ax1, fx1).0
Check1(a, f, pc, ax, fx, ax1, fx1) = [a = ax1]0 + [a ≠ ax1]Link1(ax1, f, pc, ax, fx)
Link2(a, f, pc, ax, fx, pc1) = (ν pc2)f⟨pc2⟩.pc2(ax2, fx2).0
Check2(a, f, pc, ax, fx, pc1) = [f = ax2]0 + [f ≠ ax2]Link2(a, ax2, pc, ax, fx, pc1)
Link3(a, f, pc, ax, fx, pc1, pc2) = (ν pc3)fx⟨pc3, pc2⟩.pc3(ax3, fx3).0
Check3(a, f, pc, ax, fx, pc1, pc2, pc3, ax3, fx3) = [ax3 = fx]0 + [ax3 ≠ fx]Link3(a, f, pc, ax, ax3, pc1, pc2)
Table 10. Definitions of Release1(), Release2(), Release3()

Release1(a, f, pc) = pc⟨release, release⟩.0
Release2(a, f, pc, pc1) = pc⟨release, release⟩.pc1⟨release, release⟩.0
Release3(a, f, pc, pc1, pc2) = pc⟨release, release⟩.pc1⟨release, release⟩.pc2⟨release, release⟩.0
Table 11. Definition of Amb()

Amb(top) = top(pc).pc⟨top, top⟩.Busy(top, pc)
    + top(pc, pc′).pc′⟨nomatch, nomatch⟩.Amb(top)

Busy(top, pc) = pc(anew, fnew).Amb(top)
    + top(pc′).pc′⟨busy, busy⟩.Busy(top, pc)
    + top(pc′, pc″).( [pc = pc″]pc′⟨top, top⟩.Busy(top, pc)
    + [pc ≠ pc″]pc′⟨nomatch, nomatch⟩.Busy(top, pc) )
Integrating Maude into Hets

Mihai Codescu¹, Till Mossakowski¹, Adrián Riesco², and Christian Maeder¹

¹ DFKI GmbH Bremen and University of Bremen, Germany
² Facultad de Informática, Universidad Complutense de Madrid, Spain
Abstract. Maude modules can be understood as models that can be formally analyzed and verified with respect to different properties expressing various formal requirements. However, Maude lacks the formal tools to perform some of these analyses, and thus they can only be done by hand. The Heterogeneous Tool Set Hets is an institution-based combination of different logics and corresponding rewriting, model checking and proof tools. We present in this paper an integration of Maude into Hets that makes it possible to use the logics and tools already integrated in Hets with Maude specifications. To achieve such an integration we have defined an institution for Maude based on preordered algebras and a comorphism between Maude and Casl, the central logic in Hets.

Keywords: Heterogeneous specifications, rewriting logic, institution, Maude, Casl.
1
Introduction
Maude [3] is a high-level language and high-performance system supporting both equational and rewriting logic computation for a wide range of applications. Maude modules correspond to specifications in rewriting logic, a simple and expressive logic which allows the representation of many models of concurrent and distributed systems. The key point is that there are three different uses of Maude modules:

1. As programs, to implement some application. We may have chosen Maude because its features make the programming task easier and simpler than other languages.
2. As formal executable specifications that provide a rigorous mathematical model of an algorithm, a system, a language, or a formalism. Because of the agreement between operational and mathematical semantics, this mathematical model is at the same time executable.
3. As models that can be formally analyzed and verified with respect to different properties expressing various formal requirements. For example, we may want to prove that our Maude module terminates, or that a given function, equationally defined in the module, satisfies some properties expressed as first-order formulas.

However, when we follow this last approach we find that, although Maude can automatically perform analyses like model checking of temporal formulas or
verification of invariants, other formal analyses have to be done "by hand," thus disconnecting the real Maude code from its logical meaning. Although some efforts, like the Inductive Theorem Prover [4], have been devoted to alleviating this problem, they are restricted to inductive proofs in Church-Rosser equational theories, and they lack the generality to deal with all the features of Maude. With our approach, we cover arbitrary first-order properties (also written in logics different from Maude's), and open the door to automated induction strategies such as those of ISAplanner [7]. The Heterogeneous Tool Set Hets [16] is an institution-based combination of different logics and corresponding rewriting, model checking and proof tools. Tools that have been integrated into Hets include the SAT solvers zChaff and MiniSat, the automated provers SPASS, Vampire and Darwin, and the interactive provers Isabelle and VSE. In this paper, we describe an integration of Maude into Hets from which we expect several benefits. On the one hand, Maude will be the first dedicated rewriting engine integrated into Hets (so far, only the rewriting engine of Isabelle is integrated, which however is quite specialized towards higher-order proofs). On the other hand, certain features of the Maude module system, like views, lead to proof obligations that cannot be checked with Maude; Hets will be the suitable framework to prove them, using the above mentioned proof tools. The rest of the paper is organized as follows: after briefly introducing Hets in Section 2 and Maude in Section 3, Section 4 describes the institution we have defined for Maude and the comorphism from this institution to Casl. Section 5 shows how development graphs for Maude specifications are built, and then how they are normalized to deal with freeness constraints. Section 6 illustrates the integration of Maude into Hets with the help of an example, while Section 7 concludes and outlines future work.
2
Hets
The central idea of Hets is to provide a general logic integration and proof management framework. One can think of Hets acting like a motherboard where different expansion cards can be plugged in, the expansion cards here being individual logics (with their analysis and proof tools) as well as logic translations. The benefit of plugging in a new logic and tool such as Maude into the Hets motherboard is the gained interoperability with the other logics and tools available in Hets. The work that needs to be done for such an integration is to prepare both the Maude logic and tool so that it can act as an expansion card for Hets. On the side of the semantics, this means that the logic needs to be organized as an institution [12]. Institutions capture in a very abstract and flexible way the notion of a logical system, by leaving open the details of signatures, models, sentences (axioms) and satisfaction (of sentences in models). The only condition governing the behavior of institutions is the satisfaction condition, stating that truth is invariant under change of notation (or enlargement of context),
which is captured by the notion of signature morphism (which leads to translations of sentences and reductions of models); see [12] for formal details. Indeed, Hets has interfaces for plugging in the different components of an institution: signatures, signature morphisms, sentences, and their translation along signature morphisms. Recently, even (some) models and model reducts have been covered, although this is not needed here. Note, however, that the model theory of an institution (including model reducts and the satisfaction condition) is essential when relating different logics via institution comorphisms. The logical correctness of their use in multi-logic proofs is ensured by model-theoretic means. For proof management, Hets uses development graphs [15]. They can be defined over an arbitrary institution, and they are used to encode structured specifications in various phases of the development. Roughly speaking, each node of the graph represents a theory. The links of the graph define how theories can make use of other theories.

Definition 1. A development graph is an acyclic, directed graph DG = ⟨N, L⟩. N is a set of nodes. Each node N ∈ N is a tuple (Σ^N, Φ^N) such that Σ^N is a signature and Φ^N ⊆ Sen(Σ^N) is the set of local axioms of N. L is a set of directed links, so-called definition links, between elements of N. Each definition link from a node M to a node N is either

– global (denoted M ⟹σ N), annotated with a signature morphism σ : Σ^M → Σ^N, or
– local (denoted M →σ N), again annotated with a signature morphism σ : Σ^M → Σ^N, or
– hiding (denoted M ⟹σ,hide N), annotated with a signature morphism σ : Σ^N → Σ^M going against the direction of the link, or
– free (denoted M ⟹σ,free N), annotated with a signature morphism σ : Σ → Σ^M, where Σ is a subsignature of Σ^M.

Definition 2. Given a node M in a development graph DG, its associated class Mod_DG(M) of models (or M-models for short) is inductively defined to consist of those Σ^M-models m for which

1. m satisfies the local axioms Φ^M,
2. for each N ⟹σ M ∈ DG, m|σ is an N-model,
3. for each N →σ M ∈ DG, m|σ satisfies the local axioms Φ^N,
4. for each N ⟹σ,hide M ∈ DG, m has a σ-expansion m′ (i.e. m′|σ = m) that is an N-model, and
5. for each N ⟹σ,free M ∈ DG, m is an N-model that is persistently σ-free in Mod(N). The latter means that for each N-model m′ and each model morphism h : m|σ → m′|σ, there exists a unique model morphism h# : m → m′ with h#|σ = h.
63
Complementary to definition links, which define the theories of related nodes, we introduce the notion of a theorem link with the help of which we are able to postulate relations between different theories. Theorem links are the central data structure to represent proof obligations arising in formal developments. Again, σ we distinguish between local and global theorem links (denoted by N __ __ __ +3 M σ σ and N _ _ _/ M respectively). We also need theorem links N __ __ __ +3 M (where hide θ
for some Σ, θ : Σ → Σ N and σ : Σ → Σ M ) involving hiding. The semantics of global theorem links is given by the next definition; the others do not occur in our examples and we omit them.
Definition 3. Let DG be a development graph and N , M nodes in DG. σ σ DG implies a global theorem link N __ __ __ +3 M (denoted DG |= N __ __ __ +3 M ) iff for all m ∈ Mod(M ), m|σ ∈ Mod(N ).
3
Rewriting Logic and Maude
Maude is an efficient tool for equational reasoning and rewriting. Methodologically, Maude specifications are divided into a specification of the data objects and a specification of some concurrent transition system, the states of which are given by the data part. Indeed, at least in specifications with initial semantics, the states can be thought of as equivalence classes of terms. The data part is written in a variant of subsorted conditional equational logic. The transition system is expressed in terms of a binary rewriting relation, and also may be specified using conditional Horn axioms. Two corresponding logics have been introduced and studied in the literature: rewriting logic and preordered algebra [13]. They essentially differ in the treatment of rewrites: whereas in rewriting logic, rewrites are named, and different rewrites between two given states (terms) can be distinguished (which corresponds to equipping each carrier set with a category of rewrites), in preordered algebra, only the existence of a rewrite does matter (which corresponds to equipping each carrier set with a preorder of rewritability). Rewriting logic has been announced as the logic underlying Maude [3]. Maude modules lead to rewriting logic theories, which can be equipped with loose semantics (fth/th modules) or initial/free semantics (fmod/mod modules). Although rewriting logic is not given as an institution [6], a so-called specification frame (collapsing signatures and sentences into theories) would be sufficient for our purposes. However, after a closer look at Maude and rewriting logic, we found out that de facto, the logic underlying Maude differs from the rewriting logic as defined in [13]. The reasons are: 1. In Maude, labels of rewrites cannot (and need not) be translated along signature morphisms. This means that e.g. Maude views do not lead to theory morphisms in rewriting logic!
64
M. Codescu et al.
2. Although labels of rewrites are used in traces of counterexamples, they play a subsidiary role, because they cannot be used in the linear temporal logic of the Maude model checker. Specially the first reason completely rules out a rewriting logic-based integration of Maude into Hets: if a view between two modules is specified, Hets definitely needs a theory morphism underlying the view.1 However, the Maude user does not need to provide the action of the signature morphism on labeled rewrites, and generally, there is more than one possibility to specify this action. The conclusion is that the most appropriate logic to use for Maude is preordered algebra [10]. In this logic, rewrites are neither labeled nor distinguished, only their existence is important. This implies that Maude views lead to theory morphisms in the institution of preordered algebras. Moreover, this setting also is in accordance with the above observation that in Maude, rewrite labels are not first-class citizens, but are mere names of sentences that are convenient for decorating tool output (e.g. traces of the model checker). Labels of sentences play a similar role in Hets, which perfectly fits here. Actually, the switch from rewriting logic to preordered algebras has effects on the consequence relation, contrary to what is said in [13]. Consider the following Maude theory: th A is sorts S T . op a : -> S . eq X:S = a . ops h k : S -> T . rl [r] : a => a . rl [s] : h(a) => k(a) . endfth This logically implies h(x) ⇒ k(x) in preordered algebra, but not in rewriting logic, since in the latter logic it is easy to construct models in which the naturality condition r; k(r) = h(r); s fails to hold. Before describing how to encode Maude into Hets we briefly outline the structuring mechanisms used in Maude specifications: Module importation. In Maude, a module can be imported in three different modes, each of them stating different semantic constraints: Importing a module in protecting mode intuitively means that no junk and no confusion are added; importing a module in extending mode indicates that junk is allowed, but confusion is forbidden; finally, importing a module in including mode indicates that no requirements are assumed. Module summation. The summation module operation creates a new module that includes all the information in its summands. 1
If the Maude designers would let (and force) users to specify the action of signature morphisms on rewrite labels, it would not be difficult to switch the Hets integration of Maude to being based on rewriting logic.
Integrating Maude into Hets
65
Renaming. The renaming expression allows to rename sorts, operators (that can be distinguished by their profiles), and labels. Theories. Theories are used to specify the requirements that the parameters used in parameterized modules must fulfill. Functional theories are membership equational specifications with loose semantics. Since the statements specified in theories are not expected to be executed in general, they do not need to satisfy the executability requirements. Views. A view indicates how a particular module satisfies a theory, by mapping sorts and operations in the theory to those in the target module, in such a way that the induced translations on equations and membership axioms are provable in the module. Note that Maude does not provide a syntax for mapping rewrite rules; however, the existence of rewrites between terms must be preserved by views.
4
Relating the Maude and CASL Logics
In this section, we will relate Maude and Casl at the level of logical systems. The structuring level will be considered in the next section. 4.1
Maude
As already motivated in Section 3, we will work with preordered algebra semantics for Maude. We will define an institution, that we will denote Maude pre , which can be, like in the case of Maude’s logic, parametric over the underlying equational logic. Following the Maude implementation, we have used membership equational logic [14]. Notice that the resulting institution M audepre is very similar to the one defined in the context of CafeOBJ [10,6] for preordered algebra (the differences are mainly given by the discussion about operation profiles below, but this is only a matter of representation). This allows us to make use of some results without giving detailed proofs. Signatures of Maude pre are tuples (K, F, kind : (S, ≤) → K), where K is a set (of kinds), kind is a function assigning a kind to each sort in the poset (S, ≤), and F is a set of function symbols of the form F = {Fk1 ...kn →x | ki , k ∈ K} ∪ {Fs1 ...sn →s | si , s ∈ S} such that if f ∈ Fs1 ...sn →s , there is a symbol f ∈ Fkind(s1 )...kind(sn )→kind(s) . Notice that there is actually no essential difference between our putting operation profiles on sorts into the signatures and Meseguer’s original formulation putting them into the sentences. Given two signatures Σi = (Ki , Fi , kindi ), i ∈ {1, 2}, a signature morphism φ : Σ1 → Σ2 consists of a function φkind : K1 → K2 which preserves ≤1 , a function between the sorts φsort : S1 → S2 such that φsort ; kind2 = kind1 ; φkind and the subsorts are preserved, and a function φop : F1 → F2 which maps operation symbols compatibly with the types. Moreover, the overloading of symbol names must be preserved, i.e. the name of φop (σ) must be the same both when mapping the operation symbol σ on sorts and on kinds. With composition defined component-wise, we get the category of signatures.
66
M. Codescu et al.
For a signature Σ, a model M interprets each kind k as a preorder (Mk , ≤), each sort s as a subset Ms of Mkind(s) that is equipped with the induced preorder, with Ms a subset of Ms if s < s , and each operation symbol f ∈ Fk1 ...kn ,k as a function Mf : Mk1 × . . . × Mkn → Mk which has to be monotonic and such that for each function symbol f on sorts, its interpretation must be a restriction of the interpretation of the corresponding function on kinds. For two Σ-models A and B, a homomorphism of models is a family {hk : Ak → Bk }k∈K of preorder-preserving functions which is also an algebra homomorphism and such that hkind(s) (As ) ⊆ Bs for each sort s. The sentences of a signature Σ are Horn clauses built with three types of atoms: equational atoms t = t , membership atoms t : s, and rewrite atoms t ⇒ t , where t, t are F -terms and s is a sort in S. Given a Σ-model M , an equational atom t = t holds in M if Mt = Mt , a membership atom t : s holds when Mt is an element of Ms , and a rewrite atom t ⇒ t holds when Mt ≤ Mt . Notice that the set of variables X used for quantification is K-sorted. The satisfaction of sentences extends the satisfaction of atoms in the obvious way. 4.2
CASL
Casl, the Common Algebraic Specification Language [1,5], has been designed by CoFI, the international Common Framework Initiative for algebraic specification and development. Its underlying logic combines first-order logic and induction (the latter is expressed using so-called sort generation constraints, which express term-generatedness of a part of a model; this is needed for the specification of the usual inductive datatypes) with subsorts and partial functions. The institution underlying Casl is introduced in two steps: first, many-sorted partial first-order logic with sort generation constraints and equality (PCFOL= ) is introduced, and then, subsorted partial first-order logic with sort generation constraints and equality (SubPCFOL= ) is described in terms of PCFOL= . In contrast to Maude, Casl’s subsort relations may be interpreted by arbitrary injections injs,t , not only by subsets. We refer to [5] for details. We will only need the Horn Clause fragment of first-order logic. For freeness (see Sect. 5.1), we will also need sort generation constraints, as well as the second-order extension of Casl with quantification over predicates. 4.3
Encoding Maude in CASL
We now present an encoding of Maude into Casl. It can be formalized as a so-called institution comorphism [11]. The idea of the encoding of M audepre in Casl is that we represent rewriting as a binary predicate and we axiomatize it as a preorder compatible with operations. Every Maude signature (K, F, kind : (S, ≤) → K) is translated to the Casl theory ((S , ≤ , F, P ), E), where S is the disjoint union of K and S, ≤ extends the relation ≤ on sorts with pairs (s, kind(s)), for each s ∈ S, rew ∈ Ps,s for any s ∈ S is a binary predicate and E contains axioms stating that for any kind
Integrating Maude into Hets
67
k, rew ∈ Pk,k is a preorder compatible with the operations. The latter means that for any f ∈ Fs1 ..sn ,s and any xi , yi of sort si ∈ S , i = 1, .., n, if rew(xi , yi ) holds, then rew(f (x1 , . . . , xn ), f (y1 , . . . , yn )) also holds. Let Σi , i = 1, 2 be two Maude signatures and let ϕ : Σ1 → Σ2 be a Maude signature morphism. Then its translation Φ(ϕ) : Φ(Σ1 ) → Φ(Σ2 ) denoted φ, is defined as follows: – for each s ∈ S, φ(s) := ϕsort (s) and for each k ∈ K, φ(k) := ϕkind (k). – the subsort preservation condition of φ follows from the similar condition for ϕ. – for each operation symbol σ, φ(σ) := ϕop (σ). – rew is mapped identically. The sentence translation map for each signature is obtained in two steps. While the equational atoms are translated as themselves, membership atoms t : s are translated to Casl memberships t in s and rewrite atoms of form t ⇒ t are translated as rew(t, t ). Then, any sentence of Maude of the form (∀xi : ki )H =⇒ C, where H is a conjunction of Maude atoms and C is an atom is translated as (∀xi : ki )H =⇒ C , where H and C are obtained by mapping all the Maude atoms as described before. Given a Maude signature Σ, a model M of its translated theory (Σ , E) is reduced to a Σ-model denoted M where: – for each kind k, define Mk = Mk and the preorder relation on Mk is rew; – for each sort s, define Ms to be the image of Ms under the injection injs,kind(s) generated by the subsort relation; – for each f on kinds, let Mf (x1 , .., xn ) = Mf (x1 , .., xn ) and for each f on sorts of result sort s, let Mf (x1 , .., xn ) = injs,kind(s) (Mf (x1 , .., xn )). Mf is monotone because axioms ensure that Mf is compatible with rew. The reduct of model homomorphisms is the expected one; the only thing worth noticing is that hkind(s) (Ms ) ⊆ Ns for each sort s follows from the Casl model homomorphism condition of h. Notice that the model reduct is an isomorphism of categories.
5
From Maude Modules to Development Graphs
We describe in this section how Maude structuring mechanisms described in Section 3 are translated into development graphs. Then, we explain how these development graphs are normalized to deal with freeness constraints. Signature morphisms are produced in different ways; explicitly, renaming of module expressions and views lead to signature morphisms; however, implicitly we also find other morphisms: the sorts defined in the theories are qualified with the parameter in order to distinguish sorts with the same name that will be instantiated later by different ones; moreover, sorts defined (not imported) in parameterized modules can be parameterized as well, so when the theory is
68
M. Codescu et al.
instantiated with a view these sorts are also renamed (e.g. the sort List{X} for generic lists can become List{Nat}). Each Maude module generates two nodes in the development graph. The first one contains the theory equipped with the usual loose semantics. The second one, linked to the first one with a free definition link (whose signature morphism is detailed below), contains the same signature but no local axioms and stands for the free models of the theory. Note that Maude theories only generate one node, since their initial semantics is not used by Maude specifications. When importing a module, we will select the node used depending on the chosen importation mode: – The protecting mode generates a non-persistent free link between the current node and the node standing for the free semantics of the included one. – The extending mode generates a global link with the annotation PCons?, that stands for proof-theoretic conservativity and that can be checked with a special conservativity checker that is integrated into Hets. – The including mode generates a global definition link between the current node and the node standing for the loose semantics of the included one. The summation module expression generates a new node that includes all the information in its summands. Note that this new node can also need a node with its free model if it is imported in protecting mode. The model class of parameterized modules consists of free extensions of the models of their parameters, that are persistent on sorts, but not on kinds. This notion of freeness has been studied in [2] under assumptions like existence of top sorts for kinds and sorted variables in formulas; our results hold under similar hypotheses. Thus, we use the same non-persistent free links described for protecting importation to link these modules with their corresponding theories. Views do not generate nodes in the development graph but theorem links between the node corresponding to the source theory and the node with the free model of the target. However, Maude views provide a special kind of mapping between terms, that can in general map functions of different arity. When this mapping is used we generate a new inner node extending the signature of the target to include functions of the adequate arity. We illustrate how to build the development graph with an example. Consider the following Maude specifications: fmod M1 is sort S1 . op _+_ : S1 S1 -> S1 [comm] . endfm
fmod M2 is sort S2 . endfm
th T is sort S1 . op _._ : S1 S1 -> S1 . eq V1:S1 . V2:S1 = V2:S1 . V1:S1 [nonexec] . endth
mod M3{X :: T} is sort S4 . endm
Fig. 1. Development Graph for Maude Specifications
mod M is
  ex M1 + M2 * (sort S2 to S) .
endm
view V from T to M is
  op _._ to _+_ .
endv
Hets builds the graph shown in Fig. 1, where the following steps take place:
– Each module has generated a node with its name and another, primed one that contains the initial model; both of them are linked with a non-persistent free link. Note that the theory T did not generate this primed node.
– The summation expression has created a new node that includes the theories of M1 and M2, importing the latter with a renaming; since this new node is imported in extending mode, it uses a link with the PCons? annotation.
– There is a theorem link between T and the free (here: initial) model of M. This link is labeled with the mapping defined in the view V.
– The parameterized module M3 includes the theory of its parameter with a renaming that qualifies the sort. Note that these nodes are connected by means of a non-persistent freeness link.

It is straightforward to show:

Theorem 1. The translation of Maude modules into development graphs is semantics-preserving.

Once the development graph is built, we can apply the (logic-independent) calculus rules that reduce global theorem links to local theorem links, which are in turn discharged by local theorem proving [15]. This can be used to prove Maude views, e.g. "natural numbers are a total order." We show in the next section how we deal with the freeness constraints imposed by free definition links.

5.1 Normalization of Free Definition Links
Maude uses initial and free semantics intensively. The semantics of freeness is, as mentioned, different from the one used in Casl in that the free extensions of models are required to be persistent only on sorts and new error elements
can be added to the interpretation of kinds. Attempts to design the translation to Casl in such a way that Maude free links would be translated to the usual free definition links of Casl have been unsuccessful. We thus decided to introduce a special type of link to represent Maude freeness in Casl. In order not to break the development graph calculus, we need a way to normalize these links; the idea is to replace them with a semantically equivalent development graph in Casl. The main idea is to make a free extension persistent by duplicating parameter sorts appropriately, such that the parameter is always explicitly included in the free extension.

For any Maude signature Σ, let us define an extension Σ# = (S#, ≤#, F#, P#) of the translation Φ(Σ) of Σ to Casl as follows:
– S# unites the sorts of Φ(Σ) with the set {[s] | s ∈ Sorts(Σ)};
– ≤# extends the subsort relation ≤ with the pairs (s, [s]) for each sort s and ([s], [s′]) for any sorts s ≤ s′;
– F# adds the function symbols {f : [w] → [s]} for all function symbols on sorts f : w → s;²
– P# adds the predicate symbol rew on all new sorts.

Now, we consider a Maude non-persistent free definition link and let σ : Σ → Σ′ be the morphism labeling it.³ We define a Casl signature morphism σ# : Φ(Σ) → Σ′#: on sorts, σ#(s) := σ_sort(s) and σ#([s]) := [σ_sort(s)]; on operation symbols, we can define σ#(f) := σ_op(f), and this is correct because the operation symbols were introduced in Σ′#; rew is mapped identically.

The normalization of Maude freeness is then illustrated in Fig. 2. Given a non-persistent free definition link M ⇒(σ) N, with σ : Σ → Σ_N, we first take the translation of the nodes to Casl (nodes M′ and N′) and then introduce a new node, K, labeled with Σ_N^#, a global definition link from M′ to M″ labeled with the inclusion ι_N of Σ_N in Σ_N^#, a free definition link from M″ to K labeled with σ#, and a hiding definition link from K to N′ labeled with the inclusion ι_N.⁴ Notice that the models of N′ are Maude reducts of Casl models of K, reduced along the inclusion ι_N.

Fig. 2. Normalization of Maude free links

The next step is to eliminate Casl free definition links. The idea is to use a transformation specific to the second-order extension of Casl to normalize freeness. The intuition behind this construction is that it mimics the quotient term algebra construction; that is, the free model is specified as the homomorphic image of an absolutely free model (i.e. term model). We are going to make use of the following known facts [18]:

² [x1 . . . xn] is defined to be [x1] . . . [xn].
³ In Maude, this would usually be an injective renaming.
⁴ The arrows without labels in Fig. 2 correspond to heterogeneous links from Maude to Casl.
Fact 1. Extensions of theories in Horn form admit free extensions of models.

Fact 2. Extensions of theories in Horn form are monomorphic.

Given a free definition link M ⇒(σ, free) N, with σ : Σ → Σ_N such that Th(M) is in Horn form, replace it with M ⇒(incl) K ⇒(incl, hide) N′, where N′ has the same signature as N, incl denotes inclusions, and the node K is constructed as follows.

The signature Σ_K consists of the signature Σ_M disjointly united with a copy of Σ_M, denoted ι(Σ_M), which makes all function symbols total (let us denote by ι(x) the corresponding symbol in this copy, for each symbol x from the signature Σ_M), and augmented with new operations h : ι(s) →? s, for any sort s of Σ_M, and make : s → ι(s), for any sort s of the source signature Σ of the morphism σ labelling the free definition link. The axioms ψ_K of the node K consist of:
– sentences imposing the bijectivity of make;
– an axiomatization of the sorts in ι(Σ_M) as free types with all operations as constructors, including make for the sorts in ι(Σ);
– homomorphism conditions for h:

  h(ι(f)(x1, . . . , xn)) = f(h(x1), . . . , h(xn))  and  ι(p)(t1, . . . , tn) ⇒ p(h(t1), . . . , h(tn));

– surjectivity of the homomorphism:

  ∀y : s . ∃x : ι(s) . h(x) =ᵉ y

– a second-order formula saying that the kernel of h is the least partial predicative congruence⁵ satisfying Th(M). This is done by quantifying over a predicate symbol for each sort for the binary relation and one predicate symbol for each relation symbol, as follows:

  ∀{Ps : ι(s), ι(s)}_{s∈Sorts(Σ_M)}, {Pp:w : ι(w)}_{p:w∈Σ_M} .
    symmetry ∧ transitivity ∧ congruence ∧ satThM ⟹ largerThenKerH

where symmetry stands for

  ⋀_{s∈Sorts(Σ_M)} ∀x : ι(s), y : ι(s) . Ps(x, y) ⟹ Ps(y, x),

⁵ A partial predicative congruence consists of a symmetric and transitive binary relation for each sort and a relation of appropriate type for each predicate symbol.
transitivity stands for

  ⋀_{s∈Sorts(Σ_M)} ∀x : ι(s), y : ι(s), z : ι(s) . Ps(x, y) ∧ Ps(y, z) ⟹ Ps(x, z),

congruence stands for

  ⋀_{f_{w→s}∈Σ_M} ∀x1 . . . xn : ι(w), y1 . . . yn : ι(w) .
    D(ι(f_{w,s})(x̄)) ∧ D(ι(f_{w,s})(ȳ)) ∧ Pw(x̄, ȳ) ⟹ Ps(ι(f_{w,s})(x̄), ι(f_{w,s})(ȳ))

and

  ⋀_{p:w∈Σ_M} ∀x1 . . . xn : ι(w), y1 . . . yn : ι(w) .
    D(ι(f_{w,s})(x̄)) ∧ D(ι(f_{w,s})(ȳ)) ∧ Pw(x̄, ȳ) ⟹ (Pp:w(x̄) ⇔ Pp:w(ȳ))

where D indicates definedness. satThM stands for

  Th(M)[ =ᵉ / Ps ; p : w / Pp:w ; D(t) / Ps(t, t) ; t = u / Ps(t, u) ∨ (¬Ps(t, t) ∧ ¬Ps(u, u)) ]

where, for a set of formulas Ψ, Ψ[sy1/sy1′ ; . . . ; syn/syn′] denotes the simultaneous substitution of syi′ for syi in all formulas of Ψ (while possibly instantiating the meta-variables t and u). Finally, largerThenKerH stands for

  ⋀_{s∈Sorts(Σ_M)} ∀x : ι(s), y : ι(s) . h(x) =ᵉ h(y) ⟹ Ps(x, y)
  ∧ ⋀_{p:w∈Σ_M} ∀x̄ : ι(w) . ι(p : w)(x̄) ⟹ Pp:w(x̄)

Proposition 1. The models of the nodes N and N′ are the same.
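To convey the intuition behind this construction, here is a small Haskell sketch that instantiates the quotient-term-algebra idea for the list signature used later in Section 6. The representation is hypothetical and only meant to illustrate how the homomorphism h and its kernel relate the absolutely free (term) model to the free model.

-- The absolutely free (term) model over nil, singletons and
-- concatenation; App is a total constructor, as in ι(Σ_M).
data Term = Nil | E Char | App Term Term

-- h maps each term to its value in the intended free model
-- (here: actual character lists).
h :: Term -> String
h Nil       = ""
h (E c)     = [c]
h (App s t) = h s ++ h t

-- The kernel of h: two terms are congruent iff h identifies them.
-- The second-order axiom above characterizes this kernel as the
-- least congruence satisfying the theory of the specification.
kernel :: Term -> Term -> Bool
kernel s t = h s == h t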
6 An Example: Reversing Lists
The example we are going to present is a standard specification of lists with empty lists, composition and reversal. We want to prove that reversing a list twice yields the original list. Since Maude syntax does not support marking sentences of a theory as theorems, in Maude we would normally write a view (PROVEIDEM in Fig. 3, left side) from a theory containing the theorem (REVIDEM) to the module with the axioms defining reverse (MYLISTREV). The first advantage that the integration of Maude into Hets brings is that we can use heterogeneous Casl structuring mechanisms and the %implies annotation to obtain the same development graph in a shorter way – see the right side of Fig. 3. Notice that we made the convention in Hets to have non-persistent freeness for Maude specifications, thus modifying the usual institution-independent semantics of the freeness construct.

For our example, the development calculus rules are applied as follows. First, the library is translated to Casl; during this step, Maude non-persistent free links are normalized. The next step is to normalize Casl free links, using the Freeness rule. We then apply the Normal-Form rule, which introduces normal forms for the nodes with incoming hiding links (introduced at the previous step), and
fmod MYLIST is
  sorts Elt List .
  subsort Elt < List .
  op nil : -> List [ctor] .
  op __ : List List -> List [ctor assoc id: nil] .
endfm

fmod MYLISTREV is
  pr MYLIST .
  op reverse : List -> List .
  var L : List .
  var E : Elt .
  eq reverse(nil) = nil .
  eq reverse(E L) = reverse(L) E .
endfm

fth REVIDEM is
  pr MYLIST .
  op reverse : List -> List .
  var L : List .
  eq reverse(reverse(L)) = L .
endfth

view PROVEIDEM from REVIDEM to MYLISTREV is
  sort List to List .
  op reverse to reverse .
endv

logic Maude
spec PROVEIDEM =
  free
  { sorts Elt List .
    subsort Elt < List .
    op nil : -> List [ctor] .
    op __ : List List -> List [ctor assoc id: nil] .
  }
then
  { op reverse : List -> List .
    var L : List .
    var E : Elt .
    eq reverse(nil) = nil .
    eq reverse(E L) = reverse(L) E .
  }
then %implies
  { var L : List .
    eq reverse(reverse(L)) = L .
  }

Fig. 3. Lists with reverse, in Maude (left) and CASL (right) syntax
then the Theorem-Hide-Shift rule, which moves the target of any theorem link targeting a node with incoming hiding links to the normal form of the latter. Calling Proofs/Automatic then delegates the proof obligation to the normal form node. In this node, we now have a proof goal for a second-order theory. It can be discharged using the interactive theorem prover Isabelle/HOL [17].

We have set up a series of lemmas easing such proofs. First of all, normalization of freeness introduces sorts for the free model which are axiomatized to be the homomorphic image of the absolutely free (i.e. term) model. A transfer lemma (exploiting surjectivity of the homomorphism) enables us to transfer any proof goal from the free model to the absolutely free model. Since the absolutely free model is term generated, we can use induction proofs here. For the case of datatypes with total constructors (like lists), we prove by induction that the homomorphism is total as well. Two further lemmas on lists are proved by induction: (1) associativity of concatenation and (2) the reverse of a concatenation is the concatenation (in reverse order) of the reversed lists. This infrastructure then allows us to demonstrate (again by induction) that reverse(reverse(L)) = L.
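The two auxiliary lemmas and the final goal have direct executable counterparts in Haskell; the following sketch states them as QuickCheck properties. This merely tests the equations on random lists, it does not replace the Isabelle/HOL induction proofs.

import Test.QuickCheck

-- Haskell rendering of the reverse equations from Fig. 3.
rev :: [a] -> [a]
rev []      = []
rev (e : l) = rev l ++ [e]

-- Lemma (1): associativity of concatenation.
prop_assoc :: [Int] -> [Int] -> [Int] -> Bool
prop_assoc xs ys zs = (xs ++ ys) ++ zs == xs ++ (ys ++ zs)

-- Lemma (2): the reverse of a concatenation is the concatenation
-- (in reverse order) of the reversed lists.
prop_revApp :: [Int] -> [Int] -> Bool
prop_revApp xs ys = rev (xs ++ ys) == rev ys ++ rev xs

-- The goal itself: reverse(reverse(L)) = L.
prop_revRev :: [Int] -> Bool
prop_revRev l = rev (rev l) == l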
While proof goals in Horn clause form can often be proved with induction, other proof goals, like the inequality of certain terms or the extensionality of sets, cannot. Here, we need to prove inequalities or equalities with more complex premises, and this calls for the use of the special axiomatization of the kernel of the homomorphism. This axiomatization is rather complex, and we are currently setting up the infrastructure for easing such proofs in Isabelle/HOL.
7 Conclusions and Future Work
We have presented in this paper how Maude has been integrated into Hets, a parsing, static analysis and proof management tool that combines various tools for different specification languages. To achieve this integration, we consider a preordered algebra semantics for Maude and define an institution comorphism from Maude to Casl. This integration allows us to prove properties of Maude specifications such as those expressed in Maude views. We have also implemented a normalization of the development graphs that allows us to prove freeness constraints. We have used this transformation to connect Maude to Isabelle [17], a Higher Order Logic prover, and have demonstrated a small example proof about the reversal of lists. Moreover, this encoding is suited for proofs of e.g. the extensionality of sets, which require first-order logic, going beyond the abilities of existing Maude provers like ITP.

Since interactive proofs are often not easy to conduct, future work will make proving more efficient by adopting automated induction strategies like rippling [7]. We also plan to use the automatic first-order prover SPASS for induction proofs by integrating special induction strategies directly into Hets.

We have also studied possible comorphisms from Casl to Maude. We distinguish whether the formulas in the source theory are confluent and terminating or not. In the first case, which we plan to check with the Maude termination [8] and confluence [9] checkers, we map formulas to equations, whose execution in Maude is more efficient, while in the second case we map formulas to rules. Finally, we also plan to relate Hets' Modal Logic and Maude models in order to use the Maude model checker [3, Chapter 13] for linear temporal logic.

Acknowledgments. We wish to thank Francisco Durán for discussions regarding freeness in Maude, Martin Kühl for cooperation on the implementation of the theory presented here, and Maksym Bortin for help with the Isabelle proofs. This work has been supported by the German Federal Ministry of Education and Research (Project 01 IW 07002 FormalSafe), the German Research Council (DFG) under grant KO-2428/9-1, and the MICINN Spanish project DESAFIOS10 (TIN2009-14599-C03-01).
References

1. Bidoit, M., Mosses, P.D.: Casl User Manual. LNCS (IFIP Series), vol. 2900. Springer, Heidelberg (2004)
2. Bouhoula, A., Jouannaud, J.-P., Meseguer, J.: Specification and proof in membership equational logic. Theoretical Computer Science 236(1-2), 35–132 (2000)
3. Clavel, M., Durán, F., Eker, S., Lincoln, P., Martí-Oliet, N., Meseguer, J., Talcott, C.: All About Maude – A High-Performance Logical Framework. LNCS, vol. 4350. Springer, Heidelberg (2007)
4. Clavel, M., Palomino, M., Riesco, A.: Introducing the ITP tool: a tutorial. Journal of Universal Computer Science 12(11), 1618–1650 (2006); Programming and Languages. Special Issue with Extended Versions of Selected Papers from PROLE 2005: The Fifth Spanish Conference on Programming and Languages
5. Mosses, P.D. (ed.): CASL Reference Manual. LNCS, vol. 2960. Springer, Heidelberg (2004)
6. Diaconescu, R.: Institution-independent Model Theory. Birkhäuser, Basel (2008)
7. Dixon, L., Fleuriot, J.D.: Higher order rippling in IsaPlanner. In: Slind, K., Bunker, A., Gopalakrishnan, G. (eds.) TPHOLs 2004. LNCS, vol. 3223, pp. 83–98. Springer, Heidelberg (2004)
8. Durán, F., Lucas, S., Meseguer, J.: MTT: The Maude Termination Tool (system description). In: Armando, A., Baumgartner, P., Dowek, G. (eds.) IJCAR 2008. LNCS (LNAI), vol. 5195, pp. 313–319. Springer, Heidelberg (2008)
9. Durán, F., Meseguer, J.: A Church-Rosser checker tool for conditional order-sorted equational Maude specifications. In: Ölveczky, P.C. (ed.) WRLA 2010. LNCS, vol. 6381, pp. 69–85. Springer, Heidelberg (2010)
10. Futatsugi, K., Diaconescu, R.: CafeOBJ Report. AMAST Series. World Scientific, Singapore (1998)
11. Goguen, J., Roşu, G.: Institution morphisms. Formal Aspects of Computing 13, 274–307 (2002)
12. Goguen, J.A., Burstall, R.M.: Institutions: Abstract model theory for specification and programming. Journal of the Association for Computing Machinery 39, 95–146 (1992)
13. Meseguer, J.: Conditional rewriting logic as a unified model of concurrency. Theoretical Computer Science 96(1), 73–155 (1992)
14. Meseguer, J.: Membership algebra as a logical framework for equational specification. In: Parisi-Presicce, F. (ed.) WADT 1997. LNCS, vol. 1376, pp. 18–61. Springer, Heidelberg (1998)
15. Mossakowski, T., Autexier, S., Hutter, D.: Development graphs – proof management for structured specifications. Journal of Logic and Algebraic Programming, special issue on Algebraic Specification and Development Techniques 67(1-2), 114–145 (2006)
16. Mossakowski, T., Maeder, C., Lüttich, K.: The Heterogeneous Tool Set. In: Grumberg, O., Huth, M. (eds.) TACAS 2007. LNCS, vol. 4424, pp. 519–522. Springer, Heidelberg (2007)
17. Nipkow, T., Paulson, L.C., Wenzel, M.: Isabelle/HOL. LNCS, vol. 2283. Springer, Heidelberg (2002)
18. Reichel, H.: Initial computability, algebraic specifications, and partial algebras. Oxford University Press, Inc., New York (1987)
Model Refinement Using Bisimulation Quotients

Roland Glück¹, Bernhard Möller¹, and Michel Sintzoff²

¹ Universität Augsburg, Germany
² Université catholique de Louvain, Belgium
Abstract. The paper shows how to refine large-scale or even infinite transition systems so as to ensure certain desired properties. First, a given system is reduced into a smallish, finite bisimulation quotient. Second, the reduced system is refined in order to ensure a given property, using any known finite-state method. Third, the refined reduced system is expanded back into an adequate refinement of the system given initially. The proposed method is based on a Galois connection between systems and their quotients. It is applicable to various models and bisimulations and is illustrated with a few qualitative and quantitative properties.
1 Introduction
This paper extends the work in [8]. There a generic method for refining system models was presented informally. Here it is defined constructively and is then used to ensure optimality properties and temporal properties. Our aim is to refine large-scale system models so as to satisfy a required property. According to the Concise Oxford Dictionary, "to refine something" is "to free it from defects". In our context, refinement is the elimination of unsuitable transitions and states, and is thus a form of control refinement or control synthesis. To realize it, we use a well known approach to problem solving: reduce –or abstract– the problem, solve the reduced problem, and expand the obtained solution. These three steps are implemented as follows. First, the given model is reduced to a finite bisimulation quotient [1]; we use bisimulations because they preserve many properties, including quantitative ones, and they can be computed efficiently. Second, the design problem is solved for the reduced model by some known finite-state technique. Third, the refined reduced model is expanded back into a satisfactory refinement of the large-scale model given initially. The expansion function is a right-inverse of the quotient function that yields largest refined models. It results from a Galois connection. Consider for instance the control problem of restricting a possibly infinite transition graph G in such a way that every path starting from any node is a path of shortest length leading into a set of given nodes. The reduced model is a smaller finite quotient graph of G. It is refined into an optimal sub-graph using a known finite-state algorithm. The small optimal sub-graph is then expanded back into a full-fledged optimal sub-graph of G. This problem has been treated by dedicated techniques [15]. Here we extend that work to a generic method that allows tackling different problems in a uniform way. The novelty lies in the
particular combination and adaptation of results known from control synthesis and abstraction-based verification to yield an abstraction-based technique for refinement that ensures qualitative as well as quantitative properties and works for a family of infinite-state systems. In the sequel, first we present generic definitions for classes of usual models and classes of related bisimulations. Second the proposed method is elaborated with the help of a Galois connection. Third a few applications are summarized. At the end we compare with related work and conclude.
2 Model Classes and Bisimulation Classes
We introduce a general pattern for typical model and bisimulation classes. The quotient operation, used to collapse models, is also defined generically. This preliminary section may be skimmed over in a first reading.

2.1 Prerequisites and Notational Conventions
Our technical development is based on binary and ternary relations; therefore we recall some definitions for them and fix some notation. The symbol ⊆ denotes non-strict inclusion between sets. Instead of bi-implication ⇔ we write ≡. Definitional equality and equivalence are denoted by ≐ and ≡̇, resp.
– For a relation R ⊆ Q × Q′, x ∈ Q, x′ ∈ Q′, and B, C ⊆ Q, ℬ ⊆ 2^Q:
  R◦ ≐ {(y, x) | (x, y) ∈ R} is the converse of R,
  R(x) ≐ {x′ | xRx′},  R(B) ≐ ⋃{R(x) | x ∈ B},
  xRx′ ≡̇ R(x, x′) ≡̇ (x, x′) ∈ R,
  Im(R) ≐ R(Q),  Dom(R) ≐ R◦(Q′);  R is total iff Dom(R) = Q.
– Domain restriction is defined as R↓B ≐ R ∩ (B × Q′). Two properties are

  (R↓B)↓C = R↓(B ∩ C) ,   R↓(⋃_{B∈ℬ} B) = ⋃_{B∈ℬ} (R↓B) .    (1)

– The composition of two relations R ⊆ Q × Q′, R′ ⊆ Q′ × Q″ is R ; R′ where x(R ; R′)x″ ≡̇ ∃x′ ∈ Q′ : xRx′ ∧ x′R′x″.
– Given sets S, S′, we write S − S′ for the set difference of S and S′. Moreover, by S² we mean S × S, and χS is some predicate that characterizes S.
– For the application of a function f to an argument e we may write f e instead of f(e).
– The set of Booleans is Bool ≐ {false, true}.

2.2 Model Classes
We present model structures that describe discrete-time dynamical systems. They comprise graphs and auxiliary components. The graph nodes and edges represent system states and transitions, resp. The auxiliary components may
be maps labelling states or transitions. Quotients of models are defined in a canonical way given equivalences between states.

Graphs. A graph is a pair (Q, T) composed of a set Q and an edge relation T ⊆ Q². A graph (Q′, T′) is a subgraph of a graph (Q, T) if Q′ ⊆ Q ∧ T′ ⊆ T. We then write (Q′, T′) ⊆ (Q, T). All paths in a subgraph of G are paths in G. For a graph G let SubGr(G) denote the set of all subgraphs of G. The following result is straightforward from the definitions. It generalizes the supremum definition (Q, T) ∪ (Q′, T′) ≐ (Q ∪ Q′, T ∪ T′).

Lemma 2.1 (Complete Lattice of Subgraphs). Given a graph G, the partial order (SubGr(G), ⊆) is a complete lattice where the supremum of a set of graphs is formed by taking componentwise union.

Graph-Based Models. A (graph-based) model M is a structure G or (G, A) where G is a graph (Q, T), denoted by graph M, and A is a tuple (A1, . . . , An). Each (auxiliary) component Ai is a mapping from Q or T into an external set Si defined independently of G. If Si = Bool then Ai represents a subset of Q or T. If n = 1 we omit the tuple parentheses around A and write M = (G, A1). A model M is infinite if Q is infinite.

The considered models are relatively simple. The graph of a model can be seen as its skeleton. A subset-type component Ai may, e.g., represent a set of target nodes. Other mappings Ai can be used to describe edge labels, e.g. weights, or node labels, e.g. node inputs or outputs in an import network. Kripke structures are models which involve a mapping from nodes into a set of atomic propositions (e.g. [2]). Of course, a combination of both node and edge labels is possible.

For a decorated model identifier such as M′ or M̃ it is understood that all its constituents carry the same decoration. E.g., when talking about a model M′ it is understood that M′ = (G′, A′), and similarly for G′ and A′. Various operations Ψ on models M = ((Q, T), (A1, . . . , An)) are determined by the definitions of Ψ Q, Ψ T, and Ψ Ai for each component Ai, together with the distribution rule Ψ M ≐ ((Ψ Q, Ψ T), (Ψ A1, . . . , Ψ An)). A model class M is some set of models which have the same structure, i.e., the same number and types of components. An M-model is an element of M.

Quotient Operation. Consider a model M = ((Q, T), A) and an equivalence E ⊆ Q². The quotient operation "/" serves to construct quotient models and is defined first for states and transitions, while the auxiliary components Ai will be treated in the following paragraph. For any x, x′ ∈ Q, let x/E ≐ E(x) and (x, x′)/E ≐ (x/E, x′/E). For any U ⊆ Q or U ⊆ T, let U/E ≐ {u/E | u ∈ U}. Thus T/E = {(x/E, x′/E) | (x, x′) ∈ T} ⊆ (Q/E)². We say that E is finitary if Q/E is finite.

Model Compatibility and Canonical Quotient Models. An equivalence E on a model M = (G, A) is called M-compatible if each map Ai satisfies ∀u, u′ ∈ Dom(Ai) : u/E = u′/E ⇒ Ai(u) = Ai(u′). Relationally this can be expressed as E ∩ Dom(Ai)² ⊆ Ai ; Ai◦.
Let Compat(M) denote the set of M-compatible equivalences. For any E ∈ Compat(M), we define the map Ai/E by ∀u ∈ Dom(Ai) : (Ai/E)(u/E) ≐ Ai(u). The (canonical) quotient (model) of M by E is M/E as defined by the above distribution rule. A model class may contain models M and quotients M/E.

Labelled Transition Systems. The classical labelled transition systems, a.k.a. LTS, have the form S = (Q, H, TH) where H is a finite set of transition labels and TH ⊆ Q × H × Q [1,5]. For each h ∈ H, let Th ≐ {(x, x′) | (x, h, x′) ∈ TH}. Then S is easily transformed into the model MS = ((Q, T), L), where T = ⋃_{h∈H} Th, L : T → 2^H and ∀t ∈ T : L(t) ≐ {h | t ∈ Th}. Thus S amounts to a structured presentation of the model MS.

2.3 Bisimulation Classes
Bisimulations prove useful because they preserve many properties of system dynamics. Different classes of bisimulations are associated with different properties and different classes of models. Bisimulation equivalences may determine drastically reduced quotients and may even downsize some infinite models into finite ones. They can be computed in polynomial time in the case of finite models. Therefore we treat bisimulation equivalences in greater detail.

2.3.1 Bisimulations
We briefly review the classical definition of bisimulations, e.g. [14], and recall an equivalent form that is more suitable for our purposes. A (partial) simulation between two models M, M′ is a relation R ⊆ Q × Q′ such that ∀x, y ∈ Q, x′ ∈ Q′ : xTy ∧ xRx′ ⇒ ∃y′ ∈ Q′ : x′T′y′ ∧ yRy′. A bisimulation between models M, M′ is a relation R ⊆ Q × Q′ such that R and R◦ are simulations between M, M′ and M′, M, resp. For actually computing bisimulations the following equivalent characterisation is fundamental (see the preface in [13]): R is a bisimulation between M, M′ iff R ⊆ F_basic^(M,M′)(R) where

  F_basic^(M,M′) ≐ λR : Q × Q′ . { (x, x′) | (∀y : xTy ⇒ ∃y′ : x′T′y′ ∧ yRy′) ∧ (∀y′ : x′T′y′ ⇒ ∃y : xTy ∧ yRy′) } .

The above condition characterizes R as an expanded element, a.k.a. a post-fixpoint, of the isotone function F_basic^(M,M′). Since the set of all relations is a complete lattice under the ⊆ ordering, by the Knaster-Tarski theorem that function has a greatest fixpoint. The latter is called the coarsest bisimulation between M, M′, since it coincides with the union of all bisimulations between M, M′. It can be computed by iterating F_basic^(M,M′), starting with the universal relation.

2.3.2 Bisimulation Classes and Generators of Bisimulation Maps
We abstract a bit from the above definitions, since we aim at a generic model refinement technique. From now on, the bisimulations introduced as post-fixpoints in § 2.3.1 are called basic bisimulations and their defining function is F_basic^(M,M′).
Families and Classes of Bisimulations. Let M, M′ be two models from an arbitrary model class. The set of all basic bisimulations for (M, M′) is denoted by B_basic^(M,M′). A bisimulation family B^(M,M′) is a subset of B_basic^(M,M′) that is closed under arbitrary unions. This subset may be defined by auxiliary constraints – e.g. equalities between node labels – which ensure the model compatibility of bisimulation equivalences (§ 2.2). Given a model class M and a definition of the family B^(M,M′) for all M, M′ ∈ M, the (M-)bisimulation class B is the set ⋃_{M,M′∈M} B^(M,M′).

Bisimulation Maps. As recalled in § 2.3.1, basic bisimulation families can be defined using maps that generate basic bisimulations. A number of specialized bisimulation families can similarly be defined using functions between relations over states. Namely, a B^(M,M′)-bisimulation map is an isotone function F_B^(M,M′) : 2^(Q×Q′) → 2^(Q×Q′) such that B^(M,M′) = {R | R ⊆ F_B^(M,M′)(R)}. The function F_B^(M,M′) may be defined by a term λR : Q×Q′ . G, where R, M, M′ may occur free in G. Moreover the type of the parameter R may be strengthened as follows. Assume G = W ∩ G′ where W ⊆ Q×Q′ is defined independently of R. Then F_B^(M,M′) may be given as λR : W . G′. This formulation allows simplifying the computation of the greatest fixpoint of F_B^(M,M′).

Iterative Construction of Coarsest Bisimulations. The coarsest basic bisimulation for M, M′ is the greatest fixpoint of the map F_basic^(M,M′). It can be obtained as GFP(F_basic^(M,M′)) ≐ ⋂_{n∈ℕ} (F_basic^(M,M′))^n(Q×Q′); see § 2.3.1 and [5,14]. Similarly, if F_B^(M,M′) = λR : W . G′ is a B^(M,M′)-bisimulation map then the coarsest B^(M,M′)-bisimulation is GFP(F_B^(M,M′)) = ⋂_{n∈ℕ} (F_B^(M,M′))^n(W).
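On a finite model the iterative construction can be transcribed almost literally. The following Haskell sketch computes the coarsest basic bisimulation of a single model by iterating F_basic from the universal relation; it is a naive rendering with assumed list-based representations, meant for illustration only, not an efficient partition-refinement algorithm such as the one in [11].

type Rel a = [(a, a)]

-- One application of F_basic for states qs and transition relation t.
fBasic :: Eq a => [a] -> Rel a -> Rel a -> Rel a
fBasic qs t r =
  [ (x, x') | x <- qs, x' <- qs
            , and [ or [ (x', y') `elem` t && (y, y') `elem` r | y' <- qs ]
                  | y <- qs, (x, y) `elem` t ]
            , and [ or [ (x, y) `elem` t && (y, y') `elem` r | y <- qs ]
                  | y' <- qs, (x', y') `elem` t ] ]

-- Iterate from the universal relation until a fixpoint is reached;
-- the iterates decrease, so termination is guaranteed on finite qs.
coarsest :: Eq a => [a] -> Rel a -> Rel a
coarsest qs t = go [ (x, y) | x <- qs, y <- qs ]
  where go r = let r' = fBasic qs t r in if r' == r then r else go r'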
Map Generators. A simple additional generalization abstracts bisimulation maps from particular models and thus determines a generic bisimulation map for a whole set of models having the same structure. Namely, given a model class M and a bisimulation class B, an (M-bisimulation) map generator F_B is a function that assigns a B-bisimulation map F_B^(M,M′) to any M, M′ ∈ M. Hence F_B ≐ λM, M′ : M . F_B^(M,M′), viz. F_B(M, M′) = F_B^(M,M′). So the characteristic predicate χ_B can be given by χ_B(R) ≡̇ ⋀_{M,M′∈M} R ⊆ F_B(M, M′)(R). Examples for map generators will be presented in § 2.4 and § 4.

2.3.3 Conventions
We fix a few further notational conventions and abbreviations on the basis of classical notions. Let B be a bisimulation class and assume M, M′ belong to the same model class.
– A relation R is a B-bisimulation between M and M′ if R ∈ B^(M,M′).
– The models M, M′ are B-bisimilar if ∃R : R ∈ B^(M,M′).
– We write B^(M), F_B^(M), F_B(M′) for B^(M,M), F_B^(M,M), F_B(M′, M′), resp.
– A relation R ⊆ Q² is a B-bisimulation for M if R ∈ B^(M).
2.3.4 Bisimulation Equivalences and Model Reductions
We recall two useful properties (see e.g. [1,2]) and introduce quotient models based on coarsest bisimulations. One fixed but arbitrary model class containing models and their quotients is understood.

Lemma 2.2 (Bisimulation Equivalences). Consider a model M, a bisimulation class B, and E ∈ Compat(M).
1. The coarsest B-bisimulation for M is an equivalence.
2. If E is a B-bisimulation for M then M/E and M are B-bisimilar.

For a specific bisimulation class, these properties are established by the proofs of Lemma 7.8 and Theorem 7.14, resp., in [1]. These proofs can be reused for other auxiliary components and thus for other bisimulation classes. A bisimulation class B is M-compatible if, for all M ∈ M, the coarsest B-bisimulation for M is M-compatible; see Part 1 above and § 2.2.

Reducers and Reductions of Models. The coarsest B-bisimulation for M determines the least number of equivalence classes of any bisimulation in B^(M). It is called the (strongest B-)reducer of M and is denoted by Red_B(M). Accordingly the quotient model M/Red_B(M) is called the (strongest B-)reduction of M. For sufficiently regular infinite models, called well-structured [10], the reducers are finitary and hence the reductions are finite.

2.4 Algorithms Generating Bisimulation Equivalences
A bisimulation algorithm BisimAlgo_B for an M-map-generator F_B is an algorithm which, given M ∈ M, yields the B-reducer Red_B(M), if finitary. So it computes the function λM : M . GFP(F_B M); see § 2.3.2. A few such algorithms are briefly recalled hereafter; all are derived from a fundamental algorithm for labelled transition systems in which partitions represent equivalences. In this section models are finite unless indicated otherwise.

1. Labelled Transition Systems. Let MH be an LTS (Q, H, TH) (§ 2.2). Recall that xT_h y ≡ (x, h, y) ∈ TH. The basic bisimulation map generator F_basic^LTS is akin to F_basic (§ 2.3.2) and is defined as follows [5,14]:

  F_basic^LTS ≐ λMH . λE : Q² . {(x, x′) | ∀h ∈ H : (∀y : xT_h y ⇒ ∃y′ : x′T_h y′ ∧ yEy′) ∧ (∀y′ : x′T_h y′ ⇒ ∃y : xT_h y ∧ yEy′) }.

A bisimulation algorithm BisimAlgo_basic^LTS is found in [11].

2. Basic Maps for Models. A bisimulation algorithm BisimAlgo_basic is obtained in two steps. First, the graph (Q, T) of any model M is transformed into the LTS MH = (Q, {h}, {(x, h, y) | xTy}), where H = {h} consists of an arbitrary single label h (§ 2.2). Thus the basic bisimulations for M are those for MH. Second, Case 1 above is applied.

3. Maps with an Independent Auxiliary Equivalence. Consider a model class M and a map generator F_basic,W ≐ λM : M . λE : W . F_basic(E) (§ 2.3.2). A
bisimulation algorithm BisimAlgo_basic,W is obtained by applying BisimAlgo_basic from Case 2 where the initial partition {Q} is replaced by Q/W [1,5].

4. Maps with a Dependent Auxiliary Equivalence. Consider the class Mg of models ((Q, T), g) where g : T → S for a given set S. Let the map generator be

  F_g ≐ λM : Mg . λE : Q² . W_g(E) ∩ F_basic(E) ,
  W_g(E) ≐ {(x, x′) | ∀y, y′ : xTy ∧ x′Ty′ ∧ yEy′ ⇒ g(x, y) = g(x′, y′)}.

Case 3 is inapplicable because W_g depends on E. A bisimulation algorithm BisimAlgo_g can be obtained as follows. First, any Mg-model M is transformed into the LTS MH = (Q, H, TH) such that H = Im(g) and TH = {(x, g(x, y), y) | xTy} (§ 2.2). Thus the bisimulations generated by F_g for M are the basic bisimulations for MH. Second, Case 1 is applied. Bisimulation algorithms can be defined similarly for other models which include mappings akin to g.

5. Well-Structured Infinite Models. Since their reducers are finitary equivalences (§ 2.3.4), a symbolic variant of Case 2, 3 or 4 can be used [10].
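The preprocessing step of Case 4 is a one-liner; the following Haskell sketch (with an assumed list-based representation) shows how the edge-cost map g becomes the label component of an LTS, so that the basic LTS algorithm of Case 1 applies.

type Edge a = (a, a)

-- Turn an edge-labelled model into an LTS whose labels are the
-- values of g on the edges.
toLTS :: [Edge a] -> (Edge a -> l) -> [(a, l, a)]
toLTS t g = [ (x, g (x, y), y) | (x, y) <- t ]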
3 Generic Refinement Using Finite Abstract Models
As outlined in § 1, a given model is abstracted into a finite quotient model and the latter is refined into a model that must be expanded back to a full-fledged model refining the given one. In this main section, first an adequate expansion operation is constructed. Second, we define a class of formulae that are preserved under expansion. Third, a refinement algorithm is presented. As in § 2.3.4, one fixed but arbitrary model class M is understood. Likewise, an M-compatible bisimulation class B is understood.

3.1 Construction of an Adequate Expansion Operation
To ensure consistency, expansion needs to be an approximate inverse of quotient, which is not invertible (§ 2.2). Moreover refined models should include a maximum of useful states and transitions. Expansion should thus generate the largest possible models. Therefore, to ensure invertibility and maximality, the quotient and expansion operations must be adequately restricted. Fortunately, expansion, restricted quotient, and restricted expansion can easily be developed using a Galois connection between models and their quotients.

3.1.1 A Brief Reminder about Galois Connections
A pair (F, G) of total functions F : A → B and G : B → A between pre-orders (A, ≤A) and (B, ≤B) is called a Galois connection between A and B iff

  ∀x ∈ A : ∀y ∈ B : F(x) ≤B y ≡ x ≤A G(y) .    (2)
Then F is called the lower, G the upper adjoint of (F, G). In particular a pair (F, F◦) of mutually inverse functions forms a Galois connection. We summarize the most important results about Galois connections for our purposes (see e.g. [4]), omitting the indices A, B and some symmetric properties.

Proposition 3.1 (Closures and Adjoints)
1. F preserves all existing suprema and G preserves all existing infima.
2. G ◦ F and F ◦ G are a closure and a kernel operator, resp. The set of G ◦ F-closed elements, i.e., the set of fixpoints of G ◦ F, is the image set G(B).
3. The adjoints of a Galois connection determine each other uniquely.

By this latter fact, one adjoint allows defining the other. A sufficient condition for the existence of an upper adjoint is given in the following central proposition, which also characterizes the upper adjoint in terms of the lower one.

Proposition 3.2 (Upper Adjoints and Image-Maximal Inverses)
1. Assume that (A, ≤A) and (B, ≤B) are complete lattices. Then every function F : A → B that preserves all suprema has an upper adjoint G given by

  ∀y ∈ B : G(y) = sup{x ∈ A | F(x) ≤ y} .    (3)

2. Consider a Galois connection (F, G) between preorders (A, ≤A) and (B, ≤B). Let f ≐ F↓G(B) and g ≐ G↓F(A) be the restrictions of F and G to the respective image sets. Then f and g are inverses of each other such that

  (Maximality)    ∀y ∈ F(A) : g(y) = sup{x ∈ A | F(x) = y} .    (4)
The function inverse g is called image-maximal in order to highlight maximality. We will apply Prop. 3.2 for the case where F is the quotient operation. The details of the proofs to follow may be skipped in a first reading.

3.1.2 The Complete Lattice of Submodels of a Model
An operation restricting models serves to define lattices of submodels on the basis of lattices of subgraphs (cf. § 2.2).

Model Restriction and Canonical Submodels. Consider a model M and a graph G′ ⊆ graph M. The restriction operation "⇓" extends domain restriction ↓ of functions (§ 2.1) to the case of models. We first set P⇓G′ ≐ P ∩ Q′ for P ⊆ Q and S⇓G′ ≐ S ∩ T′ for S ⊆ T. For a map Ai, let Ai⇓G′ ≐ Ai↓(Dom(Ai)⇓G′). Then the (canonical) submodel of a model M induced by G′ is M⇓G′ as defined by the distribution rule of § 2.2. We establish three useful properties of restriction.

Lemma 3.3 (Composition of Model Restrictions)
1. graph(M⇓G′) = G′ and M = M⇓graph M.
2. If G′ ⊆ graph M then Dom(Ai⇓G′) = Dom(Ai)⇓G′.
3. If G′ ⊆ graph M and G″ ⊆ G′ then (M⇓G′)⇓G″ = M⇓G″.
Proof. In calculations, "⦃ reason ⦄" can be read as "because (of) reason".
1. The first assertion is immediate from the definition. For the second one we observe that Ai⇓(graph M) = Ai↓(Dom(Ai)⇓(Q, T)) = Ai↓Dom(Ai) = Ai.
2. Immediate from the definition.
3. First, (M⇓G′)⇓G″ = M⇓G″ is well defined, since by the assumption and Part 1 we have G″ ⊆ G′ = graph(M⇓G′). Second, given M = (G, A), we treat G = (Q, T) and A in turn. For P ⊆ Q, we calculate (P⇓G′)⇓G″ = (P ∩ Q′) ∩ Q″ = P ∩ Q″ = P⇓G″. Likewise for S ⊆ T. Then, for Di ≐ Dom(Ai), Di′ ≐ Di⇓G′, and Ai′ ≐ Ai↓Di′, we derive

  (Ai⇓G′)⇓G″
  = Ai′⇓G″                    ⦃ LHS of the thesis and def. of ⇓ ⦄
  = Ai′↓(Di′⇓G″)              ⦃ def. of ⇓ and Part 2 above ⦄
  = Ai′↓(Di⇓G″)               ⦃ calculation for P or S ⦄
  = Ai↓((Di⇓G′) ∩ (Di⇓G″))    ⦃ def. of Ai′ and (1) ⦄
  = Ai↓(Di⇓G″) = Ai⇓G″        ⦃ G″ ⊆ G′ and def. of ⇓ ⦄
Submodel Relation. Using restriction we define the submodel relation ⊆ by

  M′ ⊆ M ≡̇ ∃G′ ⊆ graph M : M′ = M⇓G′

where M, M′ ∈ M. If M′ ⊆ M we also say that M′ refines M. For a model M let Sub(M) ≐ {M′ | M′ ⊆ M}. In case M is infinite, Sub(M) is infinite too. The submodel – or refinement – relation has pleasant properties.

Lemma 3.4 (Complete Lattices of Submodels). Consider any M ∈ M.
1. For M′, M″ ∈ Sub(M) we have M″ ⊆ M′ ≡ graph M″ ⊆ graph M′.
2. The relation ⊆ between models in Sub(M) is a partial order.
3. (Sub(M), ⊆) is a complete lattice where the supremum of a set of submodels is componentwise union.

Proof. 1. Assume M′ = M⇓G′ and M″ = M⇓G″ for some G′, G″ ⊆ graph M. Then by Lemma 3.3.1 we have graph M′ = G′ and graph M″ = G″.

(⇒)
  M″ ⊆ M′
  ⇒ M″ = M′⇓G‴ for some G‴ ⊆ G′   ⦃ LHS of the thesis and def. of ⊆ ⦄
  ⇒ G″ = graph M″ = G‴ ⊆ G′        ⦃ Lemma 3.3.1 ⦄

(⇐)
  G″ ⊆ G′
  ⇒ M′⇓G″ = (M⇓G′)⇓G″ = M⇓G″ = M″  ⦃ RHS of the thesis; unfold M′, Lemma 3.3.3 and fold M″ ⦄
  ⇒ M″ ⊆ M′                         ⦃ def. of ⊆ ⦄
2. This is immediate from Part 1, since ⊆ is a partial order on graphs. 3. By Parts 1 and 2 the partial orders (SubGr(graph M ), ⊆) and (Sub(M ), ⊆) are order-isomorphic. The former is a complete lattice by Lemma 2.1.
3.1.3 Expansion as an Upper Adjoint of Quotient
Expansion is specified using Prop. 3.2.1 and then is expressed constructively.

Specification of the Expansion Operation. We first study the interaction between the submodels of a model M and those of a quotient M/E. The equivalence E need not be a bisimulation.

Lemma 3.5 (Sub-Quotients). Let E ∈ Compat(M).
1. ∀M′ ∈ Sub(M) : E ∈ Compat(M′), i.e., E ∩ Dom(Ai′)² ⊆ Ai′ ; (Ai′)◦.
2. The quotient operation /E : Sub(M) → Sub(M/E) preserves all suprema.

Proof. Let Di ≐ Dom(Ai) and Di′ ≐ Dom(Ai′).
1. Consider an M′ ∈ Sub(M), say M′ = M⇓G′ for some G′ ⊆ graph M. By definition of ⇓ we have Ai′ ⊆ Ai and thus Di′ ⊆ Di. Hence

  E ∩ (Di′)²
  = E ∩ (Di)² ∩ (Di′)²                     ⦃ LHS of the thesis and Di′ ⊆ Di ⦄
  ⊆ (Ai ; Ai◦) ∩ (Di′)²                    ⦃ E ∈ Compat(M) ⦄
  = (Ai↓Di′) ; (Ai↓Di′)◦ = Ai′ ; (Ai′)◦    ⦃ relation algebra and def. of ⇓ ⦄
2. By a straightforward calculation.
Now we can specify the expansion operation \E as an upper adjoint, thanks to Part 2 and Prop. 3.2.1, where A = Sub(M), B = Sub(M/E), F = /E, G = \E. Namely, for any M ∈ M and E ∈ Compat(M),

  ∀M′ ∈ Sub(M), N ∈ Sub(M/E) : M′ ⊆ N\E ≡̇ M′/E ⊆ N .    (5)

The set Sub(M/E)\E of E-closed submodels of M (see Prop. 3.1.2) is denoted by ClSub_E(M). If M′ ∈ ClSub_E(M) then ∀t ∈ T′, t′ ∈ T : t′/E = t/E ⇒ t′ ∈ T′.

Computable Form of the Expansion Operation. Equation (3) for G = \E is brought into a form that for finite M/E is computable symbolically (§ 2.3.4).

Lemma 3.6 (Constructive Expansion). Given any E ∈ Compat(M) and M_E ∈ Sub(M/E), we have M_E\E = M⇓Ĝ for the graph

  Ĝ ≐ (Q̂, T̂) ≐ ( ⋃ Q_E , ⋃_{(X,Y)∈T_E} (X × Y) ∩ T ) .

Proof. By (3) it suffices to show that M⇓Ĝ = sup{M′ ∈ Sub(M) | M′/E ⊆ M_E}. Then Lemma 3.4.1 reduces this to proving Q̂ = sup Z where Z ≐ {P | P/E ⊆ Q_E}, and similarly for T̂. Let us detail the calculation for Q̂; that for T̂ is analogous.

First, clearly Q̂/E = (⋃ Q_E)/E = Q_E ⊆ Q_E and hence Q̂ ∈ Z. Second, for any P ∈ Z, we deduce P ⊆ Q̂:

  P/E ⊆ Q_E
  ⇒ ⋃(P/E) ⊆ ⋃ Q_E    ⦃ P/E ⊆ Q_E and isotony of ⋃ ⦄
  ⇒ P ⊆ Q̂              ⦃ P = ⋃(P/E) and def. of Q̂ ⦄
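For finite graphs the constructive expansion of Lemma 3.6 can be transcribed directly. In the following Haskell sketch equivalence classes are represented as lists of states and a quotient subgraph as a pair of classes and class-level edges; this is an illustrative encoding we chose for the sketch, not the symbolic representation needed for infinite models.

type Graph a = ([a], [(a, a)])

-- expand m (qE, tE) realizes Ĝ = (⋃ Q_E, ⋃_{(X,Y)∈T_E} (X × Y) ∩ T):
-- the expanded state set is the union of the classes, and an edge of
-- the original graph survives iff its endpoints lie in some class
-- pair (X, Y) kept in the quotient.
expand :: Eq a => Graph a -> ([[a]], [([a], [a])]) -> Graph a
expand (_, t) (qE, tE) =
  ( concat qE
  , [ (x, y) | (xs, ys) <- tE
             , (x, y) <- t, x `elem` xs, y `elem` ys ] )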
3.1.4 Restrictions of the Quotient and Expansion Operations
Using Prop. 3.2.2 we first derive f and g from /E and \E, resp. Second, we formalize M as a parameter and instantiate E to a coarsest bisimulation (cf. Lemma 2.2.1).

Restricted Expansion as Maximal Inverse of Restricted Quotient. We want to obtain an isomorphism between certain classes of models. To this end we define the following restrictions of /E and \E for any M ∈ M, E ∈ Compat(M):

  ∀M′ ∈ ClSub_E(M) : Shrink_{M,E} M′ ≐ M′/E ,   ∀N ∈ Sub(M/E) : Grow_{M,E} N ≐ N\E ,

and Shrink_{M,E} : ClSub_E(M) → Sub(M/E), Grow_{M,E} : Sub(M/E) → ClSub_E(M). Indeed \E : B → A entails A ∩ (B\E) = B\E = ClSub_E(M), and M/E ∈ Sub(M)/E entails B ∩ (A/E) = Sub(M/E) ∩ (Sub(M)/E) = Sub(M/E). The following maximality property follows from (5) and Prop. 3.2.2.

Lemma 3.7. Grow_{M,E} is the image-maximal inverse of Shrink_{M,E}.

Generic Forms of Restricted Quotient and Expansion. Let ClSub(M) ≐ ClSub_{Red(M)}(M) and SubRed(M) ≐ Sub(M/Red(M)). Here Red(M) stands for Red_B(M) (cf. § 2.3.4) given the understood bisimulation class B. We define

  ∀M ∈ M, M′ ∈ ClSub(M) : (Shrink M) M′ ≐ M′/Red(M) ,    (6)
  ∀M ∈ M, N ∈ SubRed(M) : (Grow M) N ≐ N\Red(M) .    (7)

The typing is Shrink : (M : M) → (ClSub(M) → SubRed(M)), Grow : (M : M) → (SubRed(M) → ClSub(M)). The reduction of any M ∈ M is (Shrink M) M. Since Grow M = Grow_{M,Red(M)} and Shrink M = Shrink_{M,Red(M)}, Lemma 3.7 entails a parametrized form of maximality.

Proposition 3.8 (Maximal Inverse). For any M-model M, the function Grow M is the image-maximal inverse of the function Shrink M.

Therefore (Grow M) N = sup{M′ ∈ Sub(M) | M′/Red(M) = N}; see (4). Moreover M ∈ ClSub(M) since M = (Grow M ◦ Shrink M) M.

3.2 Preservation of Satisfactory Refinements under Expansion
Abstract Model Classes. We assume that M is partitioned into M↓ and M↑ such that M↑ = {M/E | M ∈ M↓ ∧ E ∈ Compat(M)}. Thus quotients of quotients are not considered. In this case M is called a two-level model class. Let the class N↑ ⊆ M↑ of bisimulation quotients be N↑ ≐ ⋃_{M∈M↓} SubRed(M). It is said to abstract the quotient-free class M↓.

Satisfactory Refinements. Let ϕ be a predicate over states in M-models. The refinement – or submodel – relation ⊆ : M² (§ 3.1.2) is strengthened to the ϕ-(satisfactory) refinement relation ⊑ϕ : (M↓)² ∪ (N↑)², given by

  M′ ⊑ϕ M ≡̇ (M′ ⊆ M) ∧ (M′ |= ϕ) .    (8)
Certain satisfactory refinement relations can be expanded from (N↑)² to (M↓)².

Admissibility. The predicate ϕ is M-admissible – see examples in § 4 – if

  ∀M ∈ M↓, E ∈ B^(M) : M/E |= ϕ ⇒ M |= ϕ .    (9)

Lemma 3.9. For any M-admissible ϕ, we have ∀M ∈ M↓, M′ ∈ ClSub(M) : (Shrink M) M′ |= ϕ ⇒ M′ |= ϕ.

Proof. Let any M ∈ M↓ and M′ ∈ ClSub_E(M) (§ 3.1.3) where E = Red(M) and M′/E |= ϕ (6). First we deduce E ∈ B^(M′) by proving ∀x, y, x′ ∈ Q′ : xT′y ∧ xEx′ ⇒ (∃y′ ∈ Q′ : x′T′y′ ∧ yEy′). Let any x, y, x′ ∈ Q′ and t = (x, y) such that t ∈ T′ ∧ xEx′. For some y′ ∈ Q and t′ = (x′, y′), we have

  t ∈ T′ ∧ xEx′
  ⇒ t ∈ T ∧ xEx′ ∧ t′ ∈ T ∧ yEy′   ⦃ hyp. about x, y, x′, t; M′ ⊆ M and E ∈ B^(M) ⦄
  ⇒ t′ ∈ T′ ∧ yEy′                  ⦃ t′/E = t/E and hyp. about M′ ⦄

Now (9), Red(M) ∈ B^(M′), and (6) entail (Shrink M) M′ |= ϕ ⇒ M′ |= ϕ.
For M′ = (Grow M) N, the thesis above becomes ∀M ∈ M↓, N ∈ SubRed(M) : N |= ϕ ⇒ (Grow M) N |= ϕ. This yields a basic property by (8) and Lemma 3.5.

Proposition 3.10 (Expanding Abstract Refinements). Given a two-level model class M, an abstract model class N↑, and an M-admissible formula ϕ,

  ∀M ∈ M↓, N, N′ ∈ SubRed(M) : N′ ⊑ϕ N ⇒ (Grow M) N′ ⊑ϕ (Grow M) N .

The generalization to abstract model classes other than N↑ should be studied. This issue is related to the use of diverse abstractions in abstract verification [3].

3.3 A Generic Algorithm for Model Refinement by Abstraction
As a result we can present the specification and construction of Algorithm Refine.

Preconditions. The context has to provide the following data:
(H1) A two-level model class M, an M-compatible bisimulation class B, and a map generator F_B.
(H2) A finite or well-structured infinite M↓-model M for which the finitary B-reducer is obtained by some known algorithm BisimAlgo_B.
(H3) A set Frml of M-admissible formulae.
(H4) An algorithm FiniteRefine : Frml → (M → M) with lowest known complexity, which given any ϕ ∈ Frml and any finite M ∈ M, produces a model M′ that satisfies M′ ⊑ϕ M ∧ ((M |= ϕ) ≡ M′ = M).

Postcondition. For parameters ϕ ∈ Frml and M ∈ M↓, the result M′ satisfies M′ ⊑ϕ M ∧ (((Shrink M) M |= ϕ) ≡ M′ = M).
Algorithm Refine. The constructive function Refine : Frml → (M↓ → M↓) is defined as follows, given (6), (7), Lemma 3.6, and the Preconditions:

  ∀ϕ ∈ Frml, M ∈ M↓ : (Refine ϕ) M ≐ (Grow M ◦ FiniteRefine ϕ ◦ Shrink M) M .

Correctness follows from Prop. 3.8 and Prop. 3.10, where N′ = (FiniteRefine ϕ) N and N = (Shrink M) M. For finite models, the complexity of Refine is polynomial if that of FiniteRefine is polynomial. For well-structured infinite models, symbolic forms of the functions Shrink and Grow can be used; see § 2.4.5 and Lemma 3.6.

Maximal Refinements. The image-maximal inverse Grow M yields the largest possible model given Prop. 3.8. Hence Refine generates a maximal correctly refined model if FiniteRefine does. However various finite-state refinement algorithms do not yield maximal models, e.g., they may exclude legitimate nondeterminism. The latter issue is outside the scope of the present work. If Shrink and Grow are defined as bidirectional functions then Grow M is a right-inverse of Shrink M [6]. This does entail the Postcondition but not Prop. 3.8.

Abstract Verification. Let Check : Frml → (M↓ → Bool) where (Check ϕ) M ≐ (FiniteCheck ϕ ◦ Shrink M) M, assuming (FiniteCheck ϕ) M′ ≡ (M′ |= ϕ). Then (Check ϕ) M ≡ (M |= ϕ). So Check allows verifying models; see e.g. [1]. The preconditions of Check are those of Refine except for simple changes in (H4). The function FiniteCheck ϕ may have a lower complexity than FiniteRefine ϕ.
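Stripped of the model-theoretic machinery, Refine is just a three-stage pipeline. The Haskell sketch below records its shape with the three phases as parameters; the names are placeholders for the components required by (H1)–(H4), not an actual API.

-- (Refine phi) M = (Grow M . FiniteRefine phi . Shrink M) M,
-- abstracted over the three phases.
refine :: (model -> quotient)     -- Shrink M: reduce to the quotient
       -> (quotient -> quotient)  -- FiniteRefine phi: finite-state step
       -> (quotient -> model)     -- Grow M: maximal expansion
       -> model -> model
refine shrink finiteRefine grow = grow . finiteRefine . shrink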
4 A Summary of Typical Applications
Due to limited space we merely sketch a few applications; proofs are omitted.

Determining Minimum-Cost Paths. This example was considered in § 1. Given a model M = (G, A), an M-path is a path of G. For any Z ⊆ Q, the set of M-paths from x to some z ∈ Z is denoted by Paths(x, G, Z). Consider two models M and M′, a bisimulation class B and a B-bisimulation relation R for (M, M′). Let x0, x0′ be any states of M, M′ such that x0Rx0′. Then clearly for every M-path x0 . . . xi . . . xn there is an R-bisimilar M′-path x0′ . . . xi′ . . . xn′, i.e., xiRxi′ for i = 0 . . . n, and for any M′-path from x0′ there is a corresponding R-bisimilar M-path starting in x0. See also Lemma 7.5 in [1].

Let Mmcp be the set of models (G, (Z, g, V)) where G = (Q, T) may be infinite, Z : Q → Bool represents the subset Q − Dom(T) (for brevity we therefore write Z = Q − Dom(T)), g : T → R+ is a total edge-cost function such that Im(g) is finite, and V : Q → R+ is the value function for M such that V is total and ∀x ∈ Q : V(x) ≐ min{cost(p) | p ∈ Paths(x, G, Z)}, where cost(p) is the cumulative cost of p. Obviously, V(x) = 0 iff Z x = true. In this application refinement is thus expressed by the removal of edges but not of nodes. Indeed Z is reachable from every x ∈ Q since V(Q) ⊆ R+. If needed, Q is first replaced by the set of Q-nodes from which Z is reachable.
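On a finite instance the value function V can be computed naively by bounded unfolding, since a minimum-cost path uses at most |Q| − 1 edges when all costs are positive. The following Haskell sketch is such a brute-force rendering (Nothing plays the role of ∞, i.e., Z not reachable within the bound); it is illustrative only and far from the efficient shortest-path algorithms one would actually use.

import Data.Maybe (mapMaybe)

-- value qs g z x approximates V(x) by unfolding paths of length
-- at most |Q| towards the target set z.
value :: Eq a => [a] -> [((a, a), Double)] -> [a] -> a -> Maybe Double
value qs g z = go (length qs)
  where
    go n x
      | x `elem` z = Just 0
      | n == 0     = Nothing
      | otherwise  =
          case mapMaybe step g of
            [] -> Nothing
            cs -> Just (minimum cs)
      where step ((u, y), c) | u == x    = fmap (c +) (go (n - 1) y)
                             | otherwise = Nothing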
The Bmcp-bisimulation equivalences are generated by the map

  F_g ≐ λM : Mmcp . λE : Q² . W_g(E) ∩ F_basic(E),
  W_g(E) ≐ {(x, x′) | ∀y, y′ : xTy ∧ x′Ty′ ∧ yEy′ ⇒ g(x, y) = g(x′, y′)}.

The bisimulation algorithms in § 2.4.4–5 can be used. The Mmcp-compatibility of Bmcp is proved using Lemma 2.2 and the above construction of bisimilar paths. Model quotients are independent of V since F_g is independent of V.

A model M is optimal iff ∀x ∈ Q : ϕmcp x, where ϕmcp is defined by ϕmcp x ≡̇ ∀p ∈ Paths(x, G, Z) : cost p = V(x). Regarding submodels, M′ ⊆ M if Q′ = Q, T′ ⊆ T, Z′ = Z. So g′ = g↓T′ and V′ = V↓Q′ = V given Q′ = Q. By definition of Mmcp-models, V′ is the value function for M′. Hence the optimality of M′ entails its optimality w.r.t. M since ∀x ∈ Q : V′(x) = V(x). The Mmcp-admissibility of ϕmcp has been proved. So Refine ϕmcp is applicable. As is well known, the complexity of FiniteRefine ϕmcp is polynomial.

Illustration. We illustrate the above mcp-application. Here [a, b] stands for the closed interval {x | x ∈ R ∧ a ≤ x ≤ b}. Thus the set [a, b] is infinite when a < b. The (half-)open intervals ]a, b], [a, b[ and ]a, b[ are defined analogously. Consider M = ((Q, T), (Z, g, V)) where Q = [0, 4], the map V may remain implicit, T = {(x, y) | x ∈ [0, 1] ∧ y = x + 3 ∨ x ∈ [0, 3[ ∧ y = x + 1}, Z = [3, 4] and g = (T → {10, 5}) ∩ {((x, y), c) | y = x + 3 ∧ c = 10 ∨ y = x + 1 ∧ c = 5}.

First, M is reduced to N ≐ (Shrink M) M = M/Red(M) using F_g. Let N = ((QN, TN), (ZN, gN, VN)), X0 = {0}, X1 = ]0, 1[, X2 = {1}, X3 = ]1, 2[ and X4 = [2, 3[. We obtain QN = {X0, X1, X2, X3, X4, Z}, ZN = {Z} and

  TN = {(X0, X2), (X0, Z), (X1, X3), (X1, Z), (X2, X4), (X2, Z), (X3, X4), (X4, Z)},
  gN = (TN → {10, 5}) ∩ {((X, Y), c) | (X ≠ X4 ∧ Y = Z) ≡ (c = 10)},
  VN = {(X, c) | X = Z ∧ c = 0 ∨ X = X4 ∧ c = 5 ∨ X ∈ {X0, X1, X2, X3} ∧ c = 10}.

Second, FiniteRefine ϕmcp is applied to the finite quotient N. The result is N′ = ((QN, TN′), (ZN, gN′, VN)) where TN′ = TN − {(X0, X2), (X1, X3)} and gN′ = gN↓TN′. By construction we have N′ ⊑ϕmcp N.

Third, the optimal N′ is expanded to (Grow M) N′ = N′\Red(M). The result is M′ = ((Q, T′), (Z, g′, V)) where T′ = {(x, y) | x ∈ [0, 1] ∧ y = x + 3 ∨ x ∈ [1, 3[ ∧ y = x + 1} and g′ = g↓T′. Indeed X0 ∪ X1 ∪ X2 = [0, 1] and X2 ∪ X3 ∪ X4 = [1, 3[. By construction, M′ ⊑ϕmcp M. Note T′(1) = {4, 2}. Thanks to the above expression for VN it is easy to derive an expression for V, namely V = {(x, c) | x ∈ [3, 4] ∧ c = 0 ∨ x ∈ [2, 3[ ∧ c = 5 ∨ x ∈ [0, 2[ ∧ c = 10}.

The bisimulation map F_g could be replaced by another one such that the equivalence of any nodes x and y entails V(x) = V(y) (see [7,15]). This could help in obtaining finitary equivalences. However the function V should then be computed symbolically before reducing any given infinite model.

Generalization to a Family of Optimal Control Problems. A (selective and complete) dioid is a complete idempotent semiring (S, ⊕, ⊗, 0, 1), where S is a set of measures of some sort, ⊗ accumulates measures, ⊕ selects optimal measures, 0 and 1 are the neutral elements of ⊕ and ⊗, ⊗ is distributive w.r.t. ⊕, and the natural order a ≤ b ≡̇ a ⊕ b = b is linear. Various optimization
problems are formalized using dioids D, e.g., the shortest-paths problem where D = (R+0 ∪ {∞}, min, +, ∞, 0), the maximum-capacity problem where D = (R ∪ {±∞}, max, min, −∞, ∞), and related problems for Markov chains. See [9].

The mcp-example has been generalized to the class Moptim of dioid-based models and the corresponding adaptation ϕoptim of ϕmcp. Hence Refine ϕoptim is applicable. Incidentally, Check ϕoptim too is applicable; see § 3.3. The complexity of FiniteRefine ϕoptim is polynomial in the case of Moptim-models based on dioids with 1 as greatest element w.r.t. ≤; see [9] again. This is equivalent to a ≤ b ⇒ a ⊗ c ≤ b for all a, b and c, so extending a path cannot improve its cost. For instance the dioids (R+0 ∪ {∞}, min, +, ∞, 0) and (R ∪ {±∞}, max, min, −∞, ∞) shown above have this property, contrary to the dioid (R+0 ∪ {∞}, max, +, 0, 0) used in the longest-paths problem. However, this property is not a necessary condition for polynomial complexity.

Application to a Family of Temporal Properties. We consider the temporal properties expressed in the logic CTL*. Let a finite set P of atomic propositions be given. The P-based class Mtemp is the set of models ((Q, T), (Qinit, L)) where ∀x ∈ Q : T(x) ≠ ∅, the map Qinit : Q → Bool characterizes a set of initial states, and the map L : Q → 2^P is total [1,2]. The Btemp-bisimulation equivalences are determined by the auxiliary equivalence Wtemp ≐ Q² ∩ {(x, x′) | L(x) = L(x′)}. We assume that Qinit in the given M defines an E-closed set for any equivalence E ∈ Btemp^(M). The bisimulation algorithms in § 2.4.3 and § 2.4.5 can be used and the Mtemp-compatibility of Btemp is easily checked.

The good news is that all CTL*-formulae are Mtemp-admissible (Thm. 14 in [2]). Hence for each CTL*-formula ϕ, Refine ϕ is applicable. The bad news is that the time complexity of FiniteRefine ϕ is at least exponential in the size of ϕ, like that of ϕ-satisfiability. This complexity remains exponential if ϕ belongs to the less general logics LTL or CTL. See e.g. [1].
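Returning to the dioid-based generalization above, the interface and the two instances named there have a direct Haskell rendering. The following sketch is illustrative only: the class and constructor names are ours, and ∞ is encoded with Nothing resp. IEEE infinities.

class Dioid s where
  oplus  :: s -> s -> s   -- selects the better measure
  otimes :: s -> s -> s   -- accumulates measures
  zero   :: s             -- neutral element of oplus
  one    :: s             -- neutral element of otimes

-- Shortest paths: (R+0 ∪ {∞}, min, +, ∞, 0), with ∞ as Nothing.
newtype Cost = Cost (Maybe Double)

instance Dioid Cost where
  oplus (Cost a) (Cost b)  = Cost (maybe b (\x -> Just (maybe x (min x) b)) a)
  otimes (Cost a) (Cost b) = Cost ((+) <$> a <*> b)
  zero = Cost Nothing
  one  = Cost (Just 0)

-- Maximum capacity: (R ∪ {±∞}, max, min, −∞, ∞).
newtype Capacity = Capacity Double

instance Dioid Capacity where
  oplus (Capacity a) (Capacity b)  = Capacity (max a b)
  otimes (Capacity a) (Capacity b) = Capacity (min a b)
  zero = Capacity (-1/0)
  one  = Capacity (1/0)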
5 Related Work and Conclusion
Related Work. In [7], a problem of optimal stochastic control is tackled with the help of a dedicated bisimulation. In [12], a design method for supervisory control is developed. It uses bisimulations, is applied to qualitative properties, and involves polynomials as symbolic representations of sets. To study the applicability of our approach to these problems would be worthwhile. Conclusion. The proposed method reduces the refinement of models to that of finite abstract ones. It involves a restricted expansion which maps refined abstract models back to maximal submodels. It can be used for quantitative or qualitative goals and for models which are very large but finite or infinite but well-structured. Its usefulness depends on various factors which need further examination: each considered design problem must be defined in terms of a model class M , an M-compatible bisimulation class, and an M-admissible goal formula; very large models must collapse to drastically smaller quotient models; we should know efficient algorithms for solving finite-state problem instances.
References

1. Baier, C., Katoen, J.-P.: Principles of Model Checking. MIT Press, Cambridge (2008)
2. Clarke, E., Grumberg, O., Peled, D.: Model Checking, 3rd edn. MIT Press, Cambridge (2001)
3. Cousot, P., Cousot, R.: Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: 4th Symp. Principles of Programming Languages, pp. 238–252. ACM, New York (1977)
4. Erné, M., Koslowski, J., Melton, A., Strecker, G.: A primer on Galois connections. In: Andima, S., et al. (eds.) Papers on General Topology and its Applications. 7th Summer Conf. Wisconsin. Annals of the New York Academy of Sciences, vol. 704, pp. 103–125 (1994)
5. Fernandez, J.-C.: An implementation of an efficient algorithm for bisimulation equivalence. Sci. Computer Programming 13(2-3), 219–236 (1989)
6. Foster, J.N., Greenwald, M.B., Moore, J.T., Pierce, B.C., Schmitt, A.: Combinators for bidirectional tree transformations: a linguistic approach to the view-update problem. ACM Trans. Programming Languages and Systems 29(3), Article 17, 17–65 (2007)
7. Givan, R., Dean, T., Greig, M.: Equivalence notions and model minimization in Markov decision processes. Artificial Intell. J. 147(1-2), 163–223 (2003)
8. Glück, R., Möller, B., Sintzoff, M.: A semiring approach to equivalences, bisimulations and control. In: Berghammer, R., Jaoua, A.M., Möller, B. (eds.) RelMiCS 2009. LNCS, vol. 5827, pp. 134–149. Springer, Heidelberg (2009)
9. Gondran, M., Minoux, M.: Graphs, Dioids and Semirings: New Models and Algorithms. Springer, Heidelberg (2008)
10. Henzinger, T.A., Majumdar, R., Raskin, J.-F.: A classification of symbolic transition systems. ACM Trans. Computational Logic 6, 1–32 (2005)
11. Kanellakis, P., Smolka, S.: CCS expressions, finite state processes, and three problems of equivalence. Information and Computation 86, 43–68 (1990)
12. Marchand, H., Pinchinat, S.: Supervisory control problem using symbolic bisimulation techniques. In: Proc. Amer. Control Conf., vol. 6, pp. 4067–4071 (2000)
13. Milner, R.: A Calculus of Communicating Systems. Extended reprint of LNCS 92. University of Edinburgh, Laboratory for Foundations of Computer Science, Report ECS-LFCS-86-7 (1986)
14. Milner, R.: Operational and algebraic semantics of concurrent processes. In: van Leeuwen, J. (ed.) Formal Models and Semantics. Handbook of Theoretical Computer Sci., vol. B, pp. 1201–1242. Elsevier, Amsterdam (1990)
15. Sintzoff, M.: Synthesis of optimal control policies for some infinite-state transition systems. In: Audebaud, P., Paulin-Mohring, C. (eds.) MPC 2008. LNCS, vol. 5133, pp. 336–359. Springer, Heidelberg (2008)
Type Fusion

Ralf Hinze

Computing Laboratory, University of Oxford
Wolfson Building, Parks Road, Oxford, OX1 3QD, UK
[email protected]
http://www.comlab.ox.ac.uk/ralf.hinze/

Abstract. Fusion is an indispensable tool in the arsenal of techniques for program derivation. Less well known, but equally valuable, is type fusion, which states conditions for fusing an application of a functor with an initial algebra to form another initial algebra. We provide a novel proof of type fusion based on adjoint folds and discuss several applications: type firstification, type specialisation and tabulation.

Keywords: initial algebra, fold, fusion, adjunction, tabulation.
1 Introduction
Fusion is an indispensable tool in the arsenal of techniques for program derivation and optimisation. The simplest instance of fusion states conditions for fusing an application of a function with a fixed point to form another fixed point:

  ℓ (μf) = μg ⇐= ℓ · f = g · ℓ ,    (1)

where μ denotes the fixed point operator and f, g and ℓ are functions of suitable types. The usual mode of operation is from left to right: the two-stage process of forming the fixed point μf and then applying ℓ is optimised into the single operation of forming the fixed point μg. Applied from right to left, the law enables us to decompose a fixed point.

In this paper we discuss lifting fusion to the realm of types and type constructors. Type fusion takes the form

  L (μF) ≅ μG ⇐= L · F ≅ G · L ,

where F, G and L are functors between suitable categories and μF denotes the initial algebra of the endofunctor F. Similar to function fusion, type fusion allows us to fuse an application of a functor with an initial algebra to form another initial algebra. Type fusion has, however, one further prerequisite: L has to be a left adjoint. We show that this condition arises naturally as a consequence of defining the arrows witnessing the isomorphism.

Type fusion has been described before [1], but we believe that it deserves to be better known. We provide a novel proof of type fusion based on adjoint folds [2], which gives a simple formula for the aforementioned isomorphisms. We illustrate the versatility of type fusion through a variety of applications relevant to programming:
– type firstification: a fixed point of a higher-order functor is transformed to a fixed point of a first-order one;
– type specialisation: a nesting of types is fused into a single type that is more space-efficient;
– tabulation: functions from initial algebras can be memoised using final coalgebras. This example is intriguing as the left adjoint is contravariant and higher-order.

The rest of the paper is structured as follows. To keep the paper sufficiently self-contained, Section 2 reviews initial algebras and adjoint folds. (The material is partly taken from [2], which introduces adjoint folds.) The central theorem, type fusion, is given in Section 3. Sections 4, 5 and 6 discuss applications. Finally, Section 7 reviews related work.
2 Background

2.1 Initial Algebras and Final Coalgebras
We assume cartesian closed categories C, D and E that are ω-cocomplete and ω-complete. Furthermore, we confine ourselves to ω-cocontinuous and ω-continuous functors. Let F : C → C be an endofunctor. An F-algebra is a pair ⟨A, f⟩ consisting of an object A ∈ C and an arrow f ∈ C(F A, A). An F-homomorphism between algebras ⟨A, f⟩ and ⟨B, g⟩ is an arrow h ∈ C(A, B) such that h · f = g · F h. Identity is an F-homomorphism and F-homomorphisms compose. Consequently, the data defines a category. If C is ω-cocomplete and F is ω-cocontinuous, this category possesses an initial object, the so-called initial F-algebra ⟨μF, in⟩. The import of initiality is that there is a unique arrow from ⟨μF, in⟩ to any other F-algebra ⟨A, f⟩. This unique arrow is written ⦇f⦈ and is called a fold. Expressed in terms of the base category, it satisfies the following universal property:

  x = ⦇f⦈ ⇐⇒ x · in = f · F x    (2)
The universal property has several important consequences [3,4]. Setting x := id and f := in, we obtain the reflection law: ⦇in⦈ = id. Substituting the left-hand side into the right-hand side gives the computation law: ⦇f⦈ · in = f · F ⦇f⦈. Finally and most importantly, it implies the fusion law for fusing an arrow with a fold to form another fold:

  h · ⦇f⦈ = ⦇g⦈ ⇐= h · f = g · F h    (3)
Initial algebras provide semantics for inductive or recursive datatypes as found, for instance, in Haskell [5]. The following example illustrates the approach. In fact, Haskell is expressive enough to replay the development within the language.
Example 1. Consider the datatype Stack that models stacks of naturals.

  data Stack = Empty | Push (Nat × Stack)

The function total, which computes the sum of a stack of natural numbers, is a typical example of a fold.

  total : Stack → Nat
  total Empty = 0
  total (Push (n, s)) = n + total s

Since Haskell features higher-kinded type constructors, initial algebras can be captured by a recursive datatype declaration.

  newtype μf = In {in° : f (μf)}

The definition uses Haskell's record syntax to introduce the destructor in° in addition to the constructor In. Using this definition, the type of stacks can be factored into a non-recursive base functor that describes the structure of the data and an application of μ that ties the recursive knot:

  data Stack stack = Empty | Push (Nat × stack)
  instance Functor Stack where
    fmap f Empty = Empty
    fmap f (Push (n, s)) = Push (n, f s)
  type Stack = μStack .

Folds can be defined generically, that is, for arbitrary base functors, by taking the computation law as the defining equation.

  ⦇–⦈ : (Functor f) ⇒ (f b → b) → (μf → b)
  ⦇f⦈ = f · fmap ⦇f⦈ · in°

Similar to the development on the type level, the function total can now be factored into a non-recursive algebra and an application of fold:

  total : Stack Nat → Nat
  total (Empty) = 0
  total (Push (n, s)) = n + s
  total = ⦇total⦈ .

For emphasis, base functors, algebras and, later, base functions are typeset in this font.
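The notation above (μf and the fold brackets ⦇–⦈) is not legal Haskell. The following transcription is a minimal runnable sketch of Example 1 under assumed ASCII names: Mu, unIn and fold in place of μ, in° and ⦇–⦈, with Int standing in for Nat.

  -- Fixed point of a base functor, with In/unIn as constructor/destructor
  newtype Mu f = In { unIn :: f (Mu f) }

  -- Generic fold, defined by the computation law
  fold :: Functor f => (f b -> b) -> Mu f -> b
  fold alg = alg . fmap (fold alg) . unIn

  -- Base functor of stacks, and the tied recursive knot
  data StackF stack = Empty | Push (Int, stack)

  instance Functor StackF where
    fmap _ Empty         = Empty
    fmap f (Push (n, s)) = Push (n, f s)

  type Stack = Mu StackF

  -- 'total' as a fold over a non-recursive algebra
  totalAlg :: StackF Int -> Int
  totalAlg Empty         = 0
  totalAlg (Push (n, s)) = n + s

  total :: Stack -> Int
  total = fold totalAlg

For instance, total (In (Push (1, In (Push (2, In Empty))))) evaluates to 3.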
To understand concepts in category theory it is helpful to look at a simple class of categories: preorders. Every preorder gives rise to a category whose objects are the elements of the preorder and whose arrows are given by the ordering relation. These categories are special as there is at most one arrow between
two objects: a → b is inhabited if and only if a ≤ b. A functor between two preorders is a monotone function, which is a mapping on objects that respects the underlying ordering: a ≤ b =⇒ f a ≤ f b. A natural transformation between two monotone functions corresponds to a point-wise ordering: f ≤̇ g ⇐⇒ ∀x . f x ≤ g x. Throughout the section we shall specialise the development to preorders. For type fusion, in Section 3, we turn things upside down: we first develop the theory in the simple setting and then generalise to arbitrary categories.

Let P be a preorder and let f : P → P be a monotone function. An f-algebra is an element a with f a ≤ a, a so-called prefix point. An initial f-algebra is the least prefix point of f. Since there is at most one arrow between two objects in a preorder, the theory simplifies considerably: all that matters is the type of an arrow. The type of in corresponds to the fixed-point inclusion law:

  f (μf) ≤ μf ,    (4)
which expresses that μf is indeed a prefix point. The type of the fold operator, ⦇f⦈ ∈ C(μF, B) ⇐= f ∈ C(F B, B), translates to the fixed-point induction law:

  μf ≤ b ⇐= f b ≤ b .    (5)
It captures the property that μf is smaller than or equal to every other prefix point. To illustrate the laws, let us prove that μf is a fixed point of f. (Generalised to categories, this fact is known as Lambek's Lemma [6].) The inclusion law states f (μf) ≤ μf; for the other direction we reason

    μf ≤ f (μf)
  ⇐=   { induction (5) }
    f (f (μf)) ≤ f (μf)
  ⇐=   { f monotone }
    f (μf) ≤ μf
  ⇐⇒   { inclusion (4) }

The proof involves the type of fold, the fact that f is a functor and the type of in. In other words, it can be seen as a typing derivation of ⦇F in⦈, the inverse of in. This is made explicit in the parallel derivation

    ⦇F in⦈ ∈ C(μF, F (μF))
  ⇐=   { type of fold }
    F in ∈ C(F (F (μF)), F (μF))
  ⇐=   { F functor }
    in ∈ C(F (μF), μF)
  ⇐⇒   { type of in }

The order-theoretic proof is given as a top-down backward implication, with the initial goal at the top. While this style is natural for order-theoretic arguments, it is less so for typing derivations, as the witness, here F in, appears out of thin air. To follow the term construction, it is advisable to read typing derivations from bottom to top.

Summing up, to interpret a category-theoretic result in the setting of preorders, we only consider the types of the arrows. Conversely, an order-theoretic proof can be seen as a typing derivation — we only have to provide witnesses for the types. Category theory has been characterised as coherently constructive lattice theory [7], and to generalise an order-theoretic result we additionally have to establish the required coherence conditions. Continuing the example above, to prove F (μF) ≅ μF we must show that in · ⦇F in⦈ = id,
    in · ⦇F in⦈ = id
  ⇐⇒   { reflection }
    in · ⦇F in⦈ = ⦇in⦈
  ⇐=   { fusion (3) }
    in · F in = in · F in ,

and ⦇F in⦈ · in = id,

    ⦇F in⦈ · in
  =   { computation }
    F in · F ⦇F in⦈
  =   { F functor }
    F (in · ⦇F in⦈)
  =   { see above }
    F id
  =   { F functor }
    id .

Finally, let us remark that the development nicely dualises to F-coalgebras and unfolds, which give a semantics to coinductive types. The final F-coalgebra is denoted ⟨νF, out⟩.

2.2 Adjoint Folds and Unfolds
Folds and unfolds are at the heart of the algebra of programming. However, most programs require some tweaking to be given the form of a fold or an unfold and thus to become amenable to formal manipulation.

Example 2. Consider the function shunt, which pushes the elements of the first stack onto the second stack.

  shunt : μStack × Stack → Stack
  shunt (In Empty, y) = y
  shunt (In (Push (a, x)), y) = shunt (x, In (Push (a, y)))

The function is not a fold, simply because it does not have the right type.
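For readers following along in GHC, here is an ASCII transcription of shunt, reusing Mu and StackF from the sketch after Example 1; the uncurried type mirrors the paper's μStack × Stack → Stack.

  -- Not a fold: the recursion carries the second stack along as an accumulator
  shunt :: (Mu StackF, Mu StackF) -> Mu StackF
  shunt (In Empty,         y) = y
  shunt (In (Push (a, x)), y) = shunt (x, In (Push (a, y)))

Its curried form, of type Mu StackF -> Mu StackF -> Mu StackF, is exactly the transpose produced by the currying adjunction discussed in Example 3 below.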
Practical considerations dictate the introduction of a more general (co-)recursion scheme, christened adjoint folds and unfolds [2] for reasons to become clear in a moment. The central idea is to allow the initial algebra or the final coalgebra to be embedded in a context, where the context is modelled by a functor (L and R below). The functor is subject to a certain condition, which we discuss shortly. The adjoint fold ⦇Ψ⦈L ∈ C(L (μF), B) is then the unique solution of the equation

  x · L in = Ψ x ,    (6)
where the base function Ψ has type Ψ : ∀X ∈ D . C(L X, B) → C(L (F X), B). It is important that Ψ is natural in X. Informally speaking, naturality ensures termination of ⦇Ψ⦈L: the first argument of Ψ, the recursive call of x, can only be applied to proper sub-terms of x's argument — each embedded in the context L. Dually, the adjoint unfold [(Ψ)]R ∈ C(A, R (νF)) is the unique solution of the equation

  R out · x = Ψ x ,    (7)
where the base function Ψ has type Ψ : ∀X ∈ C . D(A, R X) → D(A, R (F X)). Again, the base function has to be natural in X, which now ensures productivity.

We have already indicated that the functors L and R cannot be arbitrary. For instance, for L := K A, where K : C → C^D is the constant functor and Ψ := id : C(A, B) → C(A, B), Equation (6) simplifies to the trivial x = x. One approach for ensuring uniqueness is to express x in terms of a standard fold. This is where adjunctions enter the scene: we require L and R to be adjoint, L ⊣ R, with L : D → C and R : C → D. The functors L and R are adjoint if and only if there is a bijection

  φ : ∀A B . C(L A, B) ≅ D(A, R B)    (8)
that is natural both in A and B. The isomorphism φ is called the adjoint transposition. It allows us to trade L in the source for R in the target, which enables us to reduce an adjoint fold to a standard fold; for the proof see [2]. The standard fold φ ⦇Ψ⦈L ∈ C(μF, R B) is called the transpose of ⦇Ψ⦈L.

Example 3. In the case of shunt, the adjoint functor is pairing, defined L X = X × Stack and L f = f × id_Stack. Its right adjoint is exponentiation, defined R Y = Y^Stack and R f = f^id_Stack. This adjunction captures currying: a function of two arguments can be treated as a function of the first argument whose values are functions of the second argument. To see that shunt is an adjoint fold we factor the definition into a non-recursive base function shunt that abstracts away from the recursive call and an adjoint equation that ties the recursive knot.

  shunt : ∀x . (L x → Stack) → (L (Stack x) → Stack)
  shunt shunt (Empty, y) = y
  shunt shunt (Push (a, x), y) = shunt (x, In (Push (a, y)))

  shunt : L (μStack) → Stack
  shunt (In x, y) = shunt shunt (x, y)

The last equation is the point-wise variant of shunt · L in = shunt shunt. The transposed fold is simply the curried variant of shunt.
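The adjoint equation can itself be replayed in Haskell. Below is a minimal sketch (the names afold and shuntF are ours, and RankNTypes is needed for the quantifier that encodes naturality) of the unique solution of x · L in = Ψ x for the pairing functor L X = (X, P):

  {-# LANGUAGE RankNTypes #-}

  -- Adjoint fold for L x = (x, p); parametricity plays the role of naturality
  afold :: (forall x. ((x, p) -> b) -> ((f x, p) -> b))
        -> ((Mu f, p) -> b)
  afold psi (In t, q) = psi (afold psi) (t, q)

  -- shunt's base function, abstracting away from the recursive call
  shuntF :: ((x, Mu StackF) -> Mu StackF)
         -> ((StackF x, Mu StackF) -> Mu StackF)
  shuntF _   (Empty,       y) = y
  shuntF rec (Push (a, x), y) = rec (x, In (Push (a, y)))

  shunt' :: (Mu StackF, Mu StackF) -> Mu StackF
  shunt' = afold shuntF

The defining equation afold psi (In t, q) = psi (afold psi) (t, q) is the point-wise form of x · L in = Ψ x, and shunt' behaves like the shunt of Example 2.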
Let us specialise the result to preorders. An adjunction is a pair of monotone functions ℓ : Q → P and r : P → Q such that

  ℓ a ≤ b ⇐⇒ a ≤ r b .    (9)

The type of the adjoint fold, ⦇Ψ⦈L ∈ C(L (μF), B) ⇐= Ψ ∈ ∀X ∈ D . C(L X, B) → C(L (F X), B), translates to the adjoint induction law:

  ℓ (μf) ≤ b ⇐= (∀x ∈ Q . ℓ x ≤ b =⇒ ℓ (f x) ≤ b)    (10)
As usual, the development dualises to final coalgebras. We leave the details to the reader.
3 Type Fusion
Turning to the heart of the matter, the aim of this section is to lift the fusion law (1) to the realm of objects and functors:

  L (μF) ≅ μG ⇐= L · F ≅ G · L .

To this end we have to construct two arrows τ : L (μF) → μG and τ° : μG → L (μF) that are inverses. The type of τ° suggests that the arrow is an ordinary fold. In contrast, τ looks suspiciously like an adjoint fold. Thus, we shall require that L has a right adjoint. The following summarises the type information: G : C → C and F : D → D are endofunctors, and L : D → C is left adjoint to R : C → D.
As a preparatory step, we establish type fusion in the setting of preorders. The proof of the equivalence ℓ (μf) ≅ μg consists of two parts: we show first that μg ≤ ℓ (μf) ⇐= g · ℓ ≤̇ ℓ · f, and second that ℓ (μf) ≤ μg ⇐= ℓ · f ≤̇ g · ℓ.

    μg ≤ ℓ (μf)
  ⇐=   { induction (5) }
    g (ℓ (μf)) ≤ ℓ (μf)
  ⇐=   { transitivity }
    g (ℓ (μf)) ≤ ℓ (f (μf))  and  ℓ (f (μf)) ≤ ℓ (μf)
  ⇐=   { assumption g · ℓ ≤̇ ℓ · f }
    ℓ (f (μf)) ≤ ℓ (μf)
  ⇐=   { ℓ monotone }
    f (μf) ≤ μf
  ⇐⇒   { inclusion (4) }
For part two, we apply adjoint induction (10), which leaves us with the obligation ∀x ∈ Q . ℓ x ≤ μg =⇒ ℓ (f x) ≤ μg.

    ℓ (f x) ≤ μg
  ⇐=   { transitivity }
    ℓ (f x) ≤ g (ℓ x)  and  g (ℓ x) ≤ μg
  ⇐=   { assumption ℓ · f ≤̇ g · ℓ }
    g (ℓ x) ≤ μg
  ⇐=   { transitivity }
    g (ℓ x) ≤ g (μg)  and  g (μg) ≤ μg
  ⇐=   { g monotone and inclusion (4) }
    ℓ x ≤ μg
In the previous section we have seen that an order-theoretic proof can be interpreted constructively as a typing derivation. The first proof above defines the arrow τ°. (The natural isomorphism witnessing L · F ≅ G · L is called swap.)

    ⦇L in · swap°⦈ ∈ C(μG, L (μF))
  ⇐=   { type of fold }
    L in · swap° ∈ C(G (L (μF)), L (μF))
  ⇐=   { composition }
    swap° ∈ C(G (L (μF)), L (F (μF)))  and  L in ∈ C(L (F (μF)), L (μF))
  ⇐=   { assumption swap° : G · L →̇ L · F }
    L in ∈ C(L (F (μF)), L (μF))
  ⇐=   { L functor }
    in ∈ D(F (μF), μF)
  ⇐⇒   { type of in }
Conversely, the arrow τ is the adjoint fold ⦇Ψ⦈L whose base function Ψ is given by the second proof.

    in · G x · swap ∈ C(L (F X), μG)
  ⇐=   { composition }
    swap ∈ C(L (F X), G (L X))  and  in · G x ∈ C(G (L X), μG)
  ⇐=   { assumption swap : L · F →̇ G · L }
    in · G x ∈ C(G (L X), μG)
  ⇐=   { composition }
    G x ∈ C(G (L X), G (μG))  and  in ∈ C(G (μG), μG)
  ⇐=   { G functor and type of in }
    x ∈ C(L X, μG)
We may conclude that τ = ⦇λ x . in · G x · swap⦈L and τ° = ⦇L in · swap°⦈ are the desired arrows. The diagram below visualises the type information (the inverse arrows swap°, G τ° and τ° run in the opposite directions).

  L (F (μF)) --swap--> G (L (μF)) --G τ--> G (μG)
      |                                       |
    L in                                     in
      v                                       v
   L (μF) ---------------τ---------------->  μG
All that remains is to establish that they are inverses.

Theorem 1 (Type fusion). Let C and D be categories, let L ⊣ R be an adjoint pair of functors L : D → C and R : C → D, and let F : D → D and G : C → C be two endofunctors. Then

  L (μF) ≅ μG ⇐= L · F ≅ G · L ;    (11)
  νF ≅ R (νG) ⇐= F · R ≅ R · G .    (12)
Proof. We show type fusion for initial algebras (11); the corresponding statement for final coalgebras (12) follows by duality. The isomorphisms τ and τ° are given as solutions of adjoint fixed point equations:

  τ · L in = in · G τ · swap    and    τ° · in = L in · swap° · G τ° .
Proof of τ · τ° = id_μG:

    (τ · τ°) · in
  =   { definition of τ° and τ }
    in · G τ · swap · swap° · G τ°
  =   { inverses }
    in · G τ · G τ°
  =   { G functor }
    in · G (τ · τ°) .

The equation x · in = in · G x has a unique solution — the base function Ψ x = in · G x is a natural transformation of type ∀X ∈ C . C(X, μG) → C(G X, μG). Since id is also a solution, the result follows.

Proof of τ° · τ = id_{L (μF)}:

    (τ° · τ) · L in
  =   { definition of τ and τ° }
    L in · swap° · G τ° · G τ · swap
  =   { G functor }
    L in · swap° · G (τ° · τ) · swap .
Again, x · L in = L in · swap° · G x · swap enjoys a unique solution — the base function Ψ x = L in · swap° · G x · swap has type ∀X ∈ D . C(L X, L (μF)) → C(L (F X), L (μF)). And again, id is also a solution, which implies the result.
Note that in order to apply type fusion for initial algebras (11) it is sufficient to know that the functor L is part of an adjoint situation — there is no need to make R explicit. Likewise, for fusing final coalgebras (12) it is sufficient to know that R has a left adjoint.
4 Application: Firstification
Abstraction by parametrisation is a central concept in programming. A program can be made more general by abstracting away from a constant. Likewise, a type can be generalised by abstracting away from a type constant.

Example 4. Recall the type of stacks of natural numbers.

  data Stack = Empty | Push (Nat × Stack)

The type of stack elements, Nat, is somewhat arbitrary. Abstracting away from it yields the type of parametric lists

  data List a = Nil | Cons (a × List a) .

To avoid name clashes, we have renamed data and type constructors.
The inverse of abstraction is application or instantiation. We regain the original concept by instantiating the parameter to the constant. Continuing Example 4, we expect that

  List Nat ≅ Stack .    (13)
The isomorphism is non-trivial, as both types are recursively defined. The transformation of List Nat into Stack can be seen as an instance of firstification [8] or λ-dropping [9] on the type level: a fixed point of a higher-order functor is reduced to a fixed point of a first-order functor.

Perhaps surprisingly, we can fit the isomorphism into the framework of type fusion. To this end, we have to view type application as a functor: given an object T ∈ D, define AppT : C^D → C by AppT F = F T and AppT α = α T. Using type application we can rephrase Equation (13) as AppNat (μList) ≅ μStack, where List is the higher-order base functor of List defined

  data List list a = Nil | Cons (a, list a) .

In order to apply type fusion, we have to check that AppT is part of an adjunction. It turns out that it has both a left and a right adjoint [2]. Consequently, we can firstify both inductive and coinductive parametric types. Generalising the original problem, Equation (13), the second-order type μF and the first-order type μG are related by AppT (μF) ≅ μG if AppT · F ≅ G · AppT. Despite the somewhat complicated type, the natural isomorphism swap is usually straightforward to define: it simply renames the constructors, as illustrated below.
Example 5. Let us show that List Nat ≅ Stack. We have to discharge the obligation AppNat · List ≅ Stack · AppNat.

    AppNat · List
  ≅   { composition of functors and definition of App }
    Λ X . List X Nat
  ≅   { definition of List }
    Λ X . 1 + Nat × X Nat
  ≅   { definition of Stack }
    Λ X . Stack (X Nat)
  ≅   { composition of functors and definition of App }
    Stack · AppNat
The proof above is entirely straightforward. The isomorphisms are

  swap : ∀x . List x Nat → Stack (x Nat)
  swap Nil = Empty
  swap (Cons (n, x)) = Push (n, x)

  swap° : ∀x . Stack (x Nat) → List x Nat
  swap° Empty = Nil
  swap° (Push (n, x)) = Cons (n, x) .

The transformations rename Nil to Empty and Cons to Push, and vice versa. Finally, τ and τ° implement Λ-lifting and Λ-dropping.

  Λ-lift : μStack → μList Nat
  Λ-lift (In x) = In (swap° (fmap Λ-lift x))

  Λ-drop : μList Nat → μStack
  Λ-drop (In x) = In (fmap Λ-drop (swap x))

Since type application is invisible in Haskell, the adjoint fold Λ-drop deceptively resembles a standard fold.
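A runnable rendering of Λ-lifting and Λ-dropping, under assumed ASCII names (Mu1 for the higher-order fixed point, ListF for the base functor List, Int for Nat) and with swap and swap° inlined:

  -- Higher-order fixed point: Mu1 f is a parametric type
  newtype Mu1 f a = In1 { unIn1 :: f (Mu1 f) a }

  -- Higher-order base functor of lists
  data ListF list a = Nil | Cons (a, list a)

  -- Λ-drop: firstify parametric lists instantiated at Int
  lambdaDrop :: Mu1 ListF Int -> Mu StackF
  lambdaDrop (In1 Nil)           = In Empty
  lambdaDrop (In1 (Cons (n, x))) = In (Push (n, lambdaDrop x))

  -- Λ-lift: the inverse direction
  lambdaLift :: Mu StackF -> Mu1 ListF Int
  lambdaLift (In Empty)         = In1 Nil
  lambdaLift (In (Push (n, x))) = In1 (Cons (n, lambdaLift x))

By construction, the two functions are mutually inverse; they merely rename constructors at every level of the recursion.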
Transforming a higher-order fixed point into a first-order fixed point works for so-called regular datatypes. The type of lists is regular; the type of perfect trees [10], defined

  data Perfect a = Zero a | Succ (Perfect (a × a)) ,

is not, because the recursive call of Perfect involves a change of argument. Firstification is not applicable, as there is no first-order base functor Base such that AppT · Perfect = Base · AppT. The class of regular datatypes is usually defined syntactically. Drawing from the development above, we can provide an alternative semantic characterisation.

Definition 1. Let F : C^D → C^D be a higher-order functor. The parametric datatype μF : D → C is regular if and only if there exists a functor G : C → C such that AppT · F ≅ G · AppT for all objects T : D.
The regularity condition can be simplified to F H T ≅ G (H T), which makes explicit that all occurrences of 'the recursive call' H are applied to the same argument.
5 Application: Type Specialisation
Firstification can be seen as an instance of type specialisation: a nesting of types is fused to a single type that allows for a more compact and space-efficient representation. Let us illustrate the idea by means of an example.

Example 6. Lists of optional values, List · Maybe, where Maybe is given by

  data Maybe a = Nothing | Just a ,

can be represented more compactly using

  data Seq a = Done | Skip (Seq a) | Yield (a × Seq a) .

Assuming that the constructor application C (v1, . . . , vn) requires n + 1 cells of storage, the compact representation saves 2n cells for a list of length n.
The goal of this section is to prove that

  List · Maybe ≅ Seq ,    (14)
or, more generally, μF · J ≅ μG for suitably related base functors F and G. The application of Section 4 is an instance of this problem, as the relation H A ≅ B between objects can be lifted to a relation H · K A ≅ K B between functors. To fit Equation (14) under the umbrella of type fusion, we have to view precomposition as a functor. Given a functor J : C → D, define the higher-order functor PreJ : E^D → E^C by PreJ F = F · J and PreJ α = α · J. Using the functor we can rephrase Equation (14) as PreMaybe (μList) ≅ μSeq. Of course, we first have to verify that PreJ has an adjoint. It turns out that this is a well-studied problem in category theory [11, X.3]. Similar to the situation of the previous section, PreJ has both a left and a right adjoint, the so-called left and right Kan extensions. Instantiating Theorem 1, the parametric types μF and μG are related by PreJ (μF) ≅ μG if PreJ · F ≅ G · PreJ. The natural isomorphism swap realises the space-saving transformation, as illustrated below.

Example 7. Continuing Example 6, we demonstrate that PreMaybe · List ≅ Seq · PreMaybe.
    List X · Maybe
  ≅   { definition of List and Maybe }
    Λ A . 1 + (1 + A) × X (Maybe A)
  ≅   { × distributes over + and 1 × B ≅ B }
    Λ A . 1 + X (Maybe A) + A × X (Maybe A)
  ≅   { definition of Seq }
    Seq (X · Maybe)
The central step is the application of distributivity: the law (A + B) × C ≅ A × C + B × C turns the nested type on the left into a 'flat' sum, which can be represented space-efficiently in Haskell — swap's definition makes this explicit.

  swap : ∀x . ∀a . List x (Maybe a) → Seq (x · Maybe) a
  swap (Nil) = Done
  swap (Cons (Nothing, x)) = Skip x
  swap (Cons (Just a, x)) = Yield (a, x)
The function swap is a natural transformation, whose components are again natural transformations, hence the nesting of universal quantifiers.
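The space saving can be tried out directly. Here is a sketch using Haskell's built-in lists and Maybe as stand-ins for μList and Maybe, with the names specialise and generalise (ours) for the two directions of the isomorphism:

  data Seq a = Done | Skip (Seq a) | Yield (a, Seq a)

  -- Fuse List · Maybe into Seq, clause by clause as in swap
  specialise :: [Maybe a] -> Seq a
  specialise []             = Done
  specialise (Nothing : xs) = Skip  (specialise xs)
  specialise (Just a  : xs) = Yield (a, specialise xs)

  -- The inverse direction
  generalise :: Seq a -> [Maybe a]
  generalise Done           = []
  generalise (Skip s)       = Nothing : generalise s
  generalise (Yield (a, s)) = Just a  : generalise s

For instance, specialise [Just 1, Nothing, Just 2] yields Yield (1, Skip (Yield (2, Done))), with no cells spent on the Maybe constructors.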
6 Application: Tabulation
In this section we look at an intriguing application of type fusion: tabulation. Tabulation means to put something into tabular form. Here, the "something" is a function. For example, it is well known that functions from the naturals can be memoised or tabulated using streams: X^Nat ≅ Stream X, where Nat = μNat and Stream = νStream with

  data Nat nat = Zero | Succ nat
  data Stream stream a = Next (a, stream a)
  newtype νf a = Out° {out : f (νf) a} .

The last definition introduces second-order final coalgebras, which model parametric coinductive datatypes. The isomorphism X^Nat ≅ Stream X holds for every return type X, so it can be generalised to an isomorphism between functors:

  (−)^Nat ≅ Stream .    (15)
Tabulations abound. We routinely use tabulation to represent or to visualise functions from small finite domains. Probably every textbook on computer architecture includes truth tables for the logical connectives, for instance (∧) : Bool^(Bool×Bool):

  ∧     | False  True
  ------+-------------
  False | False  False
  True  | False  True

A function from a pair of Booleans can be represented by a two-by-two table. Again, the construction is parametric:

  (−)^(Bool×Bool) ≅ (Id ×̇ Id) ×̇ (Id ×̇ Id) ,

where Id is the identity functor and ×̇ is the lifted product defined (F ×̇ G) X = F X × G X. For finite argument types such as Bool × Bool, where Bool = 1 + 1, tabulation rests on the well-known laws of exponentials:

  X^0 ≅ 1 ,    X^1 ≅ X ,    X^(A+B) ≅ X^A × X^B ,    X^(A×B) ≅ (X^B)^A .
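As a concrete reading of the Bool × Bool case, here is a small sketch (the names Table, tab2 and look2 are ours) that realises (−)^(Bool×Bool) ≅ (Id ×̇ Id) ×̇ (Id ×̇ Id) as a four-entry record:

  -- The memo-table functor for Bool × Bool: one entry per argument pair
  data Table v = Table v v v v

  -- Tabulation: evaluate the function at all four arguments
  tab2 :: ((Bool, Bool) -> v) -> Table v
  tab2 f = Table (f (False, False)) (f (False, True))
                 (f (True,  False)) (f (True,  True))

  -- Look-up: select the entry corresponding to the argument pair
  look2 :: Table v -> (Bool, Bool) -> v
  look2 (Table ff ft tf tt) (x, y) = case (x, y) of
    (False, False) -> ff
    (False, True ) -> ft
    (True,  False) -> tf
    (True,  True ) -> tt

For example, tab2 (uncurry (&&)) is exactly the truth table for ∧ shown above, and look2 and tab2 are mutually inverse.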
Things become interesting when the types involved are recursive, as in the introductory example, and this is where type fusion enters the scene. To be able to apply the framework, we first have to identify the left adjoint functor. Quite intriguingly, the underlying functor is a curried version of exponentiation: Exp : C → (C^C)^op with Exp K = Λ V . V^K and Exp f = Λ V . V^f. Using the functor Exp, Equation (15) can be rephrased as Exp Nat ≅ Stream. This is the first example where the left adjoint is a contravariant functor, and this will have consequences when it comes to specialising swap and τ. Before we spell out the details, let us first determine the right adjoint of Exp, which exists if the underlying category has so-called ends.

    (C^C)^op (Exp A, B)
  ≅   { definition of (−)^op }
    C^C (B, Exp A)
  ≅   { natural transformation as an end }
    ∀X ∈ C . C(B X, Exp A X)
  ≅   { definition of Exp }
    ∀X ∈ C . C(B X, X^A)
  ≅   { − × Y ⊣ (−)^Y and Y × Z ≅ Z × Y }
    ∀X ∈ C . C(A, X^(B X))
  ≅   { the functor C(A, −) preserves ends }
    C(A, ∀X ∈ C . X^(B X))
  ≅   { define Sel B := ∀X ∈ C . X^(B X) }
    C(A, Sel B)

The universally quantified object introduced in the second step is an end, which corresponds to a polymorphic type in Haskell. We refer the interested reader to Mac Lane's textbook [11, IX.5] for further information. The derivation shows that the right adjoint of Exp is a higher-order functor that maps a functor B, a type of tables, to the type of selectors Sel B, polymorphic functions that select some entry from a given table.

Since Exp is a contravariant functor, swap and τ live in an opposite category. Moreover, μG in (C^C)^op is a final coalgebra in C^C. Formulated in terms of arrows in C^C, type fusion takes the following form:

  τ : νG ≅ Exp (μF) ⇐= swap : G · Exp ≅ Exp · F ,

and the isomorphisms τ and τ° are defined

  Exp in · τ = swap · G τ · out ;    (16)
  out · τ° = G τ° · swap° · Exp in .    (17)
Both arrows are natural in the return type of the exponential. The arrow τ : νG →̇ Exp (μF) is a curried look-up function that maps a memo table to an exponential,
which in turn maps an index, an element of μF, to the corresponding entry in the table. Its inverse, τ° : Exp (μF) →̇ νG, tabulates a given exponential. Tabulation is a standard unfold, whereas look-up is an adjoint fold, whose transposed fold maps an index to a selector function. Before we look at a Haskell example, let us specialise the defining equations of τ and τ° to the category Set, so that we can see the correspondence to the Haskell code more clearly.

  lookup (out° t) (in i) = swap (G lookup t) i    (18)
  tabulate f = out° (G tabulate (swap° (f · in)))    (19)
Example 8. Let us instantiate tabulation to natural numbers and streams. The natural isomorphism swap is defined

  swap : ∀x . ∀v . Stream (Exp x) v → (Nat x → v)
  swap (Next (v, t)) (Zero) = v
  swap (Next (v, t)) (Succ n) = t n .

It implements V × V^X ≅ V^(1+X). Inlining swap into Equation (18) yields the look-up function

  lookup : ∀v . νStream v → (μNat → v)
  lookup (Out° (Next (v, t))) (In Zero) = v
  lookup (Out° (Next (v, t))) (In (Succ n)) = lookup t n

that accesses the nth element of a sequence. Its transpose

  lookup : μNat → (∀v . νStream v → v)
  lookup (In Zero) (Out° (Next (v, t))) = v
  lookup (In (Succ n)) (Out° (Next (v, t))) = lookup n t

simply swaps the two value arguments: given a natural number n, it returns a tailor-made, polymorphic access function that extracts the nth element. The inverse of swap implements V^(1+X) ≅ V × V^X and is defined

  swap° : ∀x . ∀v . (Nat x → v) → Stream (Exp x) v
  swap° f = Next (f Zero, f · Succ) .

If we inline swap° into Equation (19), we obtain

  tabulate : ∀v . (μNat → v) → νStream v
  tabulate f = Out° (Next (f (In Zero), tabulate (f · In · Succ)))

that memoises a function from the naturals. By construction, lookup and tabulate are inverses.
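A self-contained ASCII version of Example 8 can be run directly. Nat, Stream, lookupS and tabulate below are our renderings of μNat, νStream and the two arrows, and the memoised Fibonacci function is an added usage example:

  data Nat = Z | S Nat
  data Stream v = Next v (Stream v)

  -- Look-up: walk down the stream, an adjoint fold in disguise
  lookupS :: Stream v -> Nat -> v
  lookupS (Next v _) Z     = v
  lookupS (Next _ t) (S n) = lookupS t n

  -- Tabulation: an unfold producing the infinite memo table lazily
  tabulate :: (Nat -> v) -> Stream v
  tabulate f = Next (f Z) (tabulate (f . S))

  -- Usage: Fibonacci memoised via the table; recursive calls go
  -- through 'fib', hence through the shared stream
  fib :: Nat -> Integer
  fib = lookupS table where
    table = tabulate fib'
    fib' Z         = 0
    fib' (S Z)     = 1
    fib' (S (S n)) = fib (S n) + fib n

Because table is shared across calls, each entry is computed at most once.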
The definitions of look-up and tabulate are generic: the same code works for any suitable combination of F and G. The natural transformation swap on the other hand depends on the particulars of F and G. The best we can hope for is a
polytypic definition that covers a large class of functors. The laws of exponentials provide the basis for the simple class of so-called polynomial functors.

  Exp 0 ≅ K 1    (20)
  Exp 1 ≅ Id    (21)
  Exp (A + B) ≅ Exp A ×̇ Exp B    (22)
  Exp (A × B) ≅ Exp A · Exp B    (23)
Throughout the paper we have used λ-notation to denote functors. We can extend tabulation to a much larger class of objects if we make this precise. The central idea is to interpret λ-notation using the cartesian closed structure on Alg, the category of ω-cocomplete categories and ω-cocontinuous functors. The resulting calculus is dubbed Λ-calculus. The type constructors 0, 1, +, × and μ are given as constants in this language. Naturally, the constants 0 and 1 are interpreted as initial and final objects; the constructors + and × are mapped to (curried versions of) the coproduct and the product functor. The interpretation of μ, however, is less obvious. It turns out that the fixed point operator, which maps an endofunctor to its initial algebra, defines a higher-order functor of type μ : C^C → C, whose action on arrows is given by μ α = ⦇in · α⦈ [12]. For reasons of space, we only sketch the syntax and semantics of the Λ-calculus; see [12] for a more leisurely exposition. Its raw syntax is given below.

  κ ::= ∗ | κ → κ
  T ::= C | X | T T | Λ X . T
  C ::= 0 | 1 | + | × | μ

The so-called kind ∗ represents types that contain values. The kind κ1 → κ2 represents type constructors that map type constructors of kind κ1 to those of kind κ2. The kinds of the constants in C are fixed as follows.

  0, 1 : ∗        +, × : ∗ → (∗ → ∗)        μ : (∗ → ∗) → ∗
The interpretation of this calculus in a cartesian closed category is completely standard [13]. We provide, in fact, two interpretations, one for the types of keys and one for the types of memo tables, and then relate the two, showing that the latter interpretation is a tabulation of the former. For keys, we specialise the standard interpretation to the category Alg, fixing a category C as the interpretation of ∗. For memo tables, the category of ω-complete categories and ω-continuous functors serves as the target. The semantics of ∗ is given by (C^C)^op, which is ω-complete since C is ω-cocomplete. In other words, ∗ is interpreted by the domain and the codomain of the adjoint functor Exp, respectively. The semantics of types is determined by the interpretation of the constants. (We use the same names both for the syntactic and the semantic entities.)

  K⟦0⟧ = 0     K⟦1⟧ = 1     K⟦+⟧ = +     K⟦×⟧ = ×     K⟦μ⟧ = μ
  T⟦0⟧ = K 1   T⟦1⟧ = Id    T⟦+⟧ = ×̇     T⟦×⟧ = ·     T⟦μ⟧ = ν
R. Hinze
Finally, to relate the types of keys and memo tables, we set up a kind-indexed logical relation. ⇐⇒ Exp A ∼ (A, F) ∈ R∗ =F (A, F) ∈ Rκ1 →κ2 ⇐⇒ ∀X Y . (X , Y ) ∈ Rκ1 =⇒ (A X , F Y ) ∈ Rκ2 The first clause expresses the relation between key types and memo-table functors. The second closes the logical relation under application and abstraction. Theorem 2 (Tabulation). For closed type terms T : κ, (K T , T T ) ∈ Rκ . Proof. We show that R relates the interpretation of constants. The statement then follows from the ‘basic lemma’ of logical relations. Equations (20)–(23) imply (K C , T C ) ∈ Rκ for C = 0, 1, + and ×. By definition, (μ, ν) ∈ R(∗→∗)→∗ iff ∀X Y . (X , Y ) ∈ R∗→∗ =⇒ (μX , νY ) ∈ R∗ . Since the precondition is equivalent to Exp · X ∼ = Y · Exp, Theorem 1 is applicable and implies Exp (μX ) ∼
= νY , as desired. Note that the functors T T contain only products, no coproducts, hence the terms memo table and tabulation.
7 Related Work
The initial algebra approach to the semantics of datatypes originates in the work of Lambek on fixed points in categories [6]. Lambek suggests that lattice theory provides a fruitful source of inspiration for results in category theory. This viewpoint was taken up by Backhouse et al. [1], who generalise a number of lattice-theoretic fixed point rules to category theory, type fusion being one of them. (The paper contains no proofs; these are provided in an unpublished manuscript [7]. Type fusion is established by showing that L ⊣ R induces an adjunction between the categories of F- and G-algebras.) The rules are illustrated by deriving isomorphisms between list types (cons and snoc lists) — currying is the only adjunction considered. Finite versions of memo tables are known as tries or digital search trees. Knuth [14] attributes the idea of a trie to Thue [15]. Connelly and Morris [16] formalised the concept of a trie in a categorical setting: they showed that a trie is a functor and that the corresponding look-up function is a natural transformation. The author gave a polytypic definition of memo tables using type-indexed datatypes [17,18], which Section 6 puts on a sound theoretical footing. The insight that a function from an inductive type is tabulated by a coinductive type is due to Altenkirch [19]. He also mentions fusion as a way of proving tabulation correct, but does not spell out the details. (Altenkirch attributes the idea to Backhouse.) Adjoint folds and unfolds were introduced in a recent paper by the author [2], which in turn was inspired by Bird and Paterson's work on generalised folds [20]. The fact that μ is a higher-order functor seems to be folklore, see [12] for a recent
reference. That paper also introduces the λ-calculus for types that we adopted for Theorem 2.
Acknowledgement. Thanks are due to the anonymous referees of AMAST 2010 for helpful suggestions and for pointing out some presentational problems.
References

1. Backhouse, R., Bijsterveld, M., van Geldrop, R., van der Woude, J.: Categorical fixed point calculus. In: Johnstone, P.T., Rydeheard, D.E., Pitt, D.H. (eds.) CTCS 1995. LNCS, vol. 953, pp. 159–179. Springer, Heidelberg (1995)
2. Hinze, R.: Adjoint folds and unfolds. In: Bolduc, C., Desharnais, J., Ktari, B. (eds.) Mathematics of Program Construction. LNCS, vol. 6120, pp. 195–228. Springer, Heidelberg (2010)
3. Bird, R., de Moor, O.: Algebra of Programming. Prentice Hall Europe, London (1997)
4. Backhouse, R., Jansson, P., Jeuring, J., Meertens, L.: Generic programming: an introduction. In: Swierstra, S.D., Henriques, P.R., Oliveira, J.N. (eds.) AFP 1998. LNCS, vol. 1608, pp. 28–115. Springer, Heidelberg (1999)
5. Peyton Jones, S.: Haskell 98 Language and Libraries. Cambridge University Press, Cambridge (2003)
6. Lambek, J.: A fixpoint theorem for complete categories. Math. Zeitschr. 103, 151–161 (1968)
7. Backhouse, R., Bijsterveld, M., van Geldrop, R., van der Woude, J.: Category theory as coherently constructive lattice theory (1994), http://www.cs.nott.ac.uk/~rcb/MPC/CatTheory.ps.gz
8. Hughes, J.: Type specialisation for the λ-calculus; or, a new paradigm for partial evaluation based on type inference. In: Danvy, O., Glück, R., Thiemann, P. (eds.) Dagstuhl Seminar 1996. LNCS, vol. 1110, pp. 183–215. Springer, Heidelberg (1996)
9. Danvy, O.: An extensional characterization of lambda-lifting and lambda-dropping. In: Middeldorp, A., Sato, T. (eds.) FLOPS 1999. LNCS, vol. 1722, pp. 241–250. Springer, Heidelberg (1999)
10. Hinze, R.: Functional pearl: Perfect trees and bit-reversal permutations. Journal of Functional Programming 10(3), 305–317 (2000)
11. Mac Lane, S.: Categories for the Working Mathematician, 2nd edn. Graduate Texts in Mathematics. Springer, Berlin (1998)
12. Gibbons, J., Paterson, R.: Parametric datatype-genericity. In: Jansson, P. (ed.) Proceedings of the 2009 ACM SIGPLAN Workshop on Generic Programming, pp. 85–93. ACM Press, New York (2009)
13. Crole, R.L.: Categories for Types. Cambridge University Press, Cambridge (1994)
14. Knuth, D.E.: The Art of Computer Programming, vol. 3: Sorting and Searching, 2nd edn. Addison-Wesley, Reading (1998)
15. Thue, A.: Über die gegenseitige Lage gleicher Teile gewisser Zeichenreihen. Skrifter udgivne af Videnskaps-Selskabet i Christiania, Mathematisk-Naturvidenskabelig Klasse 1, 1–67 (1912); reprinted in Thue's "Selected Mathematical Papers", pp. 413–477. Universitetsforlaget, Oslo (1977)
16. Connelly, R.H., Morris, F.L.: A generalization of the trie data structure. Mathematical Structures in Computer Science 5(3), 381–418 (1995)
17. Hinze, R.: Generalizing generalized tries. Journal of Functional Programming 10(4), 327–351 (2000)
18. Hinze, R.: Memo functions, polytypically! In: Jeuring, J. (ed.) Proceedings of the 2nd Workshop on Generic Programming, Ponte de Lima, Portugal, pp. 17–32 (July 2000); the proceedings appeared as technical report UU-CS-2000-19, Universiteit Utrecht
19. Altenkirch, T.: Representations of first order function types as terminal coalgebras. In: Abramsky, S. (ed.) TLCA 2001. LNCS, vol. 2044, pp. 62–78. Springer, Heidelberg (2001)
20. Bird, R., Paterson, R.: Generalised folds for nested datatypes. Formal Aspects of Computing 11(2), 200–222 (1999)
Coalgebraic Semantics for Parallel Derivation Strategies in Logic Programming

Ekaterina Komendantskaya¹, Guy McCusker², and John Power²

¹ Department of Computing, University of Dundee, UK
² Department of Computer Science, University of Bath, UK
Abstract. Logic programming, a class of programming languages based on first-order logic, provides simple and efficient tools for goal-oriented proof-search. Logic programming supports recursive computations, and some logic programs resemble the inductive or coinductive definitions written in functional programming languages. In this paper, we give a coalgebraic semantics to logic programming. We show that ground logic programs can be modelled by either Pf Pf-coalgebras or Pf List-coalgebras on Set. We analyse different kinds of derivation strategies and derivation trees (proof-trees, SLD-trees, and-or parallel trees) used in logic programming, and show how they can be modelled coalgebraically.

Keywords: Logic programming, SLD-resolution, Parallel Logic programming, Coalgebra, Coinduction.
1 Introduction
In the standard formulations of logic programming, such as in Lloyd's book [19], a first-order logic program consists of a finite set of clauses of the form

  A ← A1, . . . , An

where A and the Ai's are atomic formulae, typically containing free variables, and A1, . . . , An is to mean the conjunction of the Ai's. Note that n may be 0. The central algorithm for logic programming, called SLD-resolution, takes a goal G = ← B1, . . . , Bn, which is also meant as a conjunction of atomic formulae typically containing free variables, and constructs a proof for an instantiation of G from substitution instances of the clauses in a given logic program P. The algorithm uses Horn-clause logic, with variable substitution determined universally to make the first atom in G agree with the head of a clause in P, then proceeding inductively.
The work was supported by the Engineering and Physical Sciences Research Council, UK; Postdoctoral Fellow research grant EP/F044046/1. This document is an output from the PMI2 Project funded by the UK Department for Innovation, Universities and Skills (DIUS) for the benefit of the Japanese Higher Education Sector and the UK Higher Education Sector. The views expressed are not necessarily those of DIUS, nor British Council.
Despite its minimal syntax, logic programming can express recursive and even corecursive computations; it is adaptable to natural language processing; and it allows implicit parallel execution of programs. These features led to its various applications in computer science, such as constraint logic programming, DATALOG, and parallel logic programming. Logic programming has had direct and indirect impact on various disciplines of computer science.

There have been several successful attempts to give a categorical characterisation of logic programs and computations by logic programs. Among the earliest results was the characterisation of the first-order language as a Lawvere theory [2,5,6,17], and most general unifiers (mgus) as equalisers [4] and pullbacks [6,2]. There were several approaches to operational semantics of logic programming, notably built upon the observation that logic programs resemble transition systems or rewriting systems [5,7]. The coalgebraic semantics we propose will ultimately rely upon Lawvere theories, mgus as equalisers, and the operational behaviour of SLD-derivations given by state transitions. However, our original contribution is a coalgebraic characterisation of different derivation strategies in logic programming. As we show in Section 3, the algebraic semantics for logic programming [2,6,17] fails to give an account of the possibly infinite derivations that arise in the practice of logic programming. The coalgebraic semantics we propose fills this gap. Among the major advantages of this semantics is that it neatly models different strategies for parallel execution of logic programs, as we explain in Section 5.

In order to concentrate on derivations, rather than Lawvere theories, we ignore variables here. Ground logic programs have the advantage of yielding a variety of parallel derivation strategies [12,23], as opposed to the general case, for which the algorithms of unification and SLD-resolution are P-complete [27,15]. The variable-free setting is more general than it may first appear: for some logic programs, only finitely many "ground" substitutions are possible, the exception being logic programs that describe potentially infinite data. Therefore, one can often emulate an arbitrary logic program by a variable-free one.

For the rest of the paper we assume that all clauses are ground, and that there are only finitely many predicate symbols, as is implicit in the definition of logic program. Then an atomic formula A may be the head of one clause, or no clause, or many clauses, but only ever finitely many. Each clause with head A has a finite number of atomic formulae A1, . . . , An in its antecedent. So one can see a logic program without variables as a coalgebra of the form p : At −→ Pf(Pf(At)), where At is the set of atomic formulae: p sends each atomic formula A to the set of the sets of atomic formulae in each antecedent of each clause for which A is the head. Thus we can identify a variable-free logic program with a Pf Pf-coalgebra on Set.

In fact, we can go further. If we let C(Pf Pf) be the cofree comonad on Pf Pf, then given a logic program qua Pf Pf-coalgebra, the corresponding C(Pf Pf)-coalgebra structure describes the and-or parallel derivation trees of the logic program, yielding the basic computational construct of logic programming in the variable-free setting. We explain this in Section 5.
A similar analysis of SLD-derivations in logic programming can be given in terms of Pf List-coalgebras. Such an analysis would be suitable for logic programming applications that treat conjunctions ← B1, B2, . . . , Bn as lists. The main difference between the models based on Pf Pf- and Pf List-coalgebras is that they model different strategies of parallel SLD-derivations.

The paper is organised as follows. In Section 2, we discuss the operational semantics for logic programs given by SLD-resolution. In Section 3, we analyse the finite and infinite SLD-derivations that arise in the practice of logic programming, and briefly outline the existing approaches to coinductive logic programming. In Section 4, we show how to model variable-free logic programs as coalgebras and exhibit the role of the cofree comonad C(Pf Pf); we discuss the difference between SLD-refutations modelled by C(Pf Pf)- and C(Pf List)-coalgebras. In Section 5, we show that the C(Pf List)-coalgebra is a sound and complete semantics for and-or parallel SLD-derivations. We prove that the semantics given by the C(Pf List)-coalgebra is correct with respect to the Theory of Observables [7] for logic programming. In Section 6, we conclude the paper and discuss future work.
2 Logic Programs and SLD Derivations
In this section, we recall the essential constructs of logic programming, as described, for instance, in Lloyd's standard text [19]. We start by describing the syntax of logic programs, after which we outline approaches to declarative and operational semantics of logic programs.

Definition 1. A signature Σ consists of a set of function symbols f, g, . . . each equipped with a fixed arity. The arity of a function symbol is a natural number indicating the number of arguments it is supposed to have. Nullary (0-ary) function symbols are allowed: these are called constants.

Given a countably infinite set Var of variables, terms are defined as follows.

Definition 2. The set Ter(Σ) of terms over Σ is defined inductively:
– x ∈ Ter(Σ) for every x ∈ Var.
– If f is an n-ary function symbol (n ≥ 0) and t1, . . . , tn ∈ Ter(Σ), then f(t1, . . . , tn) ∈ Ter(Σ).

Variables will be denoted x, y, z, sometimes with indices x1, x2, x3, . . ..

Definition 3. A substitution is a map θ : Ter(Σ) → Ter(Σ) which satisfies θ(f(t1, . . . , tn)) ≡ f(θ(t1), . . . , θ(tn)) for every n-ary function symbol f.
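A quick executable reading of Definitions 1–3 can be given in Haskell (a sketch; the representation of a substitution by its action on variables, and all names, are ours):

  -- Terms over a signature: variables and function symbols with arguments
  data Term = Var String | Fun String [Term]
    deriving Show

  -- A substitution, given by its action on variables and extended
  -- homomorphically to all terms, as Definition 3 requires
  type Subst = [(String, Term)]

  applySubst :: Subst -> Term -> Term
  applySubst th (Var x)    = maybe (Var x) id (lookup x th)
  applySubst th (Fun f ts) = Fun f (map (applySubst th) ts)

For instance, applySubst [("x", Fun "a" [])] (Fun "f" [Var "x"]) yields Fun "f" [Fun "a" []].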
We define an alphabet to consist of a signature Σ, the set Var, and a set of predicate symbols P, P1, P2, . . ., each assigned an arity. Let P be a predicate symbol of arity n and t1, . . . , tn be terms. Then P(t1, . . . , tn) is a formula (also called an atomic formula or an atom). The first-order language L given by an alphabet consists of the set of all formulae constructed from the symbols of the alphabet.

Definition 4. Given a first-order language L, a logic program consists of a finite set of clauses of the form A ← A1, . . . , An, where A, A1, . . . , An (n ≥ 0) are atoms. The atom A is called the head of a clause, and A1, . . . , An is called its body. Clauses with empty bodies are called unit clauses. A goal is given by ← A1, . . . , An, where A1, . . . , An (n ≥ 0) are atoms.

Definition 5. A term, an atom, or a clause is called ground if it contains no variables. A term t is an instance of a term t1 if there exists a substitution σ such that σ(t1) = t; additionally, if t is ground, it is called a ground instance of t1; similarly for atoms and clauses.

Various implementations of logic programs require different restrictions to the first-order signature. One possible restriction is to remove function symbols of arity n > 0 from the signature; the programming language based on such syntax is called DATALOG. The advantages of DATALOG are easier implementations and a greater capacity for parallelisation [27,15]. From the point of view of model theory, DATALOG programs always have finite models. Another possible restriction to the syntax of logic programs is ground logic programs: there is no restriction on the arity of function symbols, but variables are not allowed. Such a restriction yields easier implementations, because there is no need for unification algorithms, and such programs can also be implemented using parallel algorithms. We will return to this discussion in Section 4.

Remark 1. There are different approaches to the meaning of goals and bodies given by A, A1, . . . , An. One approach arises from first-order logic semantics and treats them as finite conjunctions, in which case the order of atoms is not important and repetitions of atoms are not considered. Another — practical — approach is to treat bodies as sequences of atoms, in which case repetitions and order can play a role in computations. In Section 4, we show that both approaches are equally sound in the case of ground logic programs; however, the first-order case requires the use of lists, and the majority of PROLOG interpreters treat goals as lists.

One of our running examples will be the following program.

Example 1. Let GC (for graph connectivity) denote the logic program

  connected(x,x) ←
  connected(x,y) ← edge(x,z), connected(z,y).
Here, we used the predicates "connected" and "edge" to make the intended meaning of the program clear. Additionally, there may be clauses that describe the database, in our case the edges of a particular graph, e.g.

  edge(a,b) ←
  edge(b,c) ← .

The latter two clauses are ground, and the atoms in them are ground instances of the atom edge(x,z). A typical goal would be ← connected(a,x).

Traditionally, logic programming has been given least fixed point semantics [19]. Given a logic program P, one lets BP (also called a Herbrand base) denote the set of atomic ground formulae generated by the syntax of P, and one defines the TP operator on 2^BP by sending I to the set {A ∈ BP : A ← A1, . . . , An is a ground instance of a clause in P with {A1, . . . , An} ⊆ I}. The least fixed point of TP is called the least Herbrand model of P and duly satisfies model-theoretic properties that justify that expression [19]. A non-ground alternative to the ground fixed point semantics has been given in [9], and further developed in terms of categorical logic [2,6]. The fact that logic programs can be naturally represented via fixed point semantics has led to the development of logic programs as inductive definitions [22,14], as opposed to the view of logic programs as first-order logic.

Operational semantics for logic programs is given by SLD-resolution, a goal-oriented proof-search procedure. Given a substitution θ as in Definition 3 and an atom A, we write Aθ for the atom given by applying the substitution θ to the variables appearing in A. Moreover, given a substitution θ and a list of atoms (A1, . . . , Ak), we write (A1, . . . , Ak)θ for the simultaneous substitution of θ in each Am.

Definition 6. Let S be a finite set of atoms. A substitution θ is called a unifier for S if, for any pair of atoms A1 and A2 in S, applying the substitution θ yields A1θ = A2θ. A unifier θ for S is called a most general unifier (mgu) for S if, for each unifier σ of S, there exists a substitution γ such that σ = θγ.

Definition 7. Let a goal G be ← A1, . . . , Am, . . . , Ak and a clause C be A ← B1, . . . , Bq. Then G′ is derived from G and C using mgu θ if the following conditions hold:
• Am is an atom, called the selected atom, in G.
• θ is an mgu of Am and A.
• G′ is the goal ← (A1, . . . , Am−1, B1, . . . , Bq, Am+1, . . . , Ak)θ.

A clause Ci* is a variant of the corresponding clause Ci if Ci* = Ciθ, with θ being a variable renaming substitution such that variables in Ci* do not appear in the derivation up to Gi−1. This process of renaming variables is called standardising the variables apart.

Definition 8. An SLD-derivation of P ∪ {G} consists of a sequence of goals G = G0, G1, . . . called resolvents, a sequence C1, C2, . . . of variants of program clauses of P, and a sequence θ1, θ2, . . . of mgus such that each Gi+1 is derived from Gi and Ci+1 using θi+1. An SLD-refutation of P ∪ {G} is a finite SLD-derivation of P ∪ {G} which has the empty clause □ as its last goal. If Gn = □, we say that the refutation has length n.
Operationally, SLD-derivations can be characterised by two kinds of trees, called SLD-trees and proof-trees, the latter so called for their close relation to proof-trees in e.g. the sequent calculus.

Definition 9. Let P be a logic program and G be a goal. An SLD-tree for P ∪ {G} is a tree T satisfying the following:
1. Each node of the tree is a (possibly empty) goal.
2. The root node is G.
3. If ← A1, . . . , Am, m > 0, is a node in T and it has n children, then there exists Ak ∈ A1, . . . , Am such that Ak is unifiable with exactly n distinct clauses C1 = A^1 ← B^1_1, . . . , B^1_q, . . . , Cn = A^n ← B^n_1, . . . , B^n_r in P via mgus θ1, . . . , θn, and, for every i ∈ {1, . . . , n}, the ith child node is given by the goal ← (A1, . . . , Ak−1, B^i_1, . . . , B^i_q, Ak+1, . . . , Am)θi.
4. Nodes which are the empty clause have no children.

Each SLD-derivation, or, equivalently, each branch of an SLD-tree, can be represented by a proof-tree, defined as follows.

Definition 10. Let P be a logic program and G = ← A be an atomic goal. A proof-tree for A is a possibly infinite tree T such that
– Each node in T is an atom.
– A is the root of T.
– For every node A′ occurring in T, if A′ has children C1, . . . , Cm, then there exists a clause B ← B1, . . . , Bm in P such that B and A′ are unifiable with mgu θ, and B1θ = C1, . . . , Bmθ = Cm.

As pointed out in [26], the relationship between proof-trees and SLD-trees is the relationship between deterministic and nondeterministic computations. Whether the complexity classes defined via proof-trees are equivalent to complexity classes defined via search trees is a reformulation of the classic P=NP problem in terms of logic programming.

We will illustrate the two kinds of trees in the following example.

Example 2. Consider the following simple ground logic program.

  q(b,a) ←
  s(a,b) ←
  p(a) ← q(b,a), s(a,b)
  q(b,a) ← s(a,b)
Figure 1 shows a proof-tree and an SLD-tree for this program. The proof-tree corresponds to the left-hand side branch of the SLD-tree.
  Proof-tree:                    SLD-tree:

      ← p(a)                        ← p(a)
      /     \                          |
  ← q(b,a)  ← s(a,b)           ← q(b,a), s(a,b)
      |         |               /            \
      □         □          ← s(a,b)    ← s(a,b), s(a,b)
                                |              |
                                □          ← s(a,b)
                                               |
                                               □

Fig. 1. A proof-tree (left) and an SLD-tree (right) for the logic program of Example 2
SLD-resolution is sound and complete with respect to the least fixed point semantics. The classical theorems of soundness and completeness of this operational semantics [19,9] show that every atom in the set computed by the least fixed point of TP has a finite SLD-refutation, and vice versa.
3 Finite and Infinite Computations by Logic Programs
The analysis afforded by least fixed point operators focuses solely on finite SLD-derivations. But infinite SLD-derivations are nonetheless common in the practice of programming. Two kinds of infinite SLD-derivations are possible: those computing finite objects and those computing infinite objects.

Example 3. Consider the logic program from Example 1. It is easy to facilitate infinite SLD-derivations by simply adding a clause that makes the graph cyclic:

  edge(c,a) ← .

Taking the query ← connected(a,z) as a goal would lead to an infinite SLD-derivation corresponding to an infinite path starting from a in the cycle. However, the object that is described by this program, the cyclic graph with three nodes, is finite.

Unlike the derivations above, some derivations compute infinite objects.

Example 4. The following program stream defines the infinite stream of binary bits:

  bit(0) ←
  bit(1) ←
  stream(cons(x,y)) ← bit(x), stream(y)

Programs like stream can be given declarative semantics via the greatest fixed point of the semantic operator TP. However, the fixed point semantics is incomplete in general [19]: it fails for some infinite derivations.
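The coinductive reading of Example 4 has a direct analogue in a lazy functional language. The following sketch (our Haskell rendering, with made-up names Bit, StreamOf and bits01) shows an infinite object produced by the corecursion that an infinite SLD-derivation would unfold:

  data Bit = B0 | B1 deriving Show

  -- An infinite stream of bits, as in the 'stream' predicate of Example 4
  data StreamOf a = Cons a (StreamOf a)

  -- One inhabitant: the alternating stream 0, 1, 0, 1, ...
  bits01 :: StreamOf Bit
  bits01 = Cons B0 (Cons B1 bits01)

  -- Lazily observe a finite prefix
  takeS :: Int -> StreamOf a -> [a]
  takeS 0 _          = []
  takeS n (Cons x s) = x : takeS (n - 1) s

For instance, takeS 4 bits01 yields [B0, B1, B0, B1]: only finite prefixes of the infinite object are ever computed, just as only finite prefixes of an infinite derivation are ever observed.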
Example 5. The program below is characterised by the greatest fixed point of the TP operator, which contains R(f^ω(a)), whereas no infinite term will be computed via SLD-resolution.

  R(x) ← R(f(x))

There have been numerous attempts to resolve the mismatch between infinite derivations and greatest fixed point semantics [14,19,22,24]. But the infinite SLD-derivations of both finite and infinite objects have not yet received a uniform semantics; see Figure 2. In [17,18] we described algebraic fibrational semantics and proved a soundness and completeness result for it with respect to finite SLD-refutations, see Figure 2. Alternative algebraic semantics for logic programming were given in [2,6]. In this paper, we give a coalgebraic treatment of both finite and infinite SLD-derivations, and prove soundness and completeness of this semantics for ground logic programs.
[Diagram: 'Least fixed point of TP' and 'Algebraic fibrational semantics' are each linked by ↔ to 'Finite SLD-derivations'; 'Greatest fixed point of TP' is linked by → and 'Coalgebraic fibrational semantics' by a dotted arrow to 'Finite and Infinite SLD-derivations'.]

Fig. 2. Alternative declarative semantics for finite and infinite SLD-derivations. The solid arrows ↔ show the semantics that are both sound and complete, the solid arrow → indicates the sound but incomplete semantics, and the dotted arrow indicates the sound and complete semantics for ground logic programs that we propose here.
4 Coalgebraic Semantics for Ground Logic Programs
In this section, we consider a coalgebraic semantics for ground logic programs. This semantics is intended to give meaning to both finite and infinite SLD-derivations.

Example 6. A ground logic program equivalent to the program from Example 1 can be obtained by taking all ground instances of the non-ground clauses, for example:
connected(a,a) ←
connected(b,b) ←
. . .
connected(a,b) ← edge(a,c), connected(c,b)
connected(a,c) ← edge(a,b), connected(b,c)
. . .

When a non-ground program contains no function symbols (cf. Example 1), there always exists a finite ground program equivalent to it, such as the one in the example above. In this section, we will work only with finite ground logic programs. Finite ground logic programs can still give rise to infinite SLD-derivations: if the graph described by the logic program above is cyclic, it can give rise to infinite SLD-derivations, see Example 3.

Variable-free logic programs and SLD-derivations over such programs bear a strong resemblance to finitely branching transition systems (see also [5,7]): atoms play the role of states, and the implication arrow ← used to define clauses plays the role of the transition relation. The main difference between logic programs and transition systems is that each atom is sent to a set of subsets of the atoms appearing in the program, and thus the finite powerset functor Pf is iterated twice.

Proposition 1. For any set At, there is a bijection between the set of variable-free logic programs over the set of atoms At and the set of Pf Pf-coalgebra structures on At.

Proof. Given a variable-free logic program P, let At be the set of all atoms appearing in P. Then P can be identified with a Pf Pf-coalgebra (At, p), where p : At −→ Pf(Pf(At)) sends an atom A to the set of bodies of those clauses in P with head A, each body being viewed as the set of atoms that appear in it.

Remark 2. One can alternatively view the bodies of clauses as lists of atoms. In this case, Proposition 1 can be stated in terms of Pf List-coalgebras. That is, a ground logic program P can be identified with a Pf List-coalgebra (At, p), where p : At −→ Pf(List(At)) sends an atom A to the set of bodies of those clauses in P with head A, each viewed as a list of atoms.

This coalgebraic representation of a logic program affords not only a representation of the program itself, but also of derivation trees for the program. As we now show, one can use the cofree comonad C(Pf Pf) on Pf Pf and the C(Pf Pf)-coalgebra determined by the Pf Pf-coalgebra to give an account of derivation trees for the atoms appearing in a given logic program. This bears comparison with the greatest fixed point semantics for logic programs, see e.g. [19], but we do not pursue that here. The following theorem uses the construction given in [16,28].
Theorem 1. Given an endofunctor H : Set −→ Set with a rank, the forgetful functor U : H-Coalg −→ Set has a right adjoint R.

Proof. R is constructed as follows. Given Y ∈ Set, we define a transfinite sequence of objects: put Y0 = Y and Yα+1 = Y × H(Yα). We define δα : Yα+1 −→ Yα inductively by

δα = Y × Hδα−1 : Yα+1 = Y × HYα −→ Y × HYα−1 = Yα,

with the case of α = 0 given by the projection π1 : Y1 = Y × HY −→ Y. For a limit ordinal α, let Yα = lim_{β<α} Yβ, determined by the sequence of maps δβ : Yβ+1 −→ Yβ. If H has a rank, there exists α such that Yα is isomorphic to Y × HYα. This Yα forms the cofree coalgebra on Y.

Corollary 1. If H has a rank, U has a right adjoint R and, putting G = RU, G possesses a canonical comonad structure and there is a coherent isomorphism of categories G-Coalg ≅ H-Coalg, where G-Coalg is the category of G-coalgebras for the comonad G.

It will be helpful to make the correspondence between H-coalgebras and G-coalgebras more explicit. Given an H-coalgebra p : Y −→ HY, we construct maps pα : Y −→ Yα for each ordinal α as follows. The map p0 : Y −→ Y is the identity, and for a successor ordinal, pα+1 = ⟨id, Hpα⟩ ◦ p : Y −→ Y × HYα. For limit ordinals, pα is given by the appropriate limit. By definition, the object GY is given by Yα for some α, and the corresponding pα is the required G-coalgebra.

We can apply the general results given by the proof of Theorem 1 and Corollary 1 to extend our analysis of logic programs viewed as Pf Pf-coalgebras.

Construction 1. Taking p : At −→ Pf Pf(At), by the proof of Theorem 1, the corresponding C(Pf Pf)-coalgebra, where C(Pf Pf) is the cofree comonad on Pf Pf, is given as follows: C(Pf Pf)(At) is given by a limit of the form

. . . −→ At × Pf Pf(At × Pf Pf(At)) −→ At × Pf Pf(At) −→ At.

This chain has length ω. As above, we inductively define the objects At0 = At and Atn+1 = At × Pf Pf Atn, and the cone

p0 = id : At −→ At (= At0)
pn+1 = ⟨id, Pf Pf(pn)⟩ ◦ p : At −→ At × Pf Pf Atn (= Atn+1)

and the limit determines the required coalgebra p̄ : At −→ C(Pf Pf)(At).
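The unfolding of Construction 1 admits a compact functional sketch: lazily unrolling a ground program given as a function Atom -> [[Atom]] (the encoding used in the earlier sketches; names and encoding are ours) produces the possibly infinite tree corresponding to p̄, and cutting it at depth n corresponds, up to bookkeeping, to the cone map pn:

-- an atom together with its or-alternatives, each a list of and-children
data AO = AO Atom [[AO]] deriving Show

unfoldAO :: Program -> Atom -> AO        -- the (possibly infinite) unfolding
unfoldAO p a = AO a [ [ unfoldAO p b | b <- body ] | body <- p a ]

approx :: Int -> AO -> AO                -- prune at depth n, mirroring p_n
approx 0 (AO a _)    = AO a []
approx n (AO a alts) = AO a [ [ approx (n - 1) t | t <- alt ] | alt <- alts ]

Laziness means unfoldAO is well defined even on cyclic programs; approx then extracts the finite approximants.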
Construction 2. Note that, by Remark 2, a similar construction for Pf List-coalgebras can be used to give semantics to SLD-derivations for logic programs whose bodies are treated as lists of atoms: one just has to use Pf List instead of Pf Pf in Construction 1.

Construction 1 describes a structure that resembles the derivation trees used in logic programming, but as we show in the next example, these are not the traditional proof-trees or SLD-trees from Definitions 9 and 10.

Example 7. Consider the logic program from Example 2. The program has three atoms, namely q(b,a), s(a,b) and p(a). So At = {q(b,a), s(a,b), p(a)}, and the program can be identified with the Pf Pf-coalgebra structure on At given by

p(q(b,a)) = {{}, {s(a,b)}}, where {} is the empty set;
p(s(a,b)) = {{}}, i.e., the one-element set consisting of the empty set;
p(p(a)) = {{q(b,a), s(a,b)}}.

Consider the C(Pf Pf)-coalgebra corresponding to p. It sends p(a) to the parallel refutation of p(a) depicted on the left side of Figure 3. Note that the nodes of the tree alternate between those labelled by atoms and those labelled by bullets (•). The set of children of each bullet represents a goal, made up of the conjunction of the atoms in the labels. An atom with multiple children is the head of multiple clauses in the program: its children represent these clauses. We use the traditional notation 2 to denote {}. Where an atom has a single •-child, we can elide that node without losing any information; the result of applying this transformation to our example is shown on the right in Figure 3. As we shall shortly explain, the resulting tree is precisely the parallel and-or derivation tree for the atomic goal ← p(a); see Definition 11.

In contrast, considering the program as a Pf List-coalgebra, we would have:

p(q(b,a)) = {nil, [s(a,b)]}, i.e., the set of two lists, with nil being the empty list;
p(s(a,b)) = {nil};
p(p(a)) = {[q(b,a) :: s(a,b)]}, i.e., the set containing the one list [q(b,a) :: s(a,b)].

The action of the corresponding C(Pf List)-coalgebra on p(a) can be depicted similarly to Figure 3, subject to a mildly modified understanding of the picture. In replacing Pf by List, the order of atoms in a goal is vital, and an atom may occur more than once. So to account for List, we must regard the diagram as a tree with a given embedding of the children of each •-node in the plane, and allow several children of the same •-node to be labelled by the same atom. Thus the children of a •-node are understood as a list, corresponding to their relative locations in the plane, and the same atom may appear more than once among them. On the other hand, the •-nodes are not ordered, so the children of the atom-nodes are not embedded in the plane.
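In the encoding of the earlier sketches, the coalgebra of Example 7 is exactly the prog function defined there, and a few levels of the unfolding reproduce, up to notation, the tree on the left of Figure 3 (an or-alternative of the form [] plays the role of the empty goal 2):

exampleTree :: AO
exampleTree = approx 3 (unfoldAO prog "p(a)")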
Fig. 3. The action of p̄ : At −→ C(Pf Pf)(At) on p(a), and the corresponding and-or derivation tree
5 Coalgebraic Semantics and Parallel Execution of Logic Programs
The tree shown in Example 7 differs from the SLD-tree (cf. Definition 9) and the proof-tree (cf. Definition 10) constructed for the same logic program in Example 2. The reason is that the derivations modelled by the G-coalgebras are strongly related to parallel logic programming [27,15], while both proof-trees and SLD-trees describe sequential derivation strategies. One of the distinguishing features of logic programming languages is that they allow implicit parallel execution of programs. The three main types of parallelism used in implementations of logic programs are and-parallelism, or-parallelism, and their combination; see [12,23] for an excellent analysis of parallelism in logic programming.

Or-parallelism arises when more than one clause unifies with the goal atom: the corresponding bodies can be executed in or-parallel fashion. Or-parallelism is thus a way of efficiently searching for solutions to a goal, by exploring alternative solutions in parallel. It corresponds to the parallel execution of the branches of an SLD-tree, cf. Definition 9 and Example 2. Or-parallelism has successfully been exploited in Prolog in the Aurora [20] and the Muse [1] systems, both of which have shown very good speed-up results over a considerable range of applications.

(Independent) And-parallelism arises when more than one atom is present in the goal, and the atoms do not share variables. In this section, we consider only ground logic programs, and so the latter condition is satisfied trivially. That is, given a goal G = ← B1, . . . , Bn, an And-parallel algorithm of SLD-resolution looks for SLD-derivations for each of the Bi simultaneously. Independent And-parallelism is thus a way of splitting up a goal into subgoals, and corresponds to the parallel computation of all branches in a proof-tree; see Definition 10 and Example 2.
Independent And-parallelism has been successfully exploited in the &-Prolog System [13]. The comonad we have constructed in the previous section models a synthetic form of parallelism: And-Or parallelism. The most common way to express And-Or parallelism in logic programs is through and-or trees [12], which consist of or-nodes and and-nodes. Or-nodes represent multiple clause heads unifying with a goal atom, while and-nodes represent multiple subgoals in the body of a clause being executed in and-parallel. And-Or parallel PROLOG was first implemented in the Andorra system [8], with many more implementations following it [12].

Definition 11. Let P be a logic program and G = ← A be an atomic goal. The parallel and-or derivation tree for A is the possibly infinite tree T satisfying the following properties.
– A is the root of T.
– Each node in T is either an and-node or an or-node.
– Each or-node is given by •.
– Each and-node is an atom.
– For every node A′ occurring in T, if A′ is unifiable with only one clause B ← B1, . . . , Bn in P with mgu θ, then A′ has n children given by and-nodes B1θ, . . . , Bnθ.
– For every node A′ occurring in T, if A′ is unifiable with exactly m > 1 distinct clauses C1, . . . , Cm in P via mgus θ1, . . . , θm, then A′ has exactly m children given by or-nodes, such that, for every i ∈ {1, . . . , m}, if Ci = B^i ← B^i_1, . . . , B^i_n, then the ith or-node has n children given by and-nodes B^i_1θi, . . . , B^i_nθi.
There are non-trivial choices about when to regard two trees as equivalent: one possibility is when they are equivalent in the plane, another is when they are combinatorially equivalent, a third is a hybrid. These correspond to characterisations as C(List List)-coalgebras, as C(Pf Pf)-coalgebras, and as C(Pf List)-coalgebras respectively, yielding three theorems, the second of which is as follows.

Theorem 2 (Soundness and completeness). Let P be a variable-free logic program, At the set of all atoms appearing in P, and p̄ : At −→ C(Pf Pf)(At) the C(Pf Pf)-coalgebra generated by P. (Recall that p̄ is constructed as a limit of a cone pn over an ω-chain.) Then, for any atom A, p̄(A) expresses precisely the same information as that given by the parallel and-or derivation tree for A; that is, the following holds:
– For a derivation step n of the parallel and-or tree for A, pn(A) is isomorphic to the and-or parallel tree for A of depth n.
– The and-or tree for A is of finite size and has depth n iff p̄(A) = pn(A).
– The and-or tree for A is infinite iff p̄(A) is given by the element of the limit lim_ω(pn)(At) of an infinite chain given by Construction 1.
Proof. We use Constructions 1 and 2 and put, for every A ∈ At:
– p0(A) = A;
– p1(A) = (A, {bodies of clauses in P with the head A});
– p2(A) = (A, {bodies of clauses in P with the head A, together with, for each formula Aij in the body of each clause with the head A, {bodies of clauses in P with the head Aij}});
– etc.
The limit of the sequence is precisely the structure described by Construction 1; moreover, for each A in At, p0(A) corresponds to the root of the and-or parallel tree, and each pn(A) corresponds to the nth parallel derivation step for A.

Note that the result above relates to the Theory of Observables for logic programming developed in [7]. According to this theory, the traditional characterisation of logic programs in terms of input/output behaviour is not sufficient for the purposes of program analysis and optimisation. Therefore, it is useful to have complete information about the SLD-derivation, e.g., the sequences of goals, most general unifiers, and variants of clauses. The following four observables are the most important for the theory [10,7].
– Partial answers are the substitutions associated to a resolvent in any SLD-derivation; correct partial answers are substitutions associated to a resolvent in any SLD-refutation.
– Call patterns are atoms selected in any SLD-derivation; correct call patterns are atoms selected in any SLD-refutation.
– Computed answers are the substitutions associated to an SLD-refutation.
– A successful derivation is the observation of successful termination.

As argued in [10,7], one of the main purposes of giving a semantics to logic programs is the ability to observe equal behaviors of logic programs and to distinguish logic programs with different computational behavior. Therefore, the choice of observables and semantic models is closely related to the choice of equivalence relation defined over logic programs [10].

Definition 12. Let P1 and P2 be ground logic programs. Then we define P1 ≈ P2 if and only if, for any (not necessarily ground) goal G, the following four conditions hold:
1. G has a refutation in P1 if and only if G has a refutation in P2;
2. G has the same set of computed answers in P1 and P2;
3. G has the same set of (correct) partial answers in P1 and P2;
4. G has the same set of call patterns in P1 and P2.
Following the terminology of [10,7], we can state the following correctness result.
Theorem 3. For ground logic programs P1 and P2, if the parallel and-or tree for P1 is equal to the parallel and-or tree for P2, then P1 ≈ P2.

The converse of Theorem 3 does not hold. That is, there can be observationally equivalent programs that have different and-or parallel trees.

Example 8. Consider two logic programs, P1 and P2, whose clauses are exactly the same, with the exception of one clause: P1 contains A ← B1, . . . , Bi, false, . . . , Bn; and P2 contains the clause A ← B1, . . . , Bi, false instead. The atoms in the clauses are such that B1, . . . , Bi have refutations in P1 and P2, and false is an atom that has no refutation in the programs. In this case, all derivations that involve the two clauses in P1 and P2 will always fail on false, and P1 will be observationally equivalent to P2. However, their and-or parallel trees will give an account of all the atoms in the clauses.
6 Conclusions
In this paper, we have modelled the derivation strategies of logic programming by coalgebra, with the bulk of our work devoted to modelling variable-free programs. We plan to extend the coalgebraic analysis to non-ground logic programs. In general, and-or parallelism, characterised here by Pf Pf-coalgebras and Pf List-coalgebras, is not sound for first-order derivations. In the practice of logic programming, more sophisticated parallel derivations are used in order to coordinate substitutions computed in parallel. For example, composition (and-or parallel) trees were introduced in [12] as a way to simplify the implementation of the traditional and-or trees, which required synchronisation of substitutions in the branches of parallel derivation trees. In the coalgebraic semantics of first-order logic programs, Pf List-coalgebras will play a more prominent role than Pf Pf-coalgebras, and their relation to composition trees will become important.

Another topic for future investigation will be to explore how different approaches to bisimilarity, as e.g. analysed in [25], relate to the observational semantics [7] and observational equivalence of logic programs. In Theorem 3, we used the equality of parallel and-or trees to characterise observational equivalence of programs, but one could consider bisimilarity of C(Pf List)-coalgebras instead. The choice of bisimilarity relation will determine the particular kind of observational equivalence characterised by the semantics. This line of research will become more prominent in the case of non-ground logic programs.

The situation regarding higher-order logic programming languages such as λ-PROLOG [21] is more subtle. Despite their higher-order nature, such logic programming languages typically make fundamental use of sequents. So it may well be fruitful to consider modelling them in terms of coalgebra too, albeit probably on a sophisticated base category such as a category of Heyting algebras. Another area of research would be to investigate the operational meaning of coinductive logic programming [3,11,24], which requires a slight modification to the algorithm of SLD-resolution we have considered in this paper.
References

1. Ali, K., Karlsson, R.: Full prolog and scheduling or-parallelism in muse. Int. Journal of Parallel Programming 19(6), 445–475 (1991)
2. Amato, G., Lipton, J., McGrail, R.: On the algebraic structure of declarative programming languages. Theor. Comput. Sci. 410(46), 4626–4671 (2009)
3. Ancona, D., Lagorio, G., Zucca, E.: Type inference by coinductive logic programming. In: Berardi, S., Damiani, F., de'Liguoro, U. (eds.) TYPES 2008. LNCS, vol. 5497, pp. 1–18. Springer, Heidelberg (2009)
4. Asperti, A., Martini, S.: Projections instead of variables: A category theoretic interpretation of logic programs. In: ICLP, pp. 337–352 (1989)
5. Bonchi, F., Montanari, U.: Reactive systems (semi-)saturated semantics and coalgebras on presheaves. Theor. Comput. Sci. 410(41), 4044–4066 (2009)
6. Bruni, R., Montanari, U., Rossi, F.: An interactive semantics of logic programming. TPLP 1(6), 647–690 (2001)
7. Comini, M., Levi, G., Meo, M.C.: A theory of observables for logic programs. Inf. Comput. 169(1), 23–80 (2001)
8. Costa, V.S., Warren, D.H.D., Yang, R.: Andorra-I: A parallel prolog system that transparently exploits both and- and or-parallelism. In: PPOPP, pp. 83–93 (1991)
9. Falaschi, M., Levi, G., Palamidessi, C., Martelli, M.: Declarative modeling of the operational behavior of logic languages. TCS 69(3), 289–318 (1989)
10. Gabrielli, M., Levi, G., Meo, M.: Observable behaviors and equivalences of logic programs. Information and Computation 122(1), 1–29 (1995)
11. Gupta, G., Bansal, A., Min, R., Simon, L., Mallya, A.: Coinductive logic programming and its applications. In: Dahl, V., Niemelä, I. (eds.) ICLP 2007. LNCS, vol. 4670, pp. 27–44. Springer, Heidelberg (2007)
12. Gupta, G., Costa, V.S.: Optimal implementation of and-or parallel prolog. In: Proceedings of Parallel Architectures and Languages Europe, PARLE 1992, pp. 71–92. Elsevier North-Holland, Inc., New York (1992)
13. Hermenegildo, M.V., Greene, K.J.: &-prolog and its performance: Exploiting independent and-parallelism. In: ICLP, pp. 253–268 (1990)
14. Jaume, M.: On greatest fixpoint semantics of logic programming. J. Log. Comput. 12(2), 321–342 (2002)
15. Kanellakis, P.C.: Logic programming and parallel complexity. In: Foundations of Deductive Databases and Logic Programming, pp. 547–585. M. Kaufmann, San Francisco (1988)
16. Kelly, G.M.: A unified treatment of transfinite constructions for free algebras, free monoids, colimits, associated sheaves, and so on. Bull. Austral. Math. Soc. 22, 1–83 (1980)
17. Kinoshita, Y., Power, A.J.: A fibrational semantics for logic programs. In: Herre, H., Dyckhoff, R., Schroeder-Heister, P. (eds.) ELP 1996. LNCS (LNAI), vol. 1050. Springer, Heidelberg (1996)
18. Komendantskaya, E., Power, J.: Fibrational semantics for many-valued logic programs: Grounds for non-groundness. In: Hölldobler, S., Lutz, C., Wansing, H. (eds.) JELIA 2008. LNCS (LNAI), vol. 5293, pp. 258–271. Springer, Heidelberg (2008)
19. Lloyd, J.: Foundations of Logic Programming, 2nd edn. Springer, Heidelberg (1987)
20. Lusk, E.L., Warren, D.H.D., Haridi, S.: The aurora or-parallel prolog system. New Generation Computing 7(2,3), 243–273 (1990)
21. Miller, D., Nadathur, G.: Higher-order logic programming. In: Shapiro, E. (ed.) ICLP 1986. LNCS, vol. 225, pp. 448–462. Springer, Heidelberg (1986)
22. Paulson, L.C., Smith, A.W.: Logic programming, functional programming, and inductive definitions. In: Schroeder-Heister, P. (ed.) ELP 1989. LNCS, vol. 475, pp. 283–309. Springer, Heidelberg (1991)
23. Pontelli, E., Gupta, G.: On the duality between or-parallelism and and-parallelism in logic programming. In: Haridi, S., Ali, K., Magnusson, P. (eds.) Euro-Par 1995. LNCS, vol. 966, pp. 43–54. Springer, Heidelberg (1995)
24. Simon, L., Bansal, A., Mallya, A., Gupta, G.: Co-logic programming: Extending logic programming with coinduction. In: Arge, L., Cachin, C., Jurdziński, T., Tarlecki, A. (eds.) ICALP 2007. LNCS, vol. 4596, pp. 472–483. Springer, Heidelberg (2007)
25. Staton, S.: Relating coalgebraic notions of bisimulation. In: Kurz, A., Lenisa, M., Tarlecki, A. (eds.) CALCO 2009. LNCS, vol. 5728, pp. 191–205. Springer, Heidelberg (2009)
26. Sterling, L., Shapiro, E.: The Art of Prolog. MIT Press, Cambridge (1986)
27. Ullman, J.D., Gelder, A.V.: Parallel complexity of logical query programs. Algorithmica 3, 5–42 (1988)
28. Worrell, J.: Toposes of coalgebras and hidden algebras. Electr. Notes Theor. Comput. Sci. 11 (1998)
Learning in a Changing World, an Algebraic Modal Logical Approach

Prakash Panangaden¹ and Mehrnoosh Sadrzadeh²

¹ School of Computer Science, McGill University, Montréal, Canada
[email protected]
Supported by NSERC and the Office of Naval Research.
² Oxford University Computing Laboratory, Oxford, UK
[email protected]
Corresponding author; supported by EPSRC grant EP/F042728/1.
Abstract. We develop an algebraic modal logic that combines epistemic and dynamic modalities with a view to modelling information acquisition (learning) by automated agents in a changing world. Unlike most treatments of dynamic epistemic logic, we have transitions that “change the state” of the underlying system and not just the state of knowledge of the agents. The key novel feature that emerges is the need to have a way of “inverting transitions” and distinguishing between transitions that “really happen” and transitions that are possible. Our approach is algebraic, rather than being based on a Kripke-style semantics. The semantics are given in terms of quantales. We study a class of quantales with the appropriate inverse operations and prove properties of the setting. We illustrate the ideas with toy robot-navigation problems. These illustrate how an agent learns information by taking actions.
1 Introduction

Epistemic logic has proved very important in the analysis of protocols in distributed systems (see, for example, [FHMV95]) and, more generally, in any situation where there is some notion of cooperation or "agreement" between agents. The original work in distributed systems, by Halpern and Moses [HM84, HM90] and several others, modelled the knowledge of agents using Kripke-style [Kri63] models. In these models there is a set of states (often called "possible worlds") in which the agent could be and, for each agent, an equivalence relation on the states. If two states are equivalent to an agent then that agent cannot "tell them apart". An agent "knows" a fact φ in the state s if, in all states t that the agent "thinks" are equivalent to s, the fact φ holds. The quoted words in the preceding sentences are, of course, unnecessary anthropomorphisms that are intended to give an intuition for the definitions. A vital part of any analysis is how processes "learn" as they participate in the protocol. The bulk of papers in the distributed systems community treat this as a change in the Kripke equivalence relations and argue about these changes only in the semantics. The logic itself does not have the "dynamic" modalities that refer to updating of the state of knowledge. On the other hand, dynamic epistemic logic has indeed been studied; see, for example, the recent book [vDvdHK08]. In the second author's
doctoral dissertation an algebraic approach to dynamic epistemic logic was studied in depth [BCS07, Sad06]. The advantage of working in the algebraic setting is that it abstracts over the details of the Kripke structures and showcases the high-level structure of the actions and their updates. It permits the use of simple algebraic reasoning; in particular, the use of adjoints allows one to manipulate inequalities in a useful manner. As a result, one can relate the structure of epistemic actions and their updates to other areas of computer science, e.g. reasoning about correctness of programs and observational logics, e.g. of Abramsky and Vickers [AV93]. It turns out that epistemic update is the action of the quantale of programs/actions on the module of propositions (factual and epistemic), hence it is the left adjoint to the dynamic modality which encodes the weakest precondition of Hoare Logic. Secondly, epistemic modalities are also encoded as an adjoint pair: the belief modality is the right adjoint of the appearance map, which is the lifting to subsets of the accessibility relation of the Kripke structure. The presentation of epistemic modalities as adjoints was one of the key innovations of [Sad06]. This gives a very simple method of reasoning about knowledge acquisition after an action, i.e. by uniform unfolding of epistemic and dynamic adjunctions. This method simplifies, to a great extent, the proofs of complex protocols and puzzles, such as the muddy children, even the versions with dishonest children; for details see [BCS07, Sad06].

The bulk of the work in this area (algebraic and relational) concerns situations where the state of knowledge is changed by broadcasts, but not situations where the state of the system is changed. An illuminating concrete example of such situations arises in robot navigation. A general feature of these protocols is that an agent is given the description of a place, but cannot determine exactly where it is; however, it can move and as a result may acquire information that allows it to infer its present location. Consider a robot that is given the map of a small computing laboratory with 5 rooms accessible via 3 actions, as follows:

[Transition diagram: s1 --a--> s4; s2 --a--> s1; s1 --b--> s3; s2 --b--> s3; s3 --c--> s5; s4 --c--> s5]

Since the robot can do the same actions in the pairs s1, s2 and s3, s4, it cannot tell them apart. Once in s1 (similarly for s2), it thinks that it could be in s1 or s2, and once in s3 (similarly for s4), it thinks that it could be in s3 or s4. But if it is in s1 and it performs an a action, then it reaches s4 and learns where it is and where it had been just before the a action. A deeper investigation of such situations reveals that it is not a question of "patching up" the theory. There are some interesting fundamental changes that need to be made. First of all, one has to distinguish between transitions that exist in the agent's "mental model" of the system and actions that actually occur. Second, one has to introduce a converse dynamic modality in order to correctly formulate the axioms for updating knowledge. To see why, let us reason as we think the robot should: when it reaches s4, it checks with its map and reasons that the only way it could have reached s4 would be that it was originally in s1. It rules out s3 from its uncertainty set about s4, because, according to the map, it could not have reached s3 via an a action. We have two types of
data here: the locations and actions described on the map versus the ones in reality. The data on the map are hard-coded in the robot and there is no uncertainty about them; the map fully describes the system. But the real locations and actions are only partially known. The robot is uncertain about its location, but the actions it takes change its uncertainties. The other issue is that, to be able to encode what actions could have led the robot to where it is, it needs to look back, so we need a converse operation to reason about the past. Now by moving from s1 to s4, the robot has changed its uncertainty, acquired information, and learned where it is located. This is exactly the manner in which our new uncertainty reduction axiom formalizes the elimination of past uncertainties: after performing a certain move in the real world, the robot consults its description, considers its possibilities and eliminates the ones that could not have been reached as a result of the action it just performed. Furthermore, with this converse operation, we can also derive information about the past: that the robot was in s1 before doing action a.

This paper presents an algebraic theory with these features. The algebra of previous work, e.g. [BCS07], fails for such situations. The reason is that its reduction axiom, responsible for changing the uncertainty after an action, is only geared towards epistemic actions and is not powerful enough for fact-changing actions. It requires that the uncertainty about (possible states of) a location after an action be included in the result of applying the action to the uncertainty about the location beforehand, a property similar to perfect recall in protocol models of [HM84, HM90]. This fails here, since after performing an a at s1 one ends up in s4, hence the uncertainty about s1 after an a is the same as the uncertainty about s4, consisting of s3 and s4. But performing a on the set of uncertainties about s1, consisting of s1 and s2, results in both s4 and s1. However, {s4, s1} is not included in {s3, s4}. Moreover, after the robot has moved to s4, it can conclude that it was in s1 before moving; the language of [BCS07] simply cannot express these past-tense properties.

Finally, regarding related work, dynamic epistemic logic has been extended with assignments and post-conditions, e.g. see [vanDit05], to be able to reason about learning after fact-changing actions. Although the protocols we are interested in can be modeled in the relational models of [vanDit05] (these being transition systems with uncertainty as well as action transitions), the reduction axiom thereof cannot derive the knowledge properties we are interested in. This may be because their approach has different kinds of fact-changing actions in mind, e.g. ones that change the status of a child in the muddy children puzzle from dirty to clean via washing (and not our location-changing actions). Nevertheless, they do not discuss or specify what kind of actions their reduction axiom targets. So there is indeed a gap in modeling and reasoning about the protocols we deal with here. Also, since we use converse actions, there might be connections to a DEL with converse actions, e.g. see [Auch07]. However, a preliminary study seems to indicate that our reduction axiom is still very different from the one developed there. A further exploration of these connections constitutes future work.

We develop an algebraic setting to formalize information acquisition from such navigation protocols.
– We study special cases of the past and future deterministic action and converse action operations of the algebra and prove some of their axiomatic properties. We use these to establish connections with temporal algebras of von Karger [vK98], applied to model program evolution.
– We apply our algebra to model a grid and a map-based navigation protocol and use the axioms to prove that the agent learns where he is and was after moving about. Further applications of our setting are to AI, mobile communication, security, and control theory.
– We show that our algebraic structure generalizes that of previous work [BCS07], by proving that the latter faithfully embeds in ours. Hence our setting is also strong enough to reason about learning as a result of communication actions.
2 The Algebra of Di-systems

We need to model "actions" and "formulas". The actions are modelled by a quantale while the propositions are a module over the quantale; i.e. actions modify propositions.

Definition 1. A quantale (Q, ⋁, •, 1) is a sup-lattice equipped with a unital monoid structure satisfying q • ⋁_i q_i = ⋁_i (q • q_i) and (⋁_i q_i) • q = ⋁_i (q_i • q).

Instead of an arbitrary sup-lattice we take it to be a completely distributive prime-algebraic lattice. Recall that a prime element, or simply "prime", p in a lattice has the property that for any x, y in the lattice, p ≤ x ∨ y implies that p ≤ x or p ≤ y; "prime algebraic" means that every element is the supremum of the primes below it. The restriction to prime algebraic lattices is not a serious restriction for the logical applications that we are considering; it would be a restriction for extensions to probabilistic systems; we will address such issues in future work. The use of algebraicity is to be able to use simple set-theoretic arguments via the representation theorem for such lattices [Win09]. For finite distributive lattices it is not a restriction at all because of Birkhoff's classical representation theorem. Henceforth, we will not explicitly state that we are working with (completely) distributive prime-algebraic lattices.

Definition 2. A right-module over Q is a sup-lattice M with an action of Q on M, − · − : M × Q −→ M, satisfying

(m · q) · q′ = m · (q • q′)    m · ⋁_i q_i = ⋁_i (m · q_i)    (⋁_i m_i) · q = ⋁_i (m_i · q)    m · 1 = m
We call the collection of actions and propositions a system.

Definition 3. A system is a pair consisting of a quantale Q and a right-module M over Q. We write (M, Q, ·) for a system.

This is closely related to the definition of Abramsky and Vickers, who have also argued for the application to Computer Science of quantales of actions; see [AV93]. As is usually done, we interpret elements of the module as propositions and the order as entailment; thus m ∨ m′ is the logical disjunction and ⊥ is the falsum. The elements of the quantale are interpreted as actions and the order is the order of non-determinism; thus q ∨ q′ is the non-deterministic choice and ⊥ is crash, the monoid multiplication q • q′ is sequential composition, and its unit 1 is the action that does nothing.
Example 1. Consider the following transition system:

[Transition diagram: x --a--> z1, x --a--> z2, y --a--> z2]
We model it as a system (L(S), M(A∗), ·), where A∗ is the free monoid generated from the set A = {a}, with the multiplication being juxtaposition and its unit the empty string. M(A∗) is the quantale generated on that monoid and L(S) is the sup-lattice generated from the set S = {x, y, z1, z2}. The most concrete examples of L(S) and M(A∗) are P(S) and P(A∗). The action on atoms is given by x · a = z1 ∨ z2 and y · a = z2, whereas z1 · a = z2 · a = ⊥. This is extended to juxtaposition and choice (subsets of actions), as well as to subsets of states, pointwise.

Example 2. The powerset P(S) of a set S is the right module of the quantale of all the relations thereon, P(S × S). Relational composition is the monoid multiplication, the diagonal relation is its unit, and the join is set union. The action is the pointwise image of the relation, i.e. for W ⊆ S and R ⊆ S × S,

W · R = ⋃_{w∈W} R[w] = {z ∈ S | ∃w ∈ W, (w, z) ∈ R}
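As a small concrete check of Example 2, the relational action can be computed directly on finite sets. A sketch in Haskell (the encoding and names are ours):

import Data.List (nub)

type Rel a = [(a, a)]

actR :: Eq a => [a] -> Rel a -> [a]   -- W · R, the pointwise relational image
actR w r = nub [ z | (x, z) <- r, x `elem` w ]
-- with R = [("x","z1"),("x","z2"),("y","z2")] (Example 1's a-transitions),
-- actR ["x"] R == ["z1","z2"] and actR ["y"] R == ["z2"]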
Since the action preserves all the joins of its module, the map − · q : M −→ M, obtained by fixing the quantale argument, has a Galois right adjoint that preserves all the meets. This is denoted by − · q ⊣ [q]− and defined in the canonical way, as follows:

[q]m := ⋁ {m′ | m′ · q ≤ m}
The right adjoints stand for the "dynamic modality" of Hoare logic, encoding the "weakest preconditions" of programs. Each [q]m is read as "after doing action q, or running program q, proposition m holds". This is, in effect, the join of all the propositions that should be true at the input of q such that at its output m holds. One gets very nice logical properties relating the action and its adjoint to each other and to the ∨ and ∧ operators of the lattice and their units ⊥ and ⊤. Some examples are as follows:

Proposition 1. The following inequalities hold in any system (M, Q, ·):
(1) ([q]m) · q ≤ m
(2) m ≤ [q](m · q)
(3) (m ∧ m′) · q ≤ m · q ∧ m′ · q
(4) m · (q ∧ q′) ≤ m · q ∧ m · q′
(5) [q](m ∨ m′) ≥ [q]m ∨ [q]m′
(6) q ≤ q′ =⇒ [q′]m ≤ [q]m
(7) [⊥]m = ⊤
(8) [q ∨ q′]m = [q]m ∧ [q′]m
(9) [q ∨ q′]m ≤ [q]m ∨ [q′]m
(10) [q ∧ q′]m ≥ [q]m ∨ [q′]m
(11) [q ∧ q′]m ≥ [q]m ∧ [q′]m
(12) [⋁_i q_i]m = ⋀_i [q_i]m
Proof. The proofs are easy but pedagogical; for reasons of space we do not give them here and refer the reader to the full version of the paper [PS10].
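The adjoint [q]m can likewise be computed over a finite ambient state set: it is the largest subset whose image under the action lands in m, i.e. the weakest precondition. Reusing Rel and actR from the sketch above (again, names ours):

box :: Eq a => [a] -> Rel a -> [a] -> [a]   -- [R]m over the ambient state set
box states r m = [ s | s <- states, all (`elem` m) (actR [s] r) ]
-- box ["x","y","z1","z2"] R ["z2"] == ["y","z1","z2"]: y satisfies [a]z2, and
-- the states with no a-transition satisfy it vacuously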
Definition 4. A sup-lattice M is a right di-module of the quantale Q whenever there are two right actions − · − : M × Q −→ M and − × − : M × Q −→ M. We call the pair of a quantale and its di-module (M, Q, ·, ×) a di-system.

Definition 5. Whenever the two actions – written · and ·c for the purposes of this definition – of a di-system are related by the following three axioms,

(i) m · q ≤ m′ =⇒ m ≤ m′ ·c q, whenever m · q ≠ ⊥
(ii) m ·c q ≤ m′ =⇒ m ≤ m′ · q, whenever m ·c q ≠ ⊥
(iii) m ·c (q • q′) = (m ·c q′) ·c q

then we refer to the di-system as a converse di-system and denote it by (M, Q, ·, ·c).

Proposition 2. A converse di-system satisfies

m ≤ (m · q) ·c q, whenever m · q ≠ ⊥
m ≤ (m ·c q) · q, whenever m ·c q ≠ ⊥
Definition 6. A converse di-system is past-deterministic iff m ≤ m′ · q =⇒ m ·c q ≤ m′, for m′ · q ≠ ⊥. It is future-deterministic iff m ≤ m′ ·c q =⇒ m · q ≤ m′, for m′ ·c q ≠ ⊥.

Proposition 3. In a past-deterministic converse di-system we have m ≤ m′ · q ⇐⇒ m ·c q ≤ m′ for m′ · q, m ·c q ≠ ⊥. In a future-deterministic converse di-system we have m ≤ m′ ·c q ⇐⇒ m · q ≤ m′ for m′ ·c q, m · q ≠ ⊥.

Example 3. Consider the transition system of Example 1; this is moreover an example of a converse di-system (L(S), M(A∗), ·, ·c), where the converse action is given by z1 ·c a = x and z2 ·c a = x ∨ y. It is easy to check that these satisfy the inequalities of Definition 5, but not their converses: the transition system is neither past-deterministic nor future-deterministic. A counterexample for the converse of part (i) is x ≤ z2 ·c a but x · a ≰ z2. If we eliminate the leftmost edge, then the system becomes future-deterministic and the converse of (i) holds. A counterexample for the converse of part (ii) is z2 ≤ y · a but z2 ·c a ≰ y. If we eliminate the rightmost edge, then the system becomes past-deterministic and the converse of (ii) holds.

Example 4. The transition system of the introduction is a future-deterministic converse di-system, in the same way as the above example, where S = {s1, . . . , s5} and A = {a, b, c}. It is not past-deterministic, since s3 ·c b = s1 ∨ s2 and also s5 ·c c = s3 ∨ s4.

Example 5. Consider the setting of Example 2; this is also an example of a converse di-system, where the converse action is the pointwise image of the converse relation, i.e. for W ⊆ S and Rc ⊆ S × S the converse of R, we have

W ·c R = ⋃_{w∈W} Rc[w] = {z ∈ S | ∃w ∈ W, (w, z) ∈ Rc}

It is easy to see that W ·c R = W · Rc. If Rc is a singleton then this di-system becomes a past-deterministic one; if R is a singleton, it becomes future-deterministic.
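Extending the same sketch, the converse action is the image under the converse relation; on Example 3's transition system it reproduces z1 ·c a = x and z2 ·c a = x ∨ y:

coactR :: Eq a => [a] -> Rel a -> [a]   -- W ·c R = W · R^c
coactR w r = nub [ x | (x, z) <- r, z `elem` w ]
-- coactR ["z1"] R == ["x"] and coactR ["z2"] R == ["x","y"]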
The converse action preserves all the joins of the module; thus, similar to the action, it has a Galois right adjoint, denoted by − ·c q ⊣ [q]c− and defined in the canonical way. Similar to [q]m, we read [q]c m as "before doing action q, proposition m held". We end this section by proving some logical properties that relate the action and its converse to their adjoints. These are of particular interest, since it turns out that in the presence of a Boolean negation on the module, the de Morgan dual of the right adjoint to the action is the converse action, and the de Morgan dual of the right adjoint to the converse action is the action. In other words, − · q and [q]c− are de Morgan duals, and so are − ·c q and [q]−. Our modules need not necessarily be Boolean; nevertheless, these connections can be expressed using the following properties, which axiomatize de Morgan duality in the absence of negation.

Proposition 4. In any converse di-system we have [q](l ∨ l′) ≤ [q]l ∨ l′ ·c q and [q]c(l ∨ l′) ≤ [q]c l ∨ l′ · q. If it is future-deterministic, we also have l ·c q ∧ [q]l′ ≤ (l ∧ l′) ·c q. If it is past-deterministic, we also have l · q ∧ [q]c l′ ≤ (l ∧ l′) · q.

Proposition 5. If the module of a past and future deterministic converse di-system is a Boolean algebra with negation operator ¬− : M −→ M, we have m · q = ¬[q]c¬m and m ·c q = ¬[q]¬m.

For details of this, we refer the reader to [PS10]. We have also defined a Kleene star for iteration and shown that it preserves the adjunctions. In the Boolean setting of von Karger [vK98], these iteration operators model modalities of temporal logic.
3 Navigation Di-systems

To distinguish the "potential" actions that happen in the model used by the agent, for example, actions described by a map, from the "real" actions that take place in the real world, we use a formalism in which there is a family of systems indexed by action sequences representing how the model is modified as actions occur (this kind of construction is an example of what is called a presheaf, but it will not be necessary for the reader to know these concepts in advance). In this section, we take the concrete view of Example 1, since the protocols that we are interested in modeling take place in navigation systems that have a set of locations or states and a set of actions. These states give rise to the module on which the quantale of actions acts. Let A be a set of actions and let Q be the free quantale generated by A, i.e. P(A∗), where A∗ is, as usual, the free monoid generated by A, in other words the set of sequences of actions. We write α for a typical element of A∗ and we write αa for the sequence α with a appended. We let S be a set of states and we define M to be P(S), the powerset of S. With this choice of M and Q we assume that we have a di-system (M, Q, ·, ·c).

Definition 7. A navigation pre di-system is a di-system (M, Q, ·, ·c) together with a second action of Q and its converse, ((M, Q, ·, ·c), Q, ⋆, ⋆c), constructed freely. In other words, s ⋆ a is an element of the module and (s ⋆ a) ⋆ b can be regarded as s ⋆ ab and, in general, (s ⋆ α) ⋆ a = s ⋆ αa. Finally, in order to describe the action of a general member of Q we lift the action to sets pointwise.
Potential and real actions have the same labels and both live in the quantale Q. Potential actions change the state of the map via the actions · and ·c; real actions change the state of the world via the actions ⋆ and ⋆c. The reason potential and real actions are distinguished from one another is that their targets have different uncertainties. For example, consider the scenario of the introduction, modeled as a converse di-system in Example 4. There, the uncertainty of s1 · a is s3 ∨ s4, whereas the uncertainty of s1 ⋆ a is only s4. So the real actions have an extra significance: they also change the uncertainty of the states. To encode the uncertainties, we use lax endomorphisms of the system. The reason these are called lax is that we require them to satisfy inequality axioms rather than equalities. These axioms encode the change of uncertainty; the reason they are inequalities has been motivated in [Sad06]. In a nutshell, this is done in order to be able to encode the process of learning as a decrease in the uncertainty, hence an increase in information.

Definition 8. A lax endomorphism u of a navigation pre di-system consists of a pair of endomorphisms u = (uM : M −→ M, uQ : Q −→ Q), where uM preserves joins of M and uQ preserves joins of Q; moreover, we have

(1) uM(m ⋆ q) ≤ ⋁ {m′ ∈ M | m′ ≤ uM(m · q), m′ ·c uQ(q) ≠ ⊥}
(2) uQ(q • q′) ≤ uQ(q) • uQ(q′)
(3) 1 ≤ uQ(1)

We read uM(m) as the uncertainty about proposition m: the join of all propositions that are possibly true when in reality m is true. For example, uM(m) = m ∨ m′ says that in reality m is true, but the agent considers it possible that either m or m′ might be true. Similarly, we read uQ(q) as the uncertainty about action q: the join of all actions that are possibly happening when in reality action q is happening. For example, uQ(q) = q ∨ q′ says that in reality action q is happening but the agent considers it possible that either q or q′ is happening. Putting it all together, we define:

Definition 9. A navigation di-system (Nav-diSys) is a navigation pre di-system ((M, Q, ·, ·c), Q, ⋆, ⋆c) endowed with a di-system lax endomorphism u = (uM, uQ).

The real action − ⋆ q changes the uncertainty of a proposition m via inequality (1) of Definition 8. We refer to it as the uncertainty reduction axiom. The intuition behind it is as follows: when one does actions in reality, they change our uncertainty. In navigation systems this change is as follows: the uncertainty after performing an action in reality, uM(m ⋆ q), is the uncertainty of performing a potential action according to the description of the system, i.e. uM(m · q), minus the choices that could not have been reached via a q action (according to the description). For example, uM(m · q) can be a choice of m′ ∨ m′′ where it is not possible to reach m′′ via a q action, i.e. m′′ ·c q = ⊥. Hence m′′ is removed from the choices in uM(m ⋆ q), so uM(m ⋆ q) = m′. The other two inequalities are for coherence of uncertainty with regard to composition; the motivations for these are as in [BCS07].

Example 6. The transition system of Example 4 is modeled in the following Nav-diSys:

((P(Σ), P(A∗), ·, ·c), P(A∗), ⋆, ⋆c, u)
Here, Σ is obtained by closing the set of states S under product with A, i.e. Σ := ⋃_i S × A^i. So it contains states s ∈ S, pairs of states and actions (s, a) ∈ S × A, pairs of pairs of states and actions ((s, a), b) ∈ (S × A) × A, and so on. The potential action on states s · a is given by the transitions. This is extended to pairs by consecutive application of the action, i.e. (s, a) · b is given by (s · a) · b. The pairs encode the real actions, i.e. s ⋆ a := (s, a), (s ⋆ a) ⋆ b := ((s, a), b), . . . for the atoms, and this is extended to all the other elements pointwise, e.g. s ⋆ (a ∨ b) := (s ⋆ a) ∨ (s ⋆ b) and s ⋆ (a • b) := ((s, a), b). Since real actions cannot be reversed, their corresponding converse action is taken to be the same as the converse of the potential action, i.e. for all actions a and states s, we have that s ⋆c a = s ·c a. The converse of the real action, ⋆c, is introduced for reasons of symmetry with the real action, so that we can uniformly use their right adjoints to express the logical properties "after" and "before". The lax di-system endomorphism on the module, uM, is determined by indistinguishability of states as follows: s, s′ are indistinguishable iff the same actions a can be performed on them. In formal terms,

uM(s) := ⋁ {s′ ∈ M | ∀a ∈ A, s · a = ⊥ iff s′ · a = ⊥}

The uM of the states updated by potential actions is the uM of the image, i.e. for the transition system of the introduction, we have uM(s1 · a) = uM(s4) = s3 ∨ s4. The uM of states updated by the real action is determined by inequality (1) of Definition 8, e.g. uM(s1 ⋆ a) = uM(s1, a) ≤ s4. The uncertainties of actions, i.e. uQ, can be set similarly to those of states: by indistinguishability under application to states. Since for our navigation applications these do not play a crucial role, we assume them to be the identity, i.e. uQ(q) = q for all q ∈ P(A∗). Finally, recall that since each projection of u is join preserving, it has a Galois right adjoint; we focus on the right adjoint of uM, which we denote by the epistemic modality 2. This is canonically defined as follows:

2m := ⋁ {m′ ∈ M | uM(m′) ≤ m}

We read 2m as 'according to the information available, m holds in reality'. Alternatively, one can use the belief modality of doxastic logic and read it as 'it is believed that, or the (sole) agent believes that, m holds in reality'. Putting these modalities together with the dynamic ones, we can express properties such as [q]2m, read as "after action q the agent believes that m holds", and [q]2[q]c m, read as "after action q the agent believes that before action q proposition m held", and so on.
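The uncertainty-reduction update can be simulated directly on the running example. The sketch below is in Haskell; the encoding is ours, and the transition list is our reconstruction of the introduction's diagram from the facts stated in the text. It computes uM from indistinguishability and then filters uM(m · q) by converse applicability, as in inequality (1):

import Data.List (nub, sort)

states :: [String]
states = ["s1", "s2", "s3", "s4", "s5"]

-- map transitions (source, action, target), reconstructed from the text
trans :: [(String, String, String)]
trans = [ ("s1","a","s4"), ("s2","a","s1")
        , ("s1","b","s3"), ("s2","b","s3")
        , ("s3","c","s5"), ("s4","c","s5") ]

actM, coactM :: [String] -> String -> [String]
actM   ws a = nub [ t | (s, x, t) <- trans, x == a, s `elem` ws ]   -- m · q
coactM ws a = nub [ s | (s, x, t) <- trans, x == a, t `elem` ws ]   -- m ·c q

uM :: String -> [String]   -- indistinguishability: same applicable actions
uM s = [ s' | s' <- states, enabled s' == enabled s ]
  where enabled t = sort (nub [ x | (src, x, _) <- trans, src == t ])

uAfter :: String -> String -> [String]   -- u(s ⋆ a), by uncertainty reduction
uAfter s a = nub [ s' | t <- actM [s] a, s' <- uM t
                      , not (null (coactM [s'] a)) ]
-- uAfter "s1" "a" == ["s4"]: moving from s1 by a leaves no doubt, exactly as
-- in Proposition 6 below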
4 Applications to Navigation

4.1 Map-Based Navigation

In the protocols of this section, the agent has a map of a place; it moves accordingly to be able to find out where it is (i.e. if it already knows where it is, there is no reason to move). Consider the navigation protocol of the introduction: we encode it in a Nav-diSys with the set of locations S = {s1, s2, s3, s4, s5} and the set of actions Ac = {a, b, c}, and show that after doing an a action on s1, the robot knows where it is and where it was before moving.
Proposition 6. The following hold in a Nav-diSys N based on the above data:

s1 ≤ [a]2s4 and s1 ≤ [a]2[a]c s1
Proof. Consider the first one: by the adjunction − ⋆ a ⊣ [a]−, it is equivalent to s1 ⋆ a ≤ 2s4. By the adjunction uM ⊣ 2, this is equivalent to uM(s1 ⋆ a) ≤ s4. Now by the uncertainty reduction inequality, it is enough to show that

⋁ {si ∈ S | si ≤ uM(s1 · a), si ·c a ≠ ⊥} ≤ s4

Since s1 · a = s4 and uM(s4) = s3 ∨ s4, but s3 ·c a = ⊥ whereas s4 ·c a ≠ ⊥, the lhs of the above is equal to s4, which is ≤ s4. Consider the second inequality: it becomes equivalent to uM(s1 ⋆ a) ⋆c a ≤ s1 by a series of three unfoldings of adjunctions. We have shown that uM(s1 ⋆ a) ≤ s4, so it suffices to show s4 ⋆c a ≤ s1, which is true since s4 ⋆c a = s4 ·c a = s1 ≤ s1.

For an example of a protocol based on the partial map of a city, see [PS10].

4.2 Staircase Navigation

Navigating on the staircase is one of the simplest cases of robot navigation: if the robot is anywhere except for the first and last floor, it does not know where it is. But if it moves to any of these locations, it learns where it is and was before moving. We model the n-floor staircase as n locations S = {f1, . . . , fn}. The atomic actions available to the robot are Ac = {up, down}.

[Transition diagram: f1 <--up/down--> f2 <--up/down--> · · · <--up/down--> fn−1 <--up/down--> fn, with up moving right and down moving left]
The floors f2 to fn−1 are indistinguishable from one another, i.e. for 1 < i < n, we have

uM(fi) = ⋁_{1<x<n} fx

Proposition 7. The following hold in a Nav-diSys N based on the above data, for 1 < k < n:

fk ≤ [up^{n−k}]2[up^{n−k}]c fk,  fk ≤ [dn^{k−1}]2[dn^{k−1}]c fk,  fk ≤ [dn^{k−1}]2[up^{k−1}]fk

where up^{n−k} and dn^{k−1} denote the sequences of n−k consecutive up actions and k−1 consecutive down actions, respectively.

For the proof see [PS10].
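One uncertainty-reduction step on the staircase can be simulated in the same hedged style as the earlier sketches (the encoding is ours); it shows that a step in the middle of the staircase teaches the robot nothing, while a step onto an end floor collapses its uncertainty:

actS, coactS :: Int -> String -> Int -> [Int]   -- floors 1..n
actS   n "up"   f = [ f + 1 | f < n ]           -- f · up
actS   n "down" f = [ f - 1 | f > 1 ]           -- f · down
actS   _ _      _ = []
coactS n "up"   f = [ f - 1 | f > 1 ]           -- f ·c up
coactS n "down" f = [ f + 1 | f < n ]           -- f ·c down
coactS _ _      _ = []

uS :: Int -> Int -> [Int]                       -- middle floors look alike
uS n f | f == 1 || f == n = [f]
       | otherwise        = [2 .. n - 1]

uAfterS :: Int -> String -> Int -> [Int]        -- u(f ⋆ a), one real step
uAfterS n a f = [ f' | t <- actS n a f, f' <- uS n t
                     , not (null (coactS n a f')) ]
-- uAfterS 5 "up" 4 == [5]: the robot now knows it is on the top floor;
-- uAfterS 5 "up" 2 == [2,3,4]: one step in the middle teaches it nothing,
-- matching Proposition 7's use of n-k consecutive up moves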
4.3 Grid Navigation

A more complex robot navigation protocol happens on the grid: a robot is in a grid with n rows and m columns; it can go up, down, left, and right and is supposed to move
about and find out where it is. The grid cells look alike to it as long as it can do the same movements in them; hence it knows where it is iff it ends up in one of the four corner cells. We model this protocol in a Nav-diSys and show that no matter where the robot is, there is always some sequence of movements it can do to get it to one of the corners. After doing either of these it learns where it is and where it was beforehand. Each grid cell is modeled by a state sij in the i'th row and j'th column. The uncertainty of the corner states s11, s1m, sn1, snm is the identity, i.e.

uM(s11) = s11, uM(s1m) = s1m, uM(sn1) = sn1, uM(snm) = snm

For the non-corner cells of the first row and the first column, we have

uM(s1j) = ⋁_{1<y<m} s1y,  uM(si1) = ⋁_{1<x<n} sx1

For the non-corner cells of the last row n and the last column m, we have

uM(snj) = ⋁_{1<y<m} sny,  uM(sim) = ⋁_{1<x<n} sxm

For the rest of the cells we have

uM(sij) = ⋁_{1<x<n, 1<y<m} sxy

The set of actions is Ac = {u, d, l, r}; their non-applicability is as follows:

s1j · u = s1j ·c d = si1 · l = si1 ·c r = snj · d = snj ·c u = sim · r = sim ·c l = ⊥

All the other actions are applicable in all the other states.

Proposition 8. The following hold in a Nav-diSys N based on the above data:

sij ≤ [α]2(s11 ∨ s1m ∨ sn1 ∨ snm),  sij ≤ [α]2[α]c sij

for 1 < i < n, 1 < j < m and α any of the following choices of sequences of movements:

(u^{i−1} ∨ d^{n−i}) • (l^{j−1} ∨ r^{m−j}) ∨ (l^{j−1} ∨ r^{m−j}) • (u^{i−1} ∨ d^{n−i})

For the proof see [PS10].
5 Embedding Epistemic Systems

An algebraic semantics for information learning from communication has been presented in previous work [BCS07], referred to as epistemic systems. In this section we make the connection between epistemic systems and Nav-diSys formal.

Definition 10. A (mono-modal) epistemic system (M, Q, − ⊗ −, f) as defined in [BCS07] is a quantale Q acting on its right module M via the action − ⊗ − : M × Q −→ M, where f = (fM : M −→ M, fQ : Q −→ Q) is a lax system endomorphism of the setting satisfying the following three inequalities:
(1) fM(m ⊗ q) ≤ fM(m) ⊗ fQ(q)
(2) fQ(q • q′) ≤ fQ(q) • fQ(q′)
(3) 1 ≤ fQ(1)
=
(M σ , Qσ , − ⊗ −, f )Φ
obtained by setting M σ to M , Qσ to Q, f to u, Φ to At(M ), and m ⊗ q to m q, is an atomic epistemic system. Proof. We need to show that N σ satisfies the appearance-update axiom. We do so by deriving it from the uncertainty reduction axiom of N . In an atomic setting the uncertainty reduction axiom becomes equivalent to the following (I) uM (m q) ≤ {si ∈ At(M ) | si ≤ uM (m · q), si ·c uQ (q) = ⊥} In the atomic epistemic system N σ , the appearance-update axiom becomes equivalent to the following (II) uM (m q) ≤ {tj ∈ At(M ) | tj ≤ uM (m), tj uQ (q) = ⊥} This is a result of atoms becoming facts, that is since tj ∈ Φ we obtain tj ⊗ uQ (q) ≤ tj . We show (I) ≤ (II). Take si ≤ (I), that is si ≤ uM (m · q) where si ·c uQ (q) = ⊥. 2
Concrete systems that arise from applications have this property.
We analyze u_M(m · q) by analyzing m · q, which is the same as m ⊗ q in N^σ, and is thus equivalent to

  m · q = ⋁ {w_k ∈ At(M) | w_k ≤ m, w_k · q ≠ ⊥}

From this, by monotonicity of u_M, we obtain

  u_M(m · q) = ⋁ {u_M(w_k) | w_k ∈ At(M), w_k ≤ m, w_k · q ≠ ⊥}

From the above and s_i ≤ u_M(m · q) in N^σ, we obtain that s_i ≤ u_M(w_k) for some w_k with w_k · q ≠ ⊥. Since w_k ≤ m, we have u_M(w_k) ≤ u_M(m), thus s_i ≤ u_M(m). Since w_k · q ≠ ⊥ and, by weak reflexivity, w_k ≤ u_M(w_k) and q ≤ u_Q(q), we have w_k · q ≤ u_M(w_k) · u_Q(q); hence u_M(w_k) · u_Q(q) ≠ ⊥, and therefore s_i ≤ (II).

Weak reflexive and transitive Nav-diSys's and epistemic systems form a pair of categories, with the morphisms of each being its corresponding lax endomorphisms. In this setting, the above construction becomes a forgetful functor from the latter to the former, most likely having a right adjoint.
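To make the atomic setting concrete, the inequalities of Definition 10 can be checked by brute force on small finite instances. The following Haskell sketch is a toy example of ours (not taken from [BCS07]): the module elements are sets of four states, the action is a deterministic successor relation, the uncertainty of an atom confuses states of equal parity, and f^Q is taken to be the identity; the program verifies the appearance-update inequality (1) over all module elements.

  import Data.List (nub, sort, subsequences)

  type St = Int

  states :: [St]
  states = [0 .. 3]

  -- one atomic action q, as a (here deterministic) transition relation
  step :: St -> [St]
  step s = [(s + 1) `mod` 4]

  -- uncertainty (appearance) of an atom: the agent only observes parity
  app :: St -> [St]
  app s = [t | t <- states, even t == even s]

  norm :: [St] -> [St]
  norm = sort . nub

  -- module elements are sets of atoms; update by q is the relational image
  act :: [St] -> [St]
  act = norm . concatMap step

  -- f^M: pointwise appearance, joined
  fM :: [St] -> [St]
  fM = norm . concatMap app

  leq :: [St] -> [St] -> Bool
  leq m m' = all (`elem` m') m

  -- inequality (1) with f^Q(q) = q:  f^M(m ⊗ q) ≤ f^M(m) ⊗ f^Q(q)
  main :: IO ()
  main = print (and [fM (act m) `leq` act (fM m) | m <- subsequences states])

Other choices of step and app can be plugged in to explore when inequality (1) fails.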
6 Conclusions and Future Work

We have developed an algebraic framework for dynamic epistemic logic in which the dynamic and epistemic modalities appear as right adjoints. The key new feature of the present work relative to previous work [Sad06, BCS07] is the presence of converse actions and the algebraic laws that govern uncertainty reduction. Robot navigation protocols, as well as the three-player game in Phillips's thesis [Phi09], give examples in which the old learning inequality was violated, showing that new subtleties arise when there are actions that really change the state of the world. A number of directions for future work naturally suggest themselves. On the purely theoretical side, we would like to relate Boolean converse di-systems to Kleene algebras with tests and converse. To develop a logic for Nav-diSys, we first need to develop a logic for the algebra of di-systems. The latter should be similar to the positive fragment of Propositional Dynamic Logic with converse [Par78]. It seems routine to add epistemic modalities to this; the challenge would be to come up with a logical form of the uncertainty reduction axiom. Establishing closer connections to other DEL logics [Auch07, vanDit05] is also worth investigating. We are also particularly interested in extending this work to apply to examples that involve security protocols, where "knowledge" and "learning" play evident roles. A fundamental extension, and one in which we have begun preliminary investigations, is the extension to the probabilistic case. Here knowledge and information theory may well merge in an interesting and non-obvious way.
Acknowledgements We have benefited greatly from discussions with Caitlin Phillips and Doina Precup. The latter invented the three-player game and the former discovered the violation of the update inequality. This research was supported by EPSRC (UK) and NSERC (Canada) and the Office of Naval Research.
References

[AV93] Abramsky, S., Vickers, S.: Quantales, observational logic and process semantics. Mathematical Structures in Computer Science 3, 161–227 (1993)
[Auch07] Aucher, G., Herzig, A.: From DEL to EDL: Exploring the power of converse events. In: Mellouli, K. (ed.) ECSQARU 2007. LNCS (LNAI), vol. 4724, pp. 199–209. Springer, Heidelberg (2007)
[BCS07] Baltag, A., Coecke, B., Sadrzadeh, M.: Epistemic actions as resources. Journal of Logic and Computation 17, 555–585 (2007); arXiv:math/0608166
[DMS06] Desharnais, J., Möller, B., Struth, G.: Kleene algebra with domain. ACM Transactions on Computational Logic 7, 798–833 (2006)
[Dun05] Dunn, M.: Positive modal logic. Studia Logica 55, 301–317 (2005)
[FHMV95] Fagin, R., Halpern, J.Y., Moses, Y., Vardi, M.Y.: Reasoning About Knowledge. MIT Press, Cambridge (1995)
[vanDit05] van Ditmarsch, H.P., van der Hoek, W., Kooi, B.P.: Dynamic epistemic logic with assignment. In: Proceedings of AAMAS 2005, pp. 141–148 (2005)
[GNV05] Gehrke, M., Nagahashi, H., Venema, Y.: A Sahlqvist theorem for distributive modal logic. Annals of Pure and Applied Logic 131, 65–102 (2005)
[HKT00] Harel, D., Kozen, D., Tiuryn, J.: Dynamic Logic. MIT Press, Cambridge (2000)
[HM84] Halpern, J.Y., Moses, Y.: Knowledge and common knowledge in a distributed environment. In: Proceedings of the Third ACM Symposium on Principles of Distributed Computing, pp. 50–61 (1984); a revised version appears as IBM Research Report RJ 4421 (August 1987)
[HM90] Halpern, J.Y., Moses, Y.: Knowledge and common knowledge in a distributed environment. Journal of the ACM 37, 549–587 (1990)
[Kri63] Kripke, S.: Semantical analysis of modal logic. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 9, 67–96 (1963)
[PS10] Panangaden, P., Sadrzadeh, M.: Learning in a changing world via algebraic modal logic, http://www.comlab.ox.ac.uk/files/2815/mehrnoosh_prakash.pdf and http://www.cs.mcgill.ca/~prakash/Pubs/mehrnoosh_prakash.pdf
[Par78] Parikh, R.: The completeness of propositional dynamic logic. In: Winkowski, J. (ed.) MFCS 1978. LNCS, vol. 64, pp. 403–415. Springer, Heidelberg (1978)
[Phi09] Phillips, C.: An algebraic approach to dynamic epistemic logic. Master's thesis, School of Computer Science, McGill University (2009)
[Sad06] Sadrzadeh, M.: Actions and Resources in Epistemic Logic. PhD thesis, Université du Québec à Montréal (2006)
[SD09] Sadrzadeh, M., Dyckhoff, R.: Positive logic with adjoint modalities: Proof theory, semantics and reasoning about information. ENTCS 23, 211–225 (2009)
[vDvdHK08] van Ditmarsch, H., van der Hoek, W., Kooi, B.: Dynamic Epistemic Logic. Synthese Library, vol. 337. Springer, Heidelberg (2008)
[vK98] von Karger, B.: Temporal algebras. Mathematical Structures in Computer Science 8, 277–320 (1998)
[Win09] Winskel, G.: Prime algebraicity. Theoretical Computer Science 410, 4160–4168 (2009)
Matching Logic: An Alternative to Hoare/Floyd Logic

Grigore Roşu¹, Chucky Ellison¹, and Wolfram Schulte²

¹ University of Illinois at Urbana-Champaign
{grosu,celliso2}@illinois.edu
² Microsoft Research, Redmond
[email protected]
Abstract. This paper introduces matching logic, a novel framework for defining axiomatic semantics for programming languages, inspired by operational semantics. Matching logic specifications are particular first-order formulae with constrained algebraic structure, called patterns. Program configurations satisfy patterns iff they match their algebraic structure and satisfy their constraints. Using a simple imperative language (IMP), it is shown that a restricted use of the matching logic proof system is equivalent to IMP's Hoare logic proof system, in that any proof derived using either can be turned into a proof using the other. Extensions to IMP including a heap with dynamic memory allocation and pointer arithmetic are given, requiring no extension of the underlying first-order logic; moreover, heap patterns such as lists, trees, queues, graphs, etc., are given algebraically using first-order constraints over patterns.
⋆ Supported in part by NSF grants CCF-0916893, CNS-0720512, and CCF-0448501, by NASA contract NNL08AA23C, by a Samsung SAIT grant, and by several Microsoft gifts.

1 Introduction

Hoare logic, often identified with axiomatic semantics, was proposed more than forty years ago [15] as a rigorous means to reason about program correctness. A Hoare logic for a language is given as a formal system deriving Hoare triples of the form {ϕ} s {ϕ′}, where s is a statement and ϕ and ϕ′ are state properties expressed as logical formulae, called precondition and postcondition, respectively. Most of the rules in a Hoare logic proof system are language-specific. Programs and state properties in Hoare logic are connected by means of program variables, in that properties ϕ, ϕ′ in Hoare triples {ϕ} s {ϕ′} can and typically do refer to program variables that appear in s. Moreover, Hoare logic assumes that the expression constructs of the programming language are also included in the logical formalism used for specifying properties, typically first-order logic (FOL). Hoare logic is deliberately abstract, in that it is not concerned with "low-level" operational aspects, such as how the program state is represented or how the program is executed. In spite of serving as the foundation for many program verification tools and frameworks, it is well known that it is difficult to specify and prove properties about the heap (i.e., dynamically allocated, shared mutable objects) in Hoare logic. In particular, local reasoning is difficult because of the difficulty of frame inference [23, 25, 29]. Also, it
is difficult to specify recursive predicates, because they raise both theoretical (consistency) and practical (hard to automatically reason with) concerns. Solutions are either ad hoc [18] or involve changing logics [21]. Finally, program verifiers based on Hoare logic often yield proofs which are hard to debug and understand, because these typically make heavy use of encoding (of the various program state components into a flat FOL formula) and follow a backwards verification approach (based on weakest precondition). Separation logic takes the heap as its central concern and attempts to address the limitations above by extending Hoare logic with special logical connectives, such as the separating conjunct "∗" [23, 25], allowing one to specify properties that hold in disjoint portions of the heap. The axiomatic semantics of heap constructs can be given in a forwards manner using separation connectives. While program verification based on separation logic is an elegant approach, it is unfortunate that one would need to extend the underlying logic, and in particular theorem provers, to address new language features. In an effort to overcome the limitations above and to narrow the gap between operational semantics (easy) and program verification (hard), we introduce matching logic, which is designed to be agnostic with respect to the underlying language configuration, as long as it can be expressed in a standard algebraic way. Matching logic is similar to Hoare logic in many aspects. Like Hoare logic, matching logic specifies program states as logical formulae and gives axiomatic semantics to programming languages in terms of pre- and post-conditions. Like Hoare logic, matching logic can generically be extended to a formal, syntax-oriented compositional proof system. However, unlike Hoare logic, matching logic specifications are not flattened to arbitrary FOL formulas. Instead, they are kept as symbolic configurations, or patterns, i.e., restricted FOL= (FOL with equality) formulae possibly containing free and bound (existentially quantified) variables. This allows the logic to stay the same for different languages—new symbols can simply be added to the signature, together with new axioms defining their behavior.

Matching Logic Patterns. Patterns (Sec. 3) can be defined on top of any algebraic specification of configurations. In this paper, the simple language IMP (Sec. 2) uses two-item configurations ⟨…⟩k ⟨…⟩env, where ⟨…⟩k holds a fragment of program and ⟨…⟩env holds an environment (map from program variables to values). The order of items in the configuration does not matter, i.e., the top ⟨…⟩ cell holds a set. We use the typewriter font for code and italic for logical variables. A possible configuration γ is:

  ⟨x := 1; y := 2⟩k ⟨x → 3, y → 3, z → 5⟩env

One pattern p that is matched by the above configuration γ is the FOL= formula:

  ∃a, ρ.((□ = ⟨x := 1; y := 2⟩k ⟨x → a, y → a, ρ⟩env) ∧ a ≥ 0)

where "□" is a placeholder for configurations (regarded as a special variable). To see that γ matches p, we replace □ with γ and prove the resulting FOL= formula (bind ρ to "z → 5" and a to 3). For uniformity, we prefer to use the notation (described in Sec. 3)

  ⟨x := 1; y := 2⟩k ⟨x → ?a, y → ?a, ?ρ⟩env ⟨?a ≥ 0⟩form
for patterns, which includes the pattern's constraints as an item ⟨…⟩form in its structure, at the same time eliminating the declaration of locally bound variables but tagging them with ? to distinguish them from other variables; we also allow free variables in patterns, acting as proof parameters (like in Hoare logic). It is this notation for patterns that inspired us to call their satisfaction matching, which justifies the name of our logic. Extending IMP with a heap gives us the language HIMP (Sec. 5), which uses configurations of the form ⟨…⟩k ⟨…⟩env ⟨…⟩mem; one possible HIMP configuration is:

  ⟨[x] := 5; z := [y]⟩k ⟨x → 2, y → 2⟩env ⟨2 → 7⟩mem

describing a program that will first assign 5 to the location pointed to by x, then assign the value pointed to by y to z. Note that pointers x and y are aliased here, so z will receive 5. One of many patterns that are matched by the above configuration is:

  ⟨[x] := 5; z := [y]⟩k ⟨x → ?a, y → ?a, ?ρ⟩env ⟨?a → ?v, ?σ⟩mem ⟨?a ≥ 0⟩form

The above pattern specifies all the configurations where x and y are allocated and aliased. Matching logic can therefore express separation at the term level instead of at the formula level; in particular, no disjointness predicate is needed (see Sec. 5 and 6). Note that the constraint ?a ≥ 0 is redundant in the pattern above, because one can infer it from the fact that the binding ?a → ?v appears in ⟨…⟩mem (the cell ⟨…⟩mem wraps a map structure whose domain is the non-negative integers). To simplify the presentation, in general we assume that the ⟨…⟩form cell appearing in a pattern includes all the current constraints of the pattern, i.e., not only those related to the program itself (like the ?a ≥ 0 above) but also those pertaining to or resulting from the background mathematical theories; in our implementation, for practical reasons we actually keep the two distinct and only derive consequences of the background theories (like the ?a ≥ 0 above) by need.
Assignment in Matching vs. Hoare/Floyd Logic. Both Hoare logic and matching logic are defined as language-specific proof systems, the former deriving triples {ϕ} s {ϕ′} as explained and the latter deriving pairs of patterns Γ ⇓ Γ′, with the intuition that if a program configuration γ (which includes the code) matches Γ, then the configuration γ′ obtained after completely reducing γ matches Γ′; we only discuss partial correctness in this paper. To highlight a fundamental difference between Hoare logic and matching logic, let us examine the assignment rules for each. The Hoare logic assignment rule, {ϕ[e/x]} x := e {ϕ}, or (HL-asgn) in Fig. 1, is perhaps the most representative rule of Hoare logic, showing how program variables and expressions are mixed in formulae: if the program state satisfies the formula ϕ[e/x] (i.e., ϕ in which all free occurrences of variable x are replaced by expression e) before the assignment statement "x := e", then the program state will satisfy ϕ after the execution of the assignment statement (IMP has side-effect-free expressions). In contrast, the matching logic rule for assignment,
  ⟨e⟩k C ⇓ ⟨v⟩k C   ⊢   ⟨x := e⟩k C ⇓ ⟨·⟩k C[x ← v]
or (ML-asgn) in Fig. 2, says that if e reduces to v in a program configuration matching pattern ⟨e⟩k C (for IMP, C only contains an environment item; however, we prefer to use a meta-variable C to increase the modularity of our proof system), then after executing the assignment "x := e" the configuration matches ⟨·⟩k C[x ← v], where the "·" stands for the empty or completed computation and where C[x ← v] is some operation in the algebraic data type of configurations which updates the binding of x to v in C's environment; in the case of IMP, for example, this latter operation can be defined using an equation ⟨ρ⟩env[x ← v] = ⟨ρ[v/x]⟩env (we assume the map update operation _[_/_] already defined as part of the mathematical background theory). The most apparent difference is the forward proof style used. The proof proceeds in the order of execution, and the environment is explicit (as an item in C), instead of encoded in a flat formula, like in Hoare logic. In fact, replacing the patterns with actual configurations that match them gives a rule that would not be out of place in standard big-step operational semantics. Even though the matching rule above is a forward rule, it is not the Floyd rule [10]:

  {ϕ} x := e {∃v. (x = e[v/x]) ∧ ϕ[v/x]}

The Floyd rule above is indeed forward, but the reader should notice that its use results in the addition of an existential quantifier. For a given program, this rule might result in the introduction of many quantifiers, which are often hard for verification tools to handle tractably. Looked at in this light, the matching logic rule offers the best of both worlds—the rule is both forwards and does not introduce quantifiers. Indeed, none of the matching logic proof rules introduce quantifiers. Working forward is arguably more "natural". Additionally, as discussed in [12], working forward has a number of other concrete advantages related to reverse engineering, testing, debugging, model-checking, etc.

Contributions. The main contribution of this paper is the formal introduction of matching logic. We show that for the simple imperative language IMP, a restricted use of the matching logic proof system is equivalent to a conventional Hoare logic proof system. The translation from Hoare to matching logic is generic and should work for any language, suggesting that any Hoare logic proof system admits an equivalent matching logic proof system. However, the other translation, going from matching logic to Hoare logic, appears to be language-specific because it relies on finding appropriate encodings of program configuration patterns into Hoare specifications; it is not clear that they always exist. Section 2 discusses preliminaries and IMP. Section 3 describes the details of matching logic and gives a matching logic semantics to IMP. Using IMP, Section 4 establishes the relationship between matching logic and Hoare logic. Section 5 shows how easily the concepts and techniques extend to a language with a heap. Finally, Sec. 6 discusses an example and our experience with a matching logic verifier implemented in Maude.
2 Preliminaries, the IMP Language, and Hoare Logic Here we introduce material necessary to understand the contributions and results of this paper. First we discuss background material, then the configuration and notations we use for IMP, and finally Hoare logic by means of IMP.
Preliminaries. We assume the reader is familiar with basic concepts of algebraic specification and first-order logic with equality. The role of this section is to establish our notation for concepts and notions used later in the paper. An algebraic signature (S, Σ) is a finite set of sorts S and a finite set of operation symbols Σ over sorts in S. For example, S may include sorts E for expressions and S for statements, and Σ may include operation symbols like "if(_)_else_ : E × S × S → S". Here we used the mixfix notation for operation symbols, where underscores are placeholders for arguments, which is equivalent to the context-free or BNF notation. Since the latter is more common for defining programming language syntax, we prefer it from here on; so we write "S ::= if (E) S else S". We write Σ instead of (S, Σ) when S is understood or irrelevant. We let TΣ denote the initial Σ-algebra of ground terms (i.e., terms without variables) and TΣ(X) denote the free Σ-algebra of terms with variables in X, where X is an S-indexed set of variables. We next briefly recall first-order logic with equality (FOL=). A first-order signature (S, Σ, Π) extends an algebraic signature (S, Σ) with a finite set of predicate symbols Π. FOL= formulae have the syntax

  ϕ ::= t = t′ | π(t̄) | ∃X.(ϕ) | ¬ϕ | ϕ1 ∧ ϕ2

plus the usual derived constructs ϕ1 ∨ ϕ2, ϕ1 ⇒ ϕ2, ∀X.(ϕ), etc., where t, t′ range over Σ-terms of the same sort, π ∈ Π over atomic predicates, t̄ over appropriate term tuples, and X over finite sets of variables. Σ-terms can have variables; all variables are chosen from a fixed sort-wise infinite S-indexed set of variables, Var. We adopt the notation ϕ[e/x] for the capture-free substitution of all free occurrences of variable x by term e in formula ϕ. A FOL= specification (S, Σ, Π, F) is a FOL= signature (S, Σ, Π) plus a set of closed (i.e., no free variables) formulae F. A FOL= model M is a Σ-algebra together with relations for the predicate symbols in Π. Given any closed formula ϕ and any model M, we write M |= ϕ iff M satisfies ϕ. If ϕ has free variables and ρ : Var ⇀ M is a partial mapping defined (at least) on all free variables in ϕ, also called an M-valuation of ϕ's free variables, we let ρ |= ϕ denote the fact that ρ satisfies ϕ. Note that ρ |= ϕ[e/x] iff ρ[ρ(e)/x] |= ϕ.

IMP. Here is the syntax of our simple imperative language IMP, using BNF:

  Nat — naturals; Nat+ — positive naturals; Int — integers
  PVar — identifiers, to be used as program variable names
  E ::= Int | PVar | E1 op E2
  S ::= skip | PVar := E | S1;S2 | if (E) S1 else S2 | while (E) S

Like C, IMP has no Boolean expressions (0 means "false" and anything ≠ 0 means "true"). To give a matching logic semantics to a language, one needs a rigorous definition of program configurations, like in operational semantics. Besides a syntactic term, a configuration may typically contain pieces of semantic data that can be defined as conventional algebraic data types, such as associative lists (e.g., for stacks, queues), sets (e.g., for resources held), maps (e.g., for environments, stores), etc. IMP's configurations are defined by the grammar:

  Cfg ::= Set[CfgItem]
  CfgItem ::= ⟨E | S⟩k | ⟨Env⟩env
  Env ::= Map[PVar, Int]

There is no standard notation for configurations. We use the generic notation from the K framework and rewrite logic semantics [20, 28], where configurations are defined as potentially nested cell terms ⟨contents⟩label (the label may
be omitted). The cell contents can be any data structure, including, e.g., sets of other cells. Here IMP's configurations are cells of the form ⟨…⟩k ⟨…⟩env over the syntax given above. How to define such cell configurations as algebraic data types and then use them to give operational semantics to languages is explained in detail in [20, 28]. The reader may have noticed that our formal notion of configuration above is more "generous" than one may want it to be. Indeed, the syntax above allows configurations with arbitrarily many ⟨…⟩k and ⟨…⟩env cells. While multiple ⟨…⟩k cells may make sense in the context of concurrent languages (where each cell ⟨…⟩k would represent a concurrent thread or process; see [28, 20]), having multiple environment cells at the same level with multiple computation cells may indeed be confusing. If one wants to disallow such undesirable configurations syntactically, then one can change the grammar above accordingly; for example, one can replace the first two productions with "Cfg ::= ⟨E | S⟩k ⟨Env⟩env", etc. However, as argued in [28], allowing configurations with arbitrarily many and untyped nested cells has several advantages that are hard to ignore, particularly with regards to the modularity and extendability of language definitions, so we (admittedly subjectively) prefer them in general. To ensure that bad configurations are never produced, one can check that the initial configuration is well-formed and that none of the subsequent rewrite rules changes the structure of the configuration. As an analogy, language designers often prefer to define language syntax grammars which are rather permissive, and then prove that all the semantic reduction/rewrite rules preserve a carefully chosen notion of well-definedness of the original program; this approach of not using the language syntax grammar to define well-definedness allows more flexibility in designing various type systems or abstractions over the language syntax. Configuration well-definedness "preservation" proofs would be rather easy, but we do not address them in this paper. We do not give IMP an operational semantics here, despite it being trivial, since we give it a Hoare logic semantics shortly, which suffices to later prove the soundness of our matching logic semantics as well as its relationship to Hoare logic (Thm. 1). In our companion report [27] we provide an operational semantics, together with direct proofs of soundness for the Hoare logic (Fig. 1) and matching logic (Fig. 2).

Hoare Logic Proof System for IMP. Fig. 1 gives IMP an axiomatic semantics as a Hoare logic proof system deriving (partial) correctness triples of the form {ϕ} s {ϕ′}, where ϕ and ϕ′ are FOL formulae called the pre-condition and the post-condition, respectively, and s is a statement; the intuition for {ϕ} s {ϕ′} is that if ϕ holds before s is executed, then ϕ′ holds whenever s terminates. In addition to the specific rules for IMP, there are generic Hoare logic rules, such as consequence or framing, that can be considered part of all Hoare logic proof systems.

  (HL-asgn)    ⊢ {ϕ[e/x]} x := e {ϕ}
  (HL-seq)     {ϕ1} s1 {ϕ2}, {ϕ2} s2 {ϕ3} ⊢ {ϕ1} s1;s2 {ϕ3}
  (HL-if)      {ϕ ∧ (e ≠ 0)} s1 {ϕ′}, {ϕ ∧ (e = 0)} s2 {ϕ′} ⊢ {ϕ} if (e) s1 else s2 {ϕ′}
  (HL-while)   {ϕ ∧ (e ≠ 0)} s {ϕ} ⊢ {ϕ} while (e) s {ϕ ∧ (e = 0)}

  Fig. 1. Hoare logic formal system for IMP
In Hoare logic, program states are mappings from variables to values, and are specified as FOL formulae—environment ρ "satisfies" ϕ iff ρ |= ϕ in FOL. Because of this, program variables are regarded as logical variables in specifications; moreover, because of the rule (HL-asgn), which may infuse program expression e into the pre-condition, specifications in Hoare logic actually extend program expressions. The signature of the underlying FOL is the subsignature of IMP including only the expressions E and their subsorts. Its universe of variables, Var, is PVar. Assume some background theory that can prove i1 op i2 = i1 opInt i2 for any i1, i2 ∈ Int, where opInt is the mathematical variant of the syntactic op; e.g., 7 +Int 2 = 9. Expressions e are allowed as formulae as syntactic sugar for ¬(e = 0). Here are two examples of correctness triples:

  {x > 0 ∧ z = old_z} z := x + z {z > old_z}
  {∃z.(x = 2 ∗ z + 1)} x := x ∗ x + x {∃z.(x = 2 ∗ z)}

The first sequent says that the new value of z is larger than the old value after the assignment, and the second says that if x is odd before the assignment then it is even after.
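To preview the difference developed in the next section, both assignment rules can be read semantically as the same state update, applied in opposite directions. The following Haskell sketch is our own illustration (the tiny expression language and all names are ours): wpAsgn reads (HL-asgn) backwards as a precondition transformer, while stepAsgn reads the matching logic assignment rule forwards as an explicit environment update.

  import qualified Data.Map as Map
  import Data.Map (Map)

  type Env = Map String Int

  data Exp = Lit Int | Var String | Add Exp Exp

  eval :: Env -> Exp -> Int
  eval _   (Lit i)   = i
  eval env (Var x)   = Map.findWithDefault 0 x env
  eval env (Add a b) = eval env a + eval env b

  -- Hoare, backwards: {phi[e/x]} x := e {phi}, read semantically
  wpAsgn :: String -> Exp -> (Env -> Bool) -> (Env -> Bool)
  wpAsgn x e phi = \env -> phi (Map.insert x (eval env e) env)

  -- matching logic, forwards: the environment cell is updated in place
  stepAsgn :: String -> Exp -> Env -> Env
  stepAsgn x e env = Map.insert x (eval env e) env

  main :: IO ()
  main = do
    let post env = Map.findWithDefault 0 "z" env > 0      -- postcondition z > 0
        pre      = wpAsgn "z" (Add (Var "x") (Var "z")) post
        env0     = Map.fromList [("x", 1), ("z", 0)]
    print (pre env0)                                      -- True, since x + z > 0 here
    print (stepAsgn "z" (Add (Var "x") (Var "z")) env0)   -- z becomes 1

Both functions perform the same update; the difference is whether the environment is encoded inside a formula or carried around explicitly, which is exactly the distinction the next section makes precise.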
3 Matching Logic

We now describe matching logic and use it to give an axiomatic semantics to IMP.

Notions and Notations. To define a matching logic proof system, one needs to start with a definition of configurations. Section 2 discussed IMP's configuration as an example of a two-subcell configuration, but one can have arbitrarily complex configurations. Let L = (ΣL, FL) be a FOL= specification defining the configurations of some language. ΣL is the signature of the language together with desired syntax for the semantic domains and data structures, such as numbers and operations on them, lists, sets, maps, etc., as in Sec. 2, while the formulae in FL capture the desired properties of the various semantic components in the configurations, such as those of the needed mathematical domains. We may call L = (ΣL, FL) the background theory, because the formulae in FL are by default assumed to hold in any context. Let Cfg be the sort of configurations in L. Let us fix a model TL of L, written more compactly T. We think of the elements of T as concrete data, and in particular of the elements of sort Cfg as concrete configurations. Even though it is convenient to think of configurations and data as ground ΣL-terms, in this paper we impose no initiality constraints on configurations or on their constituent components; in other words, the formulae in FL are all the FOL= properties that we rely on in subsequent proofs. If one needs properties of configuration components that can only be proved by induction (e.g., commutativity of addition, associativity of list append, etc.), then one is expected to add those properties as part of FL. In fact, we can theoretically assume that FL is not finite, not even enumerable. In our automated matching logic prover (see Sec. 6), FL consists of a finite set of equations, namely those that turned out to be useful in the experiments we have done so far (our current FL is open to change and improvement); therefore, in our case we can conveniently pick T to be the initial model of L, but that is not necessary. Let Var be a sort-wise infinite set of logical, or semantical, variables, and let "□" be a fresh (∉ Var) variable of sort Cfg.
While in provers it is useful to make a distinction between program variables and logical variables, there is no theoretical distinction between these in Hoare logic. By contrast, in matching logic these are completely different mathematical objects: the former are syntactic constants in ΣL, while the latter are logical variables in Var.

Definition 1. (Simple) matching logic specifications for L, called configuration patterns or more directly just patterns, are particular FOL= formulae over the configuration signature ΣL above of the form ∃X.((□ = c) ∧ ϕ), where:

– X ⊂ Var is the set of (pattern) bound variables; the remaining variables either in c or free in ϕ are the (pattern) free variables; "□" appears exactly once in a pattern;
– We call c the pattern structure; it is a term of sort Cfg that may contain logical variables in Var (bound or not); it may (and typically will) also contain program variables in PVar (e.g., in an environment cell), but they are constants, not variables;
– ϕ is the (pattern) constraint, an arbitrary FOL= formula.

We let Γ, Γ′, ..., range over patterns. For example, the IMP pattern

  ∃x, y.((□ = ⟨x := y/x⟩k ⟨x → x, y → y, ρ⟩env) ∧ x ≠ 0 ∧ y = x ∗Int z)

specifies configurations whose code is "x := y/x" and the value held by x is not 0 and is precisely z times smaller than the value held by y; variables ρ and z are free, so they are expected to be bound by the proof context (like in Hoare logic, free variables in specifications act as proof parameters). Note that we optionally called our matching logic specifications in Def. 1 "simple". The reason for doing so is that one can combine such specifications into more complex ones. For example, in our implementation briefly discussed in Sec. 6 we also allow disjunctive patterns Γ ∨ Γ′, which are useful for case analysis. In this paper we limit ourselves to simple matching logic specifications and we drop the adjective "simple". Let Var□ be the set Var ∪ {□} of variables Var extended with the special variable □ of sort Cfg. Valuations Var□ → T then consist of a concrete configuration γ corresponding to □ and of a map τ : Var → T; we write such valuations using a pairing notation, namely (γ, τ) : Var□ → T. We are now ready to introduce the concept of pattern matching that inspired the name of our axiomatic semantic approach.

Definition 2. Configuration γ matches pattern ∃X.((□ = c) ∧ ϕ) iff there is a τ : Var → T such that (γ, τ) |= ∃X.((□ = c) ∧ ϕ); if τ is relevant, we say that γ τ-matches the pattern. (γ, τ) |= ∃X.((□ = c) ∧ ϕ) is equivalent to saying that there exists θτ : Var → T agreeing with τ on Var \ X such that γ = θτ(c) and θτ |= ϕ.

We next introduce an important syntactic sugar notation for the common case when configurations are bags of cells, that is, when "Cfg ::= Bag[CfgItem]". If that is the case, then we let C, C′, ..., range over configuration item bag terms, possibly with variables.

Notation 1. If L's configurations have the form "Cfg ::= Bag[CfgItem]", if C is a Cfg-term (with variables), and if the variables in X are clear from context (say, their name starts with a "?", e.g., ?x), then we may write C ⟨ϕ⟩form instead of ∃X.((□ = C) ∧ ϕ).
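Definitions 1 and 2, together with Notation 1, suggest a direct implementation of pattern matching. The following Haskell sketch is our own simplification (it handles only the ⟨…⟩env cell, integer-valued ?-variables, and a rest variable ?ρ, which suffices to convey the idea):

  import qualified Data.Map as Map
  import Data.Map (Map)

  type Env = Map String Int

  -- an environment pattern: program variable |-> logical variable name,
  -- with the remaining bindings matched by the rest variable ?rho
  data EnvPat = EnvPat { binds :: [(String, String)], rest :: String }

  type Val = Map String Int   -- valuation of the integer logical variables

  -- returns the valuation of the ?-variables and the binding of ?rho
  match :: EnvPat -> Env -> Maybe (Val, Env)
  match (EnvPat bs _) env = go bs Map.empty env
    where
      go [] val rho = Just (val, rho)
      go ((x, a) : more) val rho = do
        v <- Map.lookup x rho
        case Map.lookup a val of
          Just w | w /= v -> Nothing   -- ?a already bound to a different value
          _ -> go more (Map.insert a v val) (Map.delete x rho)

  main :: IO ()
  main = do
    -- the environment of the configuration gamma from the introduction
    let env = Map.fromList [("x", 3), ("y", 3), ("z", 5)]
        pat = EnvPat [("x", "?a"), ("y", "?a")] "?rho"
        constraint val = maybe False (>= 0) (Map.lookup "?a" val)
    case match pat env of
      Just (val, rho) -> print (constraint val, Map.toList rho)
      Nothing         -> putStrLn "no match"

Running it binds ?a to 3, leaves z → 5 for ?ρ, and checks the constraint ?a ≥ 0, mirroring how γ matches the pattern in the introduction.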
The rationale for this notation is twofold: (1) it eases the writing and reading of matching logic proof systems by allowing meta-variables C, C′ to also range over the additional subcell when not important in a given context; and (2) it prepares the ground for deriving matching logic provers from operational semantics over such cell configurations. With this notation, the IMP pattern below Def. 1 can be written more uniformly as

  ⟨x := y/x⟩k ⟨x → ?x, y → ?y, ρ⟩env ⟨?x ≠ 0 ∧ ?y = ?x ∗Int z⟩form

Since patterns are special FOL= formulae, one can use FOL= reasoning to prove properties about patterns; e.g., one can show that the above pattern implies the one below:

  ⟨x := y/x⟩k ⟨x → ?x, y → ?x ∗Int z, ρ⟩env ⟨?x ≠ 0⟩form

A matching logic axiomatic semantics is given as a proof system for deriving sequents called correctness pairs, which can be thought of as "symbolic big-step sequents" as they relate configurations before the execution of a program fragment to configurations after:
Definition 3. A matching logic (partial) correctness pair consists of two patterns Γ and Γ′, written Γ ⇓ Γ′. We call Γ a pre-condition pattern and Γ′ a post-condition pattern.
Assuming a hypothetical big-step operational semantics of the language under consideration, (partial) correctness pairs specify properties of big-step sequents (over concrete configurations) γ ⇓ γ′, of the form "if γ τ-matches Γ then γ′ τ-matches Γ′". For example, if Γ is the IMP pattern above (any of them), then

  Γ ⇓ ⟨·⟩k ⟨x → z, ?ρ⟩env ⟨true⟩form

will be derivable with our matching logic proof system of IMP discussed in the sequel. Like in Hoare logic, most matching logic proof rules are language-specific, but there are also general-purpose proof rules. For example, the rules

  (ML-conseq)   |= Γ ⇒ Γ1,  Γ1 ⇓ Γ1′,  |= Γ1′ ⇒ Γ′   ⊢   Γ ⇓ Γ′
  (ML-subst)    Γ1 ⇓ Γ2   ⊢   ξ(Γ1) ⇓ ξ(Γ2)

(where ξ is a substitution over free variables) can be used in any matching logic proof context. Also like in Hoare logic, the formulae in the background theory can be used when proving implications. Theoretically, this is nothing special because one can always assume that each pattern constraint also includes the background formulae FL; practically, one would probably prefer to minimize the complexity of the patterns and thus keep the background theory separate, making use of its formulae whenever needed. Unlike in Hoare logic, in matching logic one can also add framing rules for any of the cells in one's language configurations, not only for the ⟨…⟩form cell. The rule

  (ML-frame)    ⟨μ⟩L C ⇓ ⟨μ′⟩L C′   ⊢   ⟨μ, F⟩L C ⇓ ⟨μ′, F⟩L C′

shows a generic framing rule for a cell L (for notational simplicity, we used the cell-based Notation 1; if L is form, then "_, _" is "_ ∧ _"). One should add framing rules on a case-by-case basis; for some complex configurations, one may not want to add framing rules for all cells. More details about framing and other generic rules can be found in our technical report [26].
  (ML-asgn)     C[e] ≡ v   ⊢   C x := e C[x ← v]
  (ML-seq)      C1 s1 C2,  C2 s2 C3   ⊢   C1 s1;s2 C3
  (ML-if)       C[e] ≡ v,  C ∧ (v ≠ 0) s1 C′,  C ∧ (v = 0) s2 C′   ⊢   C if (e) s1 else s2 C′
  (ML-while)    C[e] ≡ v,  C ∧ (v ≠ 0) s C   ⊢   C while (e) s C ∧ (v = 0)
  (ML-int)      ⊢   C[i] ≡ i
  (ML-lookup)   ⊢   (C ⟨x → v, ρ⟩env)[x] ≡ v
  (ML-op)       C[e1] ≡ v1,  C[e2] ≡ v2   ⊢   C[e1 op e2] ≡ v1 opInt v2

  Fig. 2. IMP matching logic formal system
Matching Logic Proof System for IMP. Figure 2 gives a matching logic proof system for IMP. To make it resemble the more familiar Hoare logic system (Fig. 1), we adopt the following notations:
  (C ⟨ρ⟩env)[x ← v]   for   C ⟨ρ[v/x]⟩env
  (C ⟨ϕ⟩form) ∧ ϕ′    for   C ⟨ϕ ∧ ϕ′⟩form
  C[e] ≡ v            for   ⟨e⟩k C ⇓ ⟨v⟩k C
  C s C′              for   ⟨s⟩k C ⇓ ⟨·⟩k C′
The meta-variables C, C′, C1, C2 above and in Fig. 2 range over appropriate configuration item bag terms so that the resulting patterns are well-formed. The first notation can be included as an operation in the algebraic signature of IMP's configurations; the third works because IMP's expressions are side-effect-free (so C does not change). In the case of IMP, configurations have only two subcells, the code and an environment. Using generic meta-variables like C instead of more concrete configuration item terms is key to the modularity of matching logic definitions and proofs. Indeed, to add heaps to IMP in Sec. 5 we only add new rules for the new language constructs (none of the rules in Fig. 2 changes), and let C include an additional heap cell. The rule (ML-lookup) in Fig. 2, which desugared says that

  ⟨⟨x⟩k ⟨x → v, ρ⟩env C⟩config ⇓ ⟨⟨v⟩k ⟨x → v, ρ⟩env C⟩config

is derivable, shows an interesting and common situation where a configuration contains a term which, in order to make sense, needs to satisfy additional constraints. Indeed, the term "x → v, ρ" appears in the ⟨…⟩env cell, which wraps a map structure, so one expects that x is not in the domain of ρ. To avoid notational clutter, we always assume that sufficient conditions are given in configurations'
constraints so that each term appearing in any configuration is well-defined, according to its sort's corresponding notion of well-definedness in the background theory, if any. If the background theory does not axiomatize well-definedness for certain sorts, then one may miss the opportunity to derive certain correctness pairs, but it is important to understand that one cannot derive wrong facts. Indeed, the correctness pair above is sound no matter whether C's constraint states that x is not in the domain of ρ or not, because if x is in the domain of ρ then no concrete configuration will ever match the pre-condition. Despite its operational flavor (e.g., see (ML-asgn)), the matching logic proof system is as rigorous and compositional as the Hoare logic one; in particular, the rule (ML-while) is almost identical to (HL-while). The soundness of this particular matching logic proof system follows directly from Thm. 1 and the soundness of the Hoare logic proof system in Fig. 1. We additionally have a direct proof of soundness, with respect to an operational semantics, that can be found in our technical report [27].
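Instantiated on concrete configurations, the rules of Fig. 2 read as a big-step evaluator, which is what makes turning them into forward provers so direct. The following Haskell sketch is our own rendering of that concrete reading (it executes configurations rather than symbolic patterns, and covers a single binary operator):

  import qualified Data.Map as Map
  import Data.Map (Map)

  data Exp  = Lit Int | PVar String | Add Exp Exp
  data Stmt = Skip | Asgn String Exp | Seq Stmt Stmt
            | If Exp Stmt Stmt | While Exp Stmt

  type Env = Map String Int

  -- C[e] ≡ v, concretely: (ML-int), (ML-lookup), (ML-op)
  evalE :: Env -> Exp -> Int
  evalE _   (Lit i)   = i
  evalE env (PVar x)  = Map.findWithDefault (error "unbound variable") x env
  evalE env (Add a b) = evalE env a + evalE env b

  -- <s>k C to <.>k C', concretely: (ML-asgn), (ML-seq), (ML-if), (ML-while)
  exec :: Stmt -> Env -> Env
  exec Skip env        = env
  exec (Asgn x e) env  = Map.insert x (evalE env e) env   -- C[x <- v]
  exec (Seq s1 s2) env = exec s2 (exec s1 env)
  exec (If e s1 s2) env
    | evalE env e /= 0 = exec s1 env
    | otherwise        = exec s2 env
  exec w@(While e s) env
    | evalE env e /= 0 = exec w (exec s env)
    | otherwise        = env

  main :: IO ()
  main = print . Map.toList $
    exec (Seq (Asgn "x" (Lit 1)) (Asgn "y" (Add (PVar "x") (Lit 1))))
         (Map.fromList [("x", 3)])
    -- prints [("x",1),("y",2)]

The symbolic prover of Sec. 6 follows the same rule shapes, but carries patterns (with ?-variables and a ⟨…⟩form cell) instead of concrete environments.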
4 Equivalence of Hoare and (A Restricted Form of) Matching Logic

We show that, for IMP, any property provable using Hoare logic is also provable using matching logic, and vice versa. Our proof reductions are mechanical in both directions, which means that one can automatically generate a matching logic proof from any Hoare logic proof and vice versa. For the embedding of Hoare logic into matching logic we do not use the fact that the configuration contains only an environment and a computation, so this result also works for other languages that admit Hoare logic proof systems. Before we proceed with the technical constructions, we need to impose a restriction on the structure of matching logic patterns to be used throughout this section. Note that a pattern of the form C ⟨x → x′⟩env specifies configurations whose environments only declare x (and its value is x′), while a pattern C ⟨·⟩env specifies configurations with empty ("·") environments. Thus, one is able to derive ⟨x → x′⟩env x := x − x ⟨x → 0⟩env, but it is impossible to derive ⟨·⟩env x := x − x ⟨x → 0⟩env: one will never be able to "evaluate" x in the empty environment. However, note that the obvious Hoare logic equivalent, namely {true} x := x − x {x = 0}, is unconditionally derivable. To avoid such situations, we: (1) fix a finite set of program variables Z ⊂ PVar which is large enough to include all the program variables that appear in the original program that one wants to verify; and (2) restrict the IMP matching logic patterns to ones whose environments have precisely the domain Z. The need for this restriction in order to prove the equivalence of the two formal systems suggests that matching logic patterns allow for more informative specifications than Hoare logic. Also, we assume that Z′ ⊆ Var is a set of "semantic clones" of the program variables in Z, that is, Z′ = {z′ | z ∈ Z}, where z′ denotes the semantic clone of the program variable z, and that the semantic variables in Z′ are reserved only for this purpose. Also, let ρZ be the special environment mapping each program variable z ∈ Z into its corresponding semantic clone z′ ∈ Z′. We first define mappings H2M and M2H taking Hoare logic correctness triples to matching logic correctness pairs, and matching logic correctness pairs to Hoare logic correctness triples, respectively. Then we show in Thm. 1 that these mappings are
logically inverse to each other and that they take derivable sequents in one logic to derivable sequents in the other logic; for example, if a correctness triple {ϕ} s {ϕ′} is derivable with the Hoare logic proof system in Fig. 1, then the correctness pair H2M({ϕ} s {ϕ′}) is derivable with the matching logic proof system in Fig. 2.

4.1 H2M: From Hoare to Matching Logic

Hoare logic makes no distinction between program and logic variables. Let variables that appear in Hoare specifications but not in the original program be semantic variables in Var. Let H2M(ϕ, s) be an auxiliary map taking formulae ϕ and statements s to patterns as follows:

  H2M(ϕ, s) =def ∃Z′.((□ = ⟨s⟩k ⟨ρZ⟩env) ∧ ρZ(ϕ))

Hence, H2M(ϕ, s) is a pattern whose code is s, whose environment ρZ maps each z ∈ Z in ϕ or in s into its semantic clone z′ ∈ Z′, and whose constraint ρZ(ϕ) renames all the program variables in ϕ into their semantic counterparts. We now define the mapping from Hoare logic correctness triples into matching logic correctness pairs as follows:

  H2M({ϕ} s {ϕ′}) =def H2M(ϕ, s) ⇓ H2M(ϕ′, ·)

For example, if Z = {x, z} then H2M({x > 0 ∧ z = u} z := x + z {z > u}) is

  ∃x′, z′.((□ = ⟨z := x + z⟩k ⟨x → x′, z → z′⟩env) ∧ x′ > 0 ∧ z′ = u)
  ⇓ ∃x′, z′.((□ = ⟨skip⟩k ⟨x → x′, z → z′⟩env) ∧ z′ > u)

The resulting correctness pairs are quite intuitive, making use of pattern-bound variables as a bridge between the program variables and the semantic constraints on them.
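The renaming ρZ is the only non-trivial ingredient of H2M and is easy to mechanize. The following Haskell sketch is our own illustration (the formula syntax is a stand-in, and we render the semantic clone of a program variable z as z'):

  -- renaming rho_Z: program variables in a pre/post-condition become clones
  data Term = TVar String | TInt Int | TAdd Term Term deriving Show
  data Fol  = Eq Term Term | Gt Term Term | And Fol Fol deriving Show

  clone :: [String] -> String -> String
  clone zs x = if x `elem` zs then x ++ "'" else x

  renT :: [String] -> Term -> Term
  renT zs (TVar x)   = TVar (clone zs x)
  renT _  (TInt i)   = TInt i
  renT zs (TAdd a b) = TAdd (renT zs a) (renT zs b)

  renF :: [String] -> Fol -> Fol
  renF zs (Eq a b)  = Eq (renT zs a) (renT zs b)
  renF zs (Gt a b)  = Gt (renT zs a) (renT zs b)
  renF zs (And f g) = And (renF zs f) (renF zs g)

  main :: IO ()
  main = print (renF ["x", "z"]
                 (And (Gt (TVar "x") (TInt 0)) (Eq (TVar "z") (TVar "u"))))
  -- prints: And (Gt (TVar "x'") (TInt 0)) (Eq (TVar "z'") (TVar "u"))

Note that u, which is not in Z, is left untouched, just as in the example above.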
4.2 M2H: From Matching to Hoare Logic

Given an environment ρ = (x1 → v1, x2 → v2, ..., xn → vn), let ρ̂ be the FOL= formula x1 = v1 ∧ x2 = v2 ∧ ... ∧ xn = vn. We define the mapping M2H taking matching logic statement correctness pairs into Hoare logic correctness triples as follows:

  M2H(∃X.((□ = ⟨s⟩k ⟨ρ⟩env) ∧ ϕ) ⇓ ∃X′.((□ = ⟨skip⟩k ⟨ρ′⟩env) ∧ ϕ′)) = {∃X.(ρ̂ ∧ ϕ)} s {∃X′.(ρ̂′ ∧ ϕ′)}

For example, if Γ ⇓ Γ′ is the correctness pair in Sec. 4.1, then M2H(Γ ⇓ Γ′) is

  {∃x′, z′.(x = x′ ∧ z = z′ ∧ x′ > 0 ∧ z′ = u)} z := x + z {∃x′, z′.(x = x′ ∧ z = z′ ∧ z′ > u)}

We say that two FOL formulae ϕ1 and ϕ2 are logically equivalent iff |= ϕ1 ⇔ ϕ2. Moreover, correctness triples {ϕ1} s {ϕ1′} and {ϕ2} s {ϕ2′} are logically equivalent iff |= ϕ1 ⇔ ϕ2 and |= ϕ1′ ⇔ ϕ2′; similarly, matching logic correctness pairs Γ1 ⇓ Γ1′ and Γ2 ⇓ Γ2′ are logically equivalent iff |= Γ1 ⇔ Γ2 and |= Γ1′ ⇔ Γ2′. Thanks to the rules (HL-conseq) and (ML-conseq), logically equivalent sequents are either both or neither derivable. Since ∃x′, z′.(x = x′ ∧ z = z′ ∧ x′ > 0 ∧ z′ = u) is logically equivalent to x > 0 ∧ z = u, and since ∃z′.(z = z′ ∧ z′ > u) is logically equivalent to z > u, we can conclude that the correctness triple M2H(Γ ⇓ Γ′) above is logically equivalent to {x > 0 ∧ z = u} z := x + z {z > u}.
Theorem 1 (Equivalence of Matching Logic and Hoare Logic for IMP). Any given Hoare triple {ϕ} s {ϕ′} is logically equivalent to M2H(H2M({ϕ} s {ϕ′})), and any matching logic correctness pair Γ ⇓ Γ′ is logically equivalent to H2M(M2H(Γ ⇓ Γ′)). Moreover, for any Hoare logic proof of {ϕ} s {ϕ′} one can construct a matching logic proof of H2M({ϕ} s {ϕ′}), and for any matching logic proof of Γ ⇓ Γ′ one can construct a Hoare logic proof of M2H(Γ ⇓ Γ′). (Proof given in [27])
5 Adding a Heap

We next work with HIMP (IMP with a heap), an extension of IMP with dynamic memory allocation/deallocation and pointer arithmetic. We show that the IMP matching logic formal system extends modularly to HIMP. The heap allows for introducing and axiomatizing heap data structures by means of pointers, such as lists, trees, graphs, etc. Unlike in separation logic, where such data structures are defined by means of recursive predicates, we define them as ordinary term constructs, with natural FOL= axioms saying how they can be identified and/or manipulated in configurations or patterns.

HIMP Configuration. The configuration of HIMP extends that of IMP with a heap, or memory, cell:

  S ::= ... | PVar := cons(List[E]) | dispose(E) | PVar := [E] | [E1] := E2
  CfgItem ::= ... | ⟨Mem⟩mem
  Mem ::= Map[Nat+, Int]

A heap is a (partial) map structure just like the environment, but from positive naturals (also called pointers) to integers. We use no special connective to construct heaps, as they are simply maps like any other. Our heap-related constructs are based on those described by Reynolds [25]. The cons construct is used to simultaneously allocate memory and assign to it, while [E] is used for lookup when on the right-hand side of an assignment, and for mutation when on the left-hand side. This is very much like the ∗ operator in C. Finally, dispose(E) removes a single mapping from the heap.

Matching Logic Definition of HIMP. Figure 3 shows the rules that need to be added to those in Fig. 2 to obtain a matching logic formal system for HIMP. Since patterns inherit the structure of configurations, matching logic is as modular as the underlying configuration. In particular, none of the matching logic rules in Fig. 2 need change. To obtain a matching logic semantics for HIMP, all we need to add is one rule for each new language construct, as shown in Fig. 3. To save space, we write "C[(e1, e2, ..., en)] ≡ (v1, v2, ..., vn)" for "C[e1] ≡ v1 and C[e2] ≡ v2 and ... and C[en] ≡ vn", and "?p → [v1, ..., vn]" instead of "?p → v1, ..., (?p + n − 1) → vn". One can now easily derive, e.g.,
  ⟨x := cons(1, x); [x] := [x + 1]; dispose(x + 1)⟩k ⟨x → x′, ρ⟩env ⟨σ⟩mem ⟨ϕ⟩form
  ⇓ ⟨·⟩k ⟨x → ?p, ρ⟩env ⟨?p → x′, σ⟩mem ⟨ϕ ∧ ?p ≥ 0⟩form

where x′, ρ and σ are free variables of appropriate sorts and ?p is a bound variable. As we already mentioned, to keep the presentation simple we assume that a pattern's constraint is strong enough to guarantee that each term appearing in the pattern is well-defined.

  (ML-cons)       (⟨ρ⟩env ⟨σ⟩mem C)[(e1, ..., en)] ≡ (v1, ..., vn)
                  ⊢  ⟨ρ⟩env ⟨σ⟩mem C x := cons(e1, ..., en) ⟨ρ[?p/x]⟩env ⟨?p → [v1, ..., vn], σ⟩mem C ∧ (?p ≥ 0),
                  where ?p ∈ Var is a fresh variable
  (ML-dispose)    (⟨v → v′, σ⟩mem C)[e] ≡ v  ⊢  ⟨v → v′, σ⟩mem C dispose(e) ⟨σ⟩mem C
  (ML-[lookup])   (⟨ρ⟩env ⟨v → v′, σ⟩mem C)[e] ≡ v  ⊢  ⟨ρ⟩env ⟨v → v′, σ⟩mem C x := [e] ⟨ρ[v′/x]⟩env ⟨v → v′, σ⟩mem C
  (ML-[mutate])   (⟨v1 → v2, σ⟩mem C)[(e1, e2)] ≡ (v1, v2′)  ⊢  ⟨v1 → v2, σ⟩mem C [e1] := e2 ⟨v1 → v2′, σ⟩mem C

  Fig. 3. HIMP matching logic formal system (these rules are to be added to those in Fig. 2)
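On concrete configurations, the four rules of Fig. 3 read as ordinary heap operations. The following Haskell sketch is our own rendering; in particular, the allocation strategy (picking the location after the current maximum) is an assumption of the sketch, since (ML-cons) only requires some fresh ?p.

  import qualified Data.Map as Map
  import Data.Map (Map)

  type Env  = Map String Int
  type Heap = Map Int Int

  -- x := cons(v1, ..., vn): allocate n fresh consecutive locations
  consOp :: String -> [Int] -> (Env, Heap) -> (Env, Heap)
  consOp x vs (env, h) =
    let p = if Map.null h then 1 else fst (Map.findMax h) + 1
    in (Map.insert x p env, Map.union h (Map.fromList (zip [p ..] vs)))

  -- dispose(v): remove a single mapping
  disposeOp :: Int -> (Env, Heap) -> (Env, Heap)
  disposeOp v (env, h) = (env, Map.delete v h)

  -- x := [v]: heap lookup
  lookupOp :: String -> Int -> (Env, Heap) -> (Env, Heap)
  lookupOp x v (env, h) = (Map.insert x (h Map.! v) env, h)

  -- [v1] := v2: heap mutation
  mutateOp :: Int -> Int -> (Env, Heap) -> (Env, Heap)
  mutateOp v1 v2 (env, h) = (env, Map.insert v1 v2 h)

  main :: IO ()
  main = do
    let st0 = (Map.fromList [("x", 7)], Map.empty)
        st1 = consOp "x" [1, 7] st0      -- x := cons(1, x), with old x = 7
    print st1  -- (fromList [("x",1)], fromList [(1,1),(2,7)])

The symbolic rules differ from this concrete reading only in that ?p stays a constrained variable rather than a chosen number.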
That is the reason for which we added the constraint ?p ≥ 0 to the post-condition of the bottom correctness pair in (ML-cons) in Fig. 3, and also the reason for which we did not require that the pre-conditions' constraints in the rules (ML-dispose), (ML-[lookup]) and (ML-[mutate]) guarantee that v (for the first two) and v1 (for the third), respectively, are not in the domain of σ. As discussed at the end of Sec. 3, matching logic proof rules are still sound even if the patterns' constraints do not guarantee well-definedness of all the terms appearing in the pattern, but if this information is missing then one may not be able to derive certain correctness pairs that would otherwise be derivable. In our implementation, we prefer to keep the formula in ⟨…⟩form free of well-definedness information and, instead, collect that information by need from the rest of the pattern; e.g., we do not need to add the constraint ?p ≥ 0 in our implementation of (ML-cons).

Defining and Using Heap Patterns. Most complex programs organize heap data in structures such as linked lists, trees, graphs, etc. To verify such programs, one needs to be able to specify and reason about heap structures. Consider linked lists whose nodes consist of two consecutive locations: an integer (data) followed by a pointer to the next node (or 0 for the list end). One is typically interested in reasoning about the sequences of integers held by such list structures. It is then natural to define a list heap constructor "list : Nat × List[Int] → Mem" taking a pointer (the location where the list starts) and a sequence of integers (the data held by the list, with "ε" for the empty sequence and ":" for sequence concatenation) and yielding a fragment of memory. It does not make sense to define this as one would a function, since it is effectively non-deterministic, but it can be axiomatized as a FOL= formula as follows, in terms of patterns:

  ⟨list(p, α), σ⟩mem ⟨ϕ⟩form C
  ⇔ ⟨σ⟩mem ⟨p = 0 ∧ α = ε ∧ ϕ⟩form C
  ∨ ⟨p → [?a, ?q], list(?q, ?β), σ⟩mem ⟨α = ?a:?β ∧ ϕ⟩form C

In words, a list pattern can be identified in the heap starting with pointer p and containing integer sequence α iff either the list is empty, so it takes no memory and its pointer
is null (0), or the list is non-empty, so it holds its first element at location p and a pointer to a list containing the remaining elements at location p + 1. Using this axiom, one can prove properties about patterns, such as:

  ⟨5 → 2, 6 → 0, 8 → 3, 9 → 5, σ⟩mem C ⇒ ⟨list(8, 3:2), σ⟩mem C,   and
  ⟨list(8, 3:2), σ⟩mem C ⇒ ⟨8 → 3, 9 → ?q, ?q → 2, ?q + 1 → 0, σ⟩mem C

Similar axiomatizations are given in [27] for other heap patterns (trees, queues, graphs) and are supported by our current matching logic verifier (briefly discussed below). It is worthwhile emphasizing that we are here talking about axiomatic, as opposed to constructive, definitions of heap patterns. Axioms like the one for lists above are simply added to the background theory. Like with the other axioms in the background theory, one can attempt to prove them from basic principles of mathematics and constructive definitions if one wants to have full confidence in the results of verification.
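Over finite, concrete heaps the list pattern has a simple decision procedure, which is a useful mental model for how the axiom above unfolds. The following Haskell sketch is ours (it checks that a heap fragment is exactly list(p, α), with the next-pointer stored at p + 1, as above):

  import qualified Data.Map as Map
  import Data.Map (Map)

  type Heap = Map Int Int   -- positive locations to integers

  -- a node at p holds its value at p and the next pointer at p + 1; 0 ends
  isList :: Int -> [Int] -> Heap -> Bool
  isList 0 alpha heap = null alpha && Map.null heap
  isList p (a : beta) heap =
    case (Map.lookup p heap, Map.lookup (p + 1) heap) of
      (Just v, Just q) ->
        v == a && isList q beta (Map.delete p (Map.delete (p + 1) heap))
      _ -> False
  isList _ [] _ = False

  main :: IO ()
  main = do
    let heap = Map.fromList [(5, 2), (6, 0), (8, 3), (9, 5)]
    print (isList 8 [3, 2] heap)   -- True: this fragment is list(8, 3:2)

The example heap is precisely the fragment 5 → 2, 6 → 0, 8 → 3, 9 → 5 used in the first implication above.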
6 Proof Example and Practical Experience

To give a more practical understanding of matching logic, here we describe a concrete example as well as our verification results using a matching logic verifier for a fragment of C implemented in Maude, which is available for download and online execution at http://fsl.cs.uiuc.edu/ml.

List-Reverse Example. Consider proving that a program correctly reverses a list. A similar proof, but using separation logic, is given by Reynolds in [25]. Given x pointing to the beginning of a linked list, the following HIMP program reverses that list in place:

  p:=0; while (x!=0) ( y:=[x+1]; [x+1]:=p; p:=x; x:=y )

We assume each list node is two contiguous memory locations, the first containing the value, the second containing a pointer to the next node. Initially [x] is the value of the first node and [x + 1] points to the second node in the list. Matching logic uses the standard notion of loop invariants to prove loops correct. The fundamental idea is to find one pattern that is always true when leaving the loop, whether the loop condition is true or false. As before, the order of the configuration pieces does not matter. The invariant configuration for our reverse program can be:

  ⟨p → ?p, x → ?x, y → ?x⟩env ⟨list(?p, ?β), list(?x, ?γ)⟩mem ⟨rev(α) = rev(?γ):?β⟩form

where the environment binds program variable p to pointer ?p and program variables x and y to the same value ?x. In the memory we see list(?p, ?β) and list(?x, ?γ)—two disjoint lists, the first starting with pointer ?p and holding sequence ?β, and the second starting with pointer ?x and holding sequence ?γ. Unlike in separation logic, where "list" needs to be a predicate holding in a separate portion of the heap, in matching logic "list" is an ordinary operation symbol added to the signature and constrained through axioms as shown in Sec. 5. The pattern formula guarantees that rev(α) = rev(?γ):?β, where
α, the original sequence of integers in the list pointed to by x, is the only non-bound variable in the pattern. Now we see how the pattern progresses as we move into the loop:

  ⟨p → ?p, x → ?x, y → ?x⟩env ⟨list(?p, ?β), list(?x, ?γ)⟩mem ⟨rev(α) = rev(?γ):?β ∧ ?x ≠ 0⟩form

Inside the body of the while loop, we know that the guarding condition is true, so we assume it by adding it to our formula. Now that we know ?x is not nil, we can expand the definition of the list list(?x, ?γ) in the heap and thus yield:

  ⟨p → ?p, x → ?x, y → ?x⟩env ⟨list(?p, ?β), ?x → [?a, ?x′], list(?x′, ?γ′)⟩mem ⟨rev(α) = rev(?γ):?β ∧ ?x ≠ 0 ∧ ?γ = ?a:?γ′⟩form

This pattern now contains all the configuration infrastructure needed to process the four assignments y:=[x+1]; [x+1]:=p; p:=x; x:=y, which yield the following pattern:

  ⟨p → ?x, x → ?x′, y → ?x′⟩env ⟨list(?p, ?β), ?x → [?a, ?p], list(?x′, ?γ′)⟩mem ⟨rev(α) = rev(?γ):?β ∧ ?x ≠ 0 ∧ ?γ = ?a:?γ′⟩form

The list axiom can be applied again to the resulting pattern to reshape its heap into one having a list at ?x. We can additionally use the fact that ?γ = ?a:?γ′ to rewrite rev(α) = rev(?γ):?β to rev(α) = rev(?a:?γ′):?β. The axioms for reverse then tell us this is equivalent to rev(α) = rev(?γ′):?a:?β. We therefore obtain the following pattern:

  ⟨p → ?x, x → ?x′, y → ?x′⟩env ⟨list(?x, ?a:?β), list(?x′, ?γ′)⟩mem ⟨rev(α) = rev(?γ′):?a:?β ∧ ?x ≠ 0⟩form

Now we are in a position to prove that this pattern logically implies the original invariant. Recall that bound variables are quantified existentially. It is then easy to see that since this is a more specific pattern than the invariant itself, the invariant follows logically from this pattern. Thus, we have shown that the invariant holds again at the end of each iteration of the loop body.

Experience with Our Verifier. We have implemented a matching logic verifier for a fragment of C, using Maude [4]. Since matching logic is so close to operational semantics, our implementation essentially modifies a rewriting logic executable semantics of the language, in the style presented in [28, 20], turning it into a matching logic prover. Our prover extends the language with assume, assert, and invariant commands, each taking a pattern. For example, Figure 4 shows the list-reverse example above, as given to our verifier. After execution, the following result is output to the user:

  rewrites: 3146 in 5ms cpu (5ms real) (524420 rewrites/second)
  result Result: 2 feasible and 3 infeasible paths
Our verifier is path-based, cutting paths as soon as they are found infeasible. Each assertion (including the two implicit ones associated with an invariant) results in a proof obligation, namely that the current pattern matches (or implies) the asserted one. To prove such implications, we implemented a simple FOL= prover which: (1) skolemizes the variables bound in the hypothesis pattern; (2) iterates through each subcell in the hypothesis pattern and attempts to match its contents against the corresponding cell in the conclusion, this way accumulating a substitution of the variables bound in the conclusion pattern; (3) applies the computed substitution on the fly, together with equational simplification axioms for various mathematical domains (such as integer arithmetic, lists, trees, etc.); (4) adds the remaining matching tasks, which cannot be solved using Maude's term matching and the above substitution propagation, to the constraints of the conclusion pattern; (5) eventually, all that is left is the two ⟨…⟩form cells, which may generate an implication of formulae over various domains; if that is the case, we send the resulting formula to the Z3 SMT solver (we have modified/recompiled Maude for this purpose).

  assume
    <env> p |-> ?p, x |-> ?x, y |-> ?y </env>
    list(?x)(A) ;
  p = 0 ;
  invariant
    <env> p |-> ?p, x |-> ?x, y |-> ?y </env>
    list(?p)(?B), list(?x)(?C) ;
  while(x != 0) {
    y = *(x + 1) ;
    *(x + 1) = p ;
    p = x ;
    x = y ;
  }
  assert
    <env> p |-> ?p, x |-> ?x, y |-> ?y </env>
    list(?p)(rev(A)) ;

  Fig. 4. The reverse example in our matching logic prover

In most of our experiments, including the one in Fig. 4, our Maude domain axiomatizations were sufficient to discharge all proof obligations without a need for an external SMT solver. One can download or use our matching logic verifier through an online interface at http://fsl.cs.uiuc.edu/ml. The following examples are available on the website and can be verified in less than 1 second (all of them, together; we use a 2.5GHz Linux machine to run the online interface): the sum of numbers from 1 to n, three variants of list reverse, list append, list length, queue enqueuing and dequeuing, transferring elements from one queue to another (using transfer of ownership and stealing),
mirroring a tree, converting (the leaves of) a tree to a list, and insertion-, bubble-, quick- and merge-sort using linked lists as in this paper. All proofs are of complete correctness, not only memory safety. Additionally, both aspects are always verified using a single prover—there is no need for one prover to verify correctness and another to verify memory safety. In fact, matching logic makes no distinction between the two kinds of correctness. We have also verified the Schorr-Waite graph marking algorithm [13] (used in garbage collection); the details can be found in [27]. Our verifier automatically generated and proved all 227 paths in a few seconds. The formalization uses novel axiomatizations of clean and marked partial graphs, as well as a specific stack-in-graph structure, which records that during the execution of Schorr-Waite the heap consists at any time of such a stack and either a clean or a marked partial subgraph. To the best of our knowledge, this is the first time the partial correctness of this algorithm has been verified automatically. We use the word "automatically" in the sense that no user intervention is required to make the proof go through other than correctly describing the invariant. Previous automated efforts have either proved only its memory safety [16] or a version restricted to trees [19].
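As a concrete complement to the list-reverse proof at the beginning of this section, one can also execute the loop on a small heap and test the invariant rev(α) = rev(?γ):?β at every iteration. The following Haskell sketch is our own simulation (the heap layout and all names are assumptions of the sketch; it is independent of the Maude verifier):

  import qualified Data.Map as Map
  import Data.Map (Map)

  type Heap = Map Int Int

  -- the integer sequence held by the list starting at p (0 ends the list)
  collect :: Heap -> Int -> [Int]
  collect _ 0 = []
  collect h p = h Map.! p : collect h (h Map.! (p + 1))

  -- one iteration of: y:=[x+1]; [x+1]:=p; p:=x; x:=y
  step :: (Heap, Int, Int) -> (Heap, Int, Int)
  step (h, p, x) = (Map.insert (x + 1) p h, x, h Map.! (x + 1))

  -- run the loop, checking the invariant before every iteration
  reverseLoop :: [Int] -> (Heap, Int, Int) -> Bool
  reverseLoop alpha st@(h, p, x)
    | not invariant = False
    | x == 0        = reverse alpha == collect h p   -- the postcondition
    | otherwise     = reverseLoop alpha (step st)
    where invariant = reverse alpha == reverse (collect h x) ++ collect h p

  main :: IO ()
  main = do
    -- the list 1, 2, 3 laid out as (value, next) pairs at locations 1, 3, 5
    let h0 = Map.fromList [(1,1),(2,3),(3,2),(4,5),(5,3),(6,0)]
    print (reverseLoop [1,2,3] (h0, 0, 1))   -- True

Such concrete runs are of course no substitute for the symbolic proof, but they are a cheap way to debug a candidate invariant before handing it to the verifier.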
7 Conclusion, Related Work and Future Work

This paper introduced matching logic and showed how it relates to the most traditional logic for program verification, Hoare logic. Matching logic program specifications are constrained algebraic structures, called patterns, formed by allowing (and constraining) variables in program configurations. Configurations satisfy patterns iff they match their structure consistently with their constraints. Matching logic formal systems mimic (big-step) operational semantics, making them relatively easy to turn into forwards-analysis program verifiers. However, specifications tend to be more verbose in matching logic than in Hoare logic, because one may need to mention a larger portion of the program configuration. On the other hand, matching logic appears to handle language extensions better than Hoare logic, precisely because it has direct access to the program configuration.

Matching logic is related to many verification approaches; here we only briefly discuss its relationships to Floyd/Hoare logics, evolving algebras/specifications, separation logic, shape analysis, and dynamic logic. There are many Hoare-logic-based verification frameworks, such as ESC/Java [9], the Spec# tool [1], HAVOC [17], and VCC [5]. Caduceus/Why [8, 16] proved many properties relating to the Schorr-Waite algorithm; however, their proofs were not entirely automated. The weakness of traditional Hoare-like approaches is that reasoning about non-inductively defined data types and about heap structures tends to be difficult, requiring extensive manual intervention in the proof process.

The idea of regarding a program (fragment) as a specification transformer in order to analyze programs in a forwards style is very old. In fact, Floyd did precisely that in his seminal 1967 paper [10]: if ϕ holds before x := e is executed, then ∃v. (x = e[v/x]) ∧ ϕ[v/x] holds after. Thus, the assignment statement can be naturally regarded as a transition, or a rewrite, from one FOL formula to another.
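As a concrete instance of Floyd's rule (our illustration, straightforwardly derived from the formula above): taking ϕ ≡ (x = 1) and the assignment x := x + 1, the transformed formula is ∃v. (x = v + 1) ∧ (v = 1), which is logically equivalent to x = 2, exactly the expected postcondition.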
Equational algebraic specifications have also been used to express pre- and post-conditions and then verify programs in a forwards manner using term rewriting [11]. Evolving specifications [24], building upon intuitions from evolving algebras and ASMs, adapt and extend this basic idea to compositional systems, refinement, and behavioral specifications. Many other verification approaches, some discussed above or below, can be cast as formula-transforming ones, and matching logic is no exception. What distinguishes the various approaches is the formalism and style used for specifications. What distinguishes matching logic is its apparently very low-level formalism, which drops no detail from the program configuration and thus makes it resemble operational semantics. The other approaches we are aware of attempt to do the opposite, namely to use formalisms that are as abstract as possible. Matching logic builds upon the belief that there are advantages to working with explicit configuration patterns instead of abstract formulae, and that the use of symbolic variables in configurations can still offer a comfortable level of abstraction by mentioning in each rule only those configuration components which are necessary.

Separation logic [23,25] is an extension of Hoare logic. There are many variants and extensions of separation logic that we cannot discuss here. There is a major difference between separation and matching logic: the former extends Hoare logic to work better with heaps, while matching logic attempts to provide an alternative to Hoare logics in which the program configuration structure is explicit in the specifications, so heaps are treated uniformly, just like any other structures in the configuration. Smallfoot [3] and jStar [7] are separation logic tools with good support for proving memory safety. Shape analysis [29] allows one to examine and verify properties of heap structures. It has been shown to be quite powerful when reasoning about heaps, leading to an automated proof of total correctness for a variant of the Schorr-Waite algorithm [19] restricted to binary trees. The ideas of shape analysis have also been combined with those of separation logic [6] to quickly infer invariants for programs operating on lists.

Dynamic logic (DL) [14] extends FOL with modal operators to embed program fragments within program specifications. For example, a partial correctness Hoare triple {ϕ} s {ψ} can be represented as a DL formula ϕ → [s]ψ, where the meaning of the formula [s]ψ is "ψ holds in any state reached after executing s". The advantage of dynamic logic is that programs and specifications coexist in the same logic, so one needs no other encodings or translations. Perhaps the most mature program verification project based on dynamic logic is KeY [2]. The KeY project and matching logic have many common goals and similarities. In particular, both attempt to serve as an alternative, rather than an extension, to Hoare logic, and both their current implementations rely on symbolic execution rather than weakest preconditions. Even though in principle DL could employ FOL= and configurations, its current uses are still less explicit than the patterns used in matching logic, so one may still need logical encodings of configuration components such as stacks, heaps, pointer maps, etc. It could also be possible to devise a dynamic matching logic where the program fragment is pulled out of patterns and moved into modalities.
However, one of the practical benefits of matching logic is that its patterns make no distinction between code and other configuration components, allowing transitions/rewrites between patterns in which the code is not involved at all (e.g., defining message delivery, or garbage collection).
Finally, there is a large body of work on embedding various semantic styles, including operational ones, into higher-level formalisms, and then using the latter to formally relate two or more semantics of the same language, or to prove properties about the embedded languages or about their programs. Relating semantics is practical, because one can use some semantics for some purposes and others for other purposes (e.g., executability versus verification). A representative example in this category is [22], which does that for a language almost identical to our IMP. Note, however, that there is a sharp distinction between such embedding approaches and matching logic, both in purpose and in shape. Matching logic is not an embedding of an operational semantics or of anything else; it is a program verification logic, like Hoare logic, but one inspired by operational semantics. We proved its relationship to Hoare logic in this paper to put it in context, and not to use their relationship as a basis for program verification; in fact, we advocate that one can gain some benefits from using matching logic instead of Hoare logic for program verification. Nevertheless, like Hoare logic or other semantics, matching logic can also be embedded into higher-level formalisms, which can then be used to mechanize proofs of relationships to other semantics (such as the result claimed in this paper), or even to verify programs. Our current implementation itself can be regarded as such an embedding of matching logic, namely into rewriting logic.

Matching logic is new, so there is much work left to be done. The intrinsic separation available in matching logic might simplify verifying concurrent programs with shared resource access. Also, we would like to infer pattern loop invariants; since configurations in our approach are just ground terms that are being rewritten by semantic rules, and since patterns are terms over the same signature with constrained variables, we believe that narrowing and/or anti-unification are good candidates for approaching the problem. Since our matching logic verification approach makes language semantics practical, we believe that it will stimulate interest in giving formal rewriting logic semantics to various programming languages. Finally, we have paid no attention to compacting the representation of patterns in user annotations; we are experimenting with a variant of our prover in which the environment is implicit, so one need not mention it in patterns anymore.
References

1. Barnett, M., Leino, K.R.M., Schulte, W.: The Spec# programming system. In: Barthe, G., Burdy, L., Huisman, M., Lanet, J.-L., Muntean, T. (eds.) CASSIS 2004. LNCS, vol. 3362, pp. 49–69. Springer, Heidelberg (2005)
2. Beckert, B., Hähnle, R., Schmitt, P.H. (eds.): Verification of Object-Oriented Software. LNCS (LNAI), vol. 4334. Springer, Heidelberg (2007)
3. Berdine, J., Calcagno, C., O'Hearn, P.W.: Symbolic execution with separation logic. In: Yi, K. (ed.) APLAS 2005. LNCS, vol. 3780, pp. 52–68. Springer, Heidelberg (2005)
4. Clavel, M., Durán, F., Eker, S., Meseguer, J., Lincoln, P., Martí-Oliet, N., Talcott, C.: All About Maude - A High-Performance Logical Framework. LNCS, vol. 4350. Springer, Heidelberg (2007)
5. Cohen, E., Moskal, M., Schulte, W., Tobies, S.: A practical verification methodology for concurrent programs. Tech. Rep. MSR-TR-2009-15, Microsoft Research (2009)
6. Distefano, D., O'Hearn, P.W., Yang, H.: A local shape analysis based on separation logic. In: Hermanns, H., Palsberg, J. (eds.) TACAS 2006. LNCS, vol. 3920, pp. 287–302. Springer, Heidelberg (2006)
7. Distefano, D., Parkinson, M.J.: jStar: Towards practical verification for Java. In: OOPSLA 2008, pp. 213–226 (2008)
8. Filliâtre, J.C., Marché, C.: The Why/Krakatoa/Caduceus platform for deductive program verification. In: Damm, W., Hermanns, H. (eds.) CAV 2007. LNCS, vol. 4590, pp. 173–177. Springer, Heidelberg (2007)
9. Flanagan, C., Leino, K.R.M., Lillibridge, M., Nelson, G., Saxe, J.B., Stata, R.: Extended static checking for Java. In: PLDI 2002, pp. 234–245 (2002)
10. Floyd, R.W.: Assigning meaning to programs. In: Schwartz, J.T. (ed.) Proceedings of Symposia in Applied Mathematics, vol. 19, pp. 19–32. AMS, Providence (1967)
11. Goguen, J., Malcolm, G.: Algebraic Semantics of Imperative Programs. MIT Press, Cambridge (1996)
12. Gordon, M., Collavizza, H.: Forward with Hoare. In: Reflections on the Work of C.A.R. Hoare. History of Computing Series. Springer, Heidelberg (2010)
13. Gries, D.: The Schorr-Waite graph marking algorithm. Acta Informatica 11, 223–232 (1979)
14. Harel, D., Kozen, D., Tiuryn, J.: Dynamic logic. In: Handbook of Philosophical Logic, pp. 497–604 (1984)
15. Hoare, C.A.R.: An axiomatic basis for computer programming. CACM 12(10), 576–580 (1969)
16. Hubert, T., Marché, C.: A case study of C source code verification: The Schorr-Waite algorithm. In: SEFM 2005, pp. 190–199 (2005)
17. Lahiri, S.K., Qadeer, S.: Verifying properties of well-founded linked lists. In: POPL 2006, pp. 115–126 (2006)
18. Lev-Ami, T., Immerman, N., Reps, T., Sagiv, M., Srivastava, S.: Simulating reachability using first-order logic. In: Nieuwenhuis, R. (ed.) CADE 2005. LNCS (LNAI), vol. 3632, pp. 99–115. Springer, Heidelberg (2005)
19. Loginov, A., Reps, T., Sagiv, M.: Automated verification of the Deutsch-Schorr-Waite tree-traversal algorithm. In: Yi, K. (ed.) SAS 2006. LNCS, vol. 4134, pp. 261–279. Springer, Heidelberg (2006)
20. Meseguer, J., Roşu, G.: The rewriting logic semantics project. Theoretical Computer Science 373(3), 213–237 (2007)
21. Møller, A., Schwartzbach, M.I.: The pointer assertion logic engine. SIGPLAN Not. 36(5), 221–231 (2001)
22. Nipkow, T.: Winskel is (almost) right: Towards a mechanized semantics. Formal Aspects of Computing 10(2), 171–186 (1998)
23. O'Hearn, P.W., Pym, D.J.: The logic of bunched implications. Bulletin of Symbolic Logic 5, 215–244 (1999)
24. Pavlovic, D., Smith, D.R.: Composition and refinement of behavioral specifications. In: ASE 2001, pp. 157–165 (2001)
25. Reynolds, J.C.: Separation logic: A logic for shared mutable data structures. In: LICS 2002, pp. 55–74 (2002)
26. Roşu, G., Schulte, W.: Matching logic - Extended report. Tech. Rep. UIUCDCS-R-2009-3026, University of Illinois (2009)
27. Roşu, G., Ellison, C., Schulte, W.: From rewriting logic executable semantics to matching logic program verification. Tech. Rep., UIUC (2009), http://hdl.handle.net/2142/13159
28. Roşu, G., Şerbănuţă, T.F.: An overview of the K semantic framework. Journal of Logic and Algebraic Programming 79(6), 397–434 (2010)
29. Sagiv, M., Reps, T., Wilhelm, R.: Parametric shape analysis via 3-valued logic. ACM Transactions on Programming Languages and Systems 24(3), 217–298 (2002)
Program Calculation in Coq

Julien Tesson¹, Hideki Hashimoto², Zhenjiang Hu³, Frédéric Loulergue¹, and Masato Takeichi²

¹ Université d'Orléans, LIFO, France
{julien.tesson,frederic.loulergue}@univ-orleans.fr
² The University of Tokyo, Japan
{hhashimoto,takeichi}@ipl.t.u-tokyo.ac.jp
³ National Institute of Informatics, Tokyo, Japan
[email protected]
Abstract. Program calculation, being a programming technique that derives programs from specifications by means of formula manipulation, is a challenging activity. It requires human insight and creativity, and needs systems that help humans focus on the clever parts of the derivation by automating the tedious ones and verifying the correctness of transformations. In contrast to many existing systems, we show in this paper that Coq, a popular theorem prover, provides a cheap way to implement a powerful system to support program calculation, which has not been recognized so far. We design and implement a set of tactics for the Coq proof assistant to help the user derive programs by program calculation and to write proofs in calculational form. The use of these tactics is demonstrated through program calculations in Coq based on the theory of lists.
1 Introduction
Programming is the art of designing efficient programs that meet their specifications. There are two approaches. The first approach consists of constructing a program and then proving that the program meets its specification. However, the verification of a (big) program is rather difficult and often neglected by many programmers in practice. The second approach is to construct a program and its correctness proof hand in hand, thereby making a posteriori program verification unnecessary. Program calculation [1, 2, 3], following the second approach, is a style of programming technique that derives programs from specifications by means of formula manipulation: the calculations that lead to the program are carried out in small steps so that each individual step is easily verified. More concretely, in program calculation, the specification can be a program that straightforwardly solves the problem, which is rewritten into a more and more efficient one, without changing its meaning, by application of calculation rules (theorems). If the program before transformation is correct, then the one after transformation is guaranteed to be correct, because the meaning of the program is preserved by the transformation.
Bird-Meertens Formalism (BMF) [1,4], proposed in the late 1980s, is a very useful program calculus for representing (functional) programs, manipulating programs through equational reasoning, and constructing calculation rules (theorems). Not only have many general theories such as the theory of lists [4] and the theory of trees [5] been proposed, but a lot of useful specific theories have also been developed for dynamic programming [6], parallelization [7], etc. Program calculation with BMF, however, is not mechanical: it is a challenging activity that requires creativity. As a simple example, consider that we want to develop a program that computes the maximum value of a list of numbers, and suppose that we have an insertion-sort program to sort a list. Then a straightforward solution to the problem is to sort the list in descending order and then take the first element: maximum = hd ◦ sort. However, this is not efficient because it takes at least the time of sort. Indeed, we can calculate a linear program from this solution by induction on the input list. If the input is a singleton list [a], we have

  maximum [a]
=   { def. of maximum }
  (hd ◦ sort) [a]
=   { def. of function composition }
  hd (sort [a])
=   { def. of sort }
  hd [a]
=   { def. of hd }
  a
Otherwise the input is a longer list of the form a :: x, whose head element is a and whose tail is x, and we have

  maximum (a :: x)
=   { def. of maximum }
  hd (sort (a :: x))
=   { def. of sort }
  hd (if a > hd (sort x) then a :: sort x else hd (sort x) :: insert a (tail (sort x)))
=   { by if law }
  if a > hd (sort x) then hd (a :: sort x) else hd (hd (sort x) :: insert a (tail (sort x)))
=   { def. of hd }
  if a > hd (sort x) then a else hd (sort x)
=   { def. of maximum }
  if a > maximum x then a else maximum x
=   { define x ↑ y = if x > y then x else y }
  a ↑ (maximum x)
Consequently we derive the following linear program:

  maximum [a]      = a
  maximum (a :: x) = a ↑ (maximum x)

In this derivation, we transform the program by equational reasoning, unfolding definitions of functions and applying some existing calculation laws (rules). Sometimes, we even need to develop new calculation rules to capture important transformation steps. This calls for an environment, and much effort has been devoted to the development of systems that support correct and productive program calculation. Examples are KIDS [8], MAG [9], Yicho [10], and so on. In general, this kind of environment should (1) support interactive development of programs by equational reasoning so that users can focus on their creative steps, (2) guarantee the correctness of the derived program by automatically verifying each calculation step, (3) support the development of new calculation rules so that mechanical derivation steps can be easily packaged, and (4) make the development process easy to maintain (i.e., the development process should be well documented). In fact, developing such a system from scratch is hard and time-consuming, and there are few systems that are really widely used.

The purpose of this paper is to show that Coq [11], a popular theorem prover, provides a cheap way to implement a powerful system for program calculation, which has not been recognized so far. Coq is an interactive proof assistant for the development of mathematical theories and formally certified software. It is based on a theory called the calculus of inductive constructions, a variant of type theory. Appendix A provides a very short introduction to Coq. Although little attention has been paid to using Coq for program calculation, Coq itself is indeed a very powerful tool for program development. First, we can use dependent types to describe specifications at different levels. For instance, we can write the specification for sort by

  sort : ∀x : list nat, ∃y : list nat, (sorted (y) ∧ permutation(x, y))

saying that for any x, a list of natural numbers, there exists a sorted list y that is a permutation of x; and we, who want to use sort, could write the following specification for maximum:

  maximum : ∃⊕ : nat → nat → nat, hd ◦ sort = foldr1 (⊕)

saying that the straightforward solution hd ◦ sort can be transformed into a foldr1 program. Note that foldr1 is a higher-order function similar to foldr, but applied to nonempty lists. Second, one can use Coq to describe rules for equational reasoning, again with dependent types. Here are two simple calculation rules: associativity of the append operation, and distributivity of the map functions.
  Lemma appAssoc : ∀ (A : Type) (l m n : list A), (l ++ m) ++ n = l ++ (m ++ n)
  Lemma mapDist : ∀ (A B C : Type) (f : B → C) (g : A → B), map f ◦ map g = map (f ◦ g)

Third, one can use Coq to prove theorems and extract programs from the proofs. For example, one can prove the specification for maximum in Coq. The proof script, however, is usually difficult to read compared with the calculation introduced previously. This is one of the main problems of using Coq for program calculation. In this paper, we report our first attempt at designing and implementing a Coq tactic library (of only about 200 lines of tactic code), with which one can perform correct program calculations in Coq, utilize all the theories in Coq for one's calculations, and develop new calculation rules, laws, and theorems. Section 2 shows an example of the calculation for maximum in Coq. A more interesting use of these tactics is demonstrated by an implementation of Bird's calculational theory of lists in Coq. All the code of the library and the applications (as well as the maximum example with a classic proof script) is available at the following web page: https://traclifo.univ-orleans.fr/SDPP.

The organization of the paper is as follows. First we discuss the design and implementation of a set of tactics for supporting program calculation and writing proofs in calculational form in Section 2. Then, we demonstrate an interesting application of calculation in Section 3. Finally, we discuss related work in Section 4 and conclude the paper in Section 5.
2 Coq Tactics for Program Calculation
This section starts with an overview of the tactics we provide and a demonstration of how they are used in calculation; it is followed by details about the implementation of the tactics in Coq.

2.1 Overview of Available Tactics
We provide a set of tactics to perform program calculation in Coq. They can be used in two ways: either we want to transform a program but do not know what the final result will be, or we want to prove a program equivalent to another program. Let us take the example of maximum¹ presented in the introduction. We first illustrate the case in which we do not know the final result, with the singleton list case; then we illustrate a calculation which just proves the equality between two known programs, with the case in which the list has the form a::x. In the case of a singleton list, we want a function f such that ∀a, maximum d [a] = f a; this is expressed by the type {f | ∀a d, maximum d [a] = f a} of maximum_singleton in what follows.

¹ maximum d l is defined by hd d (sort l), where hd d l is the function returning the head of the list l, or d if l is an empty list.
Definition maximum_singleton : {f | ∀ a d, maximum d [a] = f a}.
Begin.
  LHS
  ={ by def maximum }
    (hd d (sort [a])).
  ={ by def sort; simpl_if }
    (hd d [a]).
  ={ by def hd }
    a.
  [].
Defined.
The Begin. tactic starts the session by doing some technical preparation of the Coq system, which will be detailed later (Sect. 2.3). Then the user specifies with the LHS tactic that he wants to transform the left-hand side of the equation maximum d [a] = f a. If he had wanted to transform the right-hand side of the equation he would have used the RHS tactic, or the BOTH SIDE tactic to transform the whole equation. By using

  ={ by def maximum }  (hd d (sort [a])).
the user specifies that the left-hand side should be replaced by hd d (sort [a]), and that this transformation is correct by the definition of maximum. For the next transformation, the equality between the term hd d (sort [a]) and the term hd d [a] cannot be proved from the definition of sort alone: it also needs some properties of the relation "greater than" used in the definition of sort. The user-defined tactic simpl_if, which, in a sense, helps to determine the value of sort [a], is necessary. Actually, we can place here any tactic sequence necessary to prove the equality. Once we achieve a satisfying form for our program, we can use []. to end the transformation.

In case the list has the form a::x, we want to prove that maximum d (a::x) is equal to if a ?> (maximum a x) then a else maximum a x².

² In Coq, > is defined as a relation, thus we use ?>, which is a boolean function.

Lemma maximum_over_list : ∀ a x d,
  maximum d (a::x) = if a ?> (maximum a x) then a else maximum a x.
Begin.
  LHS
  ={ by def maximum }
    (hd d (sort (a::x))).
  ={ unfold sort }
    (hd d (let x' := sort x in
           if a ?> (hd a x') then a :: x'
           else (hd a x') :: (insert a (tail x')))).
  ={ rewrite (if_law (hd d)) }
    (let x' := sort x in
     if a ?> hd a x' then hd d (a :: x')
     else hd d (hd a x' :: insert a (tail x'))).
  ={ by def hd; simpl_if }
    (let x' := sort x in if a ?> hd a x' then a else hd a x').
  ={ by def maximum }
    (if a ?> (maximum a x) then a else maximum a x).
  [].
Qed.
As before, we use Begin. to start the session. Then, the left-hand side of the equation is transformed using the definitions of the programs and a program transformation law, if_law. This law states that for any function f, f applied to an if C then g1 else g2 statement is equal to the statement if C then f g1 else f g2. []. ends the proof if the two terms of the equation are convertible. To get the full code for the linear version of maximum, we have to manually pose a new lemma stating that linear_maximum³ is equal to maximum. This can be proved easily using the previous lemmas.

2.2 More Advanced Use
For the previous example we used the Coq equality, but we can also use the system with a user-defined equality. To do so we use the Setoid module from the standard library, which allows us to declare an equivalence relation. Let us take the extensional equivalence relation between functions, defined by

Definition ext_eq A B (f : A → B) (g : A → B) := ∀ a, f a = g a.
Theorems ext_eq_refl, ext_eq_sym and ext_eq_trans, which state that ext_eq is respectively reflexive, symmetric and transitive, can be proved easily. With Setoid we can declare extensional equality as a user-defined equality by the following code:

Add Parametric Relation (A B : Type) : (A → B) (@ext_eq A B)
  reflexivity proved by (@ext_eq_refl A B)
  symmetry proved by (@ext_eq_sym A B)
  transitivity proved by (@ext_eq_trans A B)
  as ExtEq.
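For reference, here is one way the three equivalence lemmas could be discharged: a self-contained sketch in ASCII syntax; the library's actual scripts may differ.

Definition ext_eq {A B : Type} (f g : A -> B) : Prop := forall a, f a = g a.

Lemma ext_eq_refl {A B} (f : A -> B) : ext_eq f f.
Proof. intro a; reflexivity. Qed.

Lemma ext_eq_sym {A B} (f g : A -> B) : ext_eq f g -> ext_eq g f.
Proof. intros H a; symmetry; apply H. Qed.

Lemma ext_eq_trans {A B} (f g h : A -> B) :
  ext_eq f g -> ext_eq g h -> ext_eq f h.
Proof. intros H1 H2 a; rewrite H1; apply H2. Qed.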
Afterwards, we will denote @ext_eq A B f1 f2 by f1 == f2, the arguments A and B being implicit. Once we have declared our relation, it can automatically be used by tactics like reflexivity, symmetry, transitivity or rewrite, which are normally used with the Coq equality. However, if we know that f1 == f2 and, for some F, we want to rewrite F f1 into F f2 with the rewrite tactic, we need to declare F as a morphism for ==. For example, the function composition

³ linear_maximum d l := match l with
    | nil ⇒ d
    | a::l' ⇒ if a ?> (linear_maximum a l') then a else linear_maximum a l'
    end.
Definition comp (A B C : Type) (f : B → C) (g : A → B) := fun (x : A) ⇒ f (g x),
is a morphism for == with respect to each of its two arguments f and g. This means that

  ∀ f1 f2 g, f1 == f2 → comp f1 g == comp f2 g

and that

  ∀ f1 f2 g, f1 == f2 → comp g f1 == comp g f2.
This is implemented by:

Add Parametric Morphism A B C : (@comp A B C)
  with signature (@ext_eq B C) ==> eq ==> (@ext_eq A C)
  as compMorphism.
Proof. ... Qed.

Add Parametric Morphism A B C : (@comp A B C)
  with signature eq ==> (@ext_eq A B) ==> (@ext_eq A C)
  as compMorphism2.
Proof. ... Qed.
And here is an example of use:

Lemma assoc_rewriting : ∀ (A : Type) (f : A → A) (g : A → A),
  ((map f) :o: (map f) :o: (map g) :o: (map g)) == (map (f :o: f) :o: map (g :o: g)).
Begin.
  LHS
  ={ rewrite comp_assoc }
    ((map f :o: map f) :o: map g :o: map g).
  ={ rewrite (map_map_fusion f f) }
    (map (f :o: f) :o: map g :o: map g).
  ={ rewrite (map_map_fusion g g) }
    (map (f :o: f) :o: map (g :o: g)).
  [].
Qed.
The :o: is a notation for function composition; comp_assoc states that function composition is associative, and map_map_fusion states that map f :o: map g == map (f :o: g).
We can see here that, once the relation and the morphisms are declared, we can use the tactics as if it were the Leibniz equality.
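To give an idea of what such fusion lemmas look like when proved directly against the extensional equality, here is a minimal, self-contained sketch in ASCII syntax; the names follow the paper, but the proof script is only illustrative:

Require Import List.
Import ListNotations.

Definition ext_eq {A B} (f g : A -> B) : Prop := forall a, f a = g a.
Infix "==" := ext_eq (at level 70).

Definition comp {A B C} (f : B -> C) (g : A -> B) := fun x => f (g x).
Infix ":o:" := comp (at level 40, left associativity).

Lemma map_map_fusion {A B C} (f : B -> C) (g : A -> B) :
  map f :o: map g == map (f :o: g).
Proof.
  intro l; induction l as [| x l IH]; unfold comp in *; simpl.
  - reflexivity.
  - now rewrite IH.
Qed.

Note that this proof needs no axiom: it works pointwise on the argument list, which is exactly what the setoid equality == buys over Leibniz equality on functions.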
2.3 Behind the Scene
The Coq system allows the definition of syntax extensions for tactics, using Tactic Notation. These extensions associate an interleaving of new syntactic elements and formal parameters (tactics or terms) with a potentially complex sequence of tactics. For example, ={ta} t1. is defined by

Tactic Notation (at level 2) "=" "{" tactic(t) "}" constr(e) := ... .
where tactic(t) specifies that the formal parameter t is a tactic and constr(e) specifies that the formal parameter e has to be interpreted as a term. Our notation ={ta} t1. asserts that the term t1 is equivalent to the goal (or to part of the goal) and proves it using the tactic ta (followed by reflexivity). Then, the goal is rewritten according to the newly proved equivalence.

The Begin. tactic introduces all premises and then inspects the goal. If the goal is existentially quantified, we use the eexists tactic, which allows us to delay the instantiation of existentially quantified variables. This delaying permits us to transform the goal until we know what value this variable should have. The LHS and RHS tactics first verify that the goal has the form R a b and that R is a registered equivalence relation. Then they memorize a state used by our notations to know on which part of the goal they must work. As the term on the right- or left-hand side of the equivalence can be fully contained in the other side of the equation, we have to protect it so that it remains untouched when rewriting is done by the transformation tactics on the opposite side. To protect it, we use the tactic set, which replaces a given term (here the side we want to protect) by a variable and adds a binding of this variable to the given term in the context.

When registering the relation ext_eq as ExtEq, ExtEq is proved to be an instance of the type class Equivalence ext_eq. Type classes are a mechanism for function overloading in functional languages and are widely used in the module Setoid. They are described in detail in [12]; here, we only need to know that if there is a declared instance of the class Equivalence with parameter R, then Equivalence R can be proved just by using the tactic typeclasses eauto. We use this to check that the goal has the right form, so that we can be sure that our transformations produce an equivalent program.

Memorization mechanism. As seen above, our tactics need to memorize information, but Coq does not provide a mechanism to memorize information between tactic applications. So we introduce a dependent type which carries the information we want to save. This type memo, with a unique constructor mem, is defined by Inductive memo (s : state) : Prop := mem : memo s., where state is an inductive type defining the information we want to memorize. The memorization of a state s can now be done by posing mem : memo s. We define a shortcut

Tactic Notation "memorize" constr(s) := pose (mem : memo s).
which abstracts the memorization mechanism. To access the memorized information, we use pattern matching over the hypotheses. The main limitation of this mechanism is that we have no way to memorize information from one (sub-)goal to another. Indeed, our mechanism memorizes information by adding a variable to the context, but the lifetime of interactively added variables is limited to the current (sub-)goal. Until now this limitation has not been problematic for our system, but if we want to overcome it later, we will have to develop our system as a Coq plugin.
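As a purely hypothetical illustration of this mechanism (the state type below is our invention; the library's actual state is different), memorizing and then retrieving a state could look like this:

Inductive state : Type := lhs | rhs | both.  (* hypothetical contents *)
Inductive memo (s : state) : Prop := mem : memo s.

(* Retrieve the memorized state by pattern matching on the context. *)
Ltac recall :=
  match goal with
  | _ : memo ?s |- _ => idtac "memorized state:" s
  end.

Goal True.
Proof. pose proof (mem lhs). recall. exact I. Qed.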
3 Application: BMF in Coq
In this section, we demonstrate the power and usefulness of our Coq tactic library for program calculation through a complete encoding of the lecture notes on the theory of lists (i.e., the Bird-Meertens Formalism for program calculation, or BMF for short) [4], so that all the definitions, theorems, and calculations in the lecture notes can be checked by Coq and guaranteed to be correct. Our encoding⁴, about 4000 lines of Coq code (using our library), contains about 70 definitions (functions and properties) and about 200 lemmas and theorems.

In our encoding, we need to pay much attention when doing calculation in Coq. First, we have to translate partial functions into total ones, because Coq can only deal with total functions. Second, we should explore different views of lists, being capable of treating lists as snoc lists or join lists, while implementing them upon the standard cons lists. Third, we should be able to encode restrictions on the functions that are to be manipulated. In the following, we give some simple examples to show the flavor of our encoding, before explaining how to deal with the above issues.

3.1 Examples
To give a flavor of how we encode BMF in Coq, let us see some examples. Using Coq, we do not need to introduce a new abstract syntax to define functions. Rather, we introduce new notations or define functions directly in Coq syntax. For example, the following defines the filter function (and its notation) for keeping the list elements that satisfy a condition, and the concat function for flattening a list of lists.

Fixpoint filter (A : Type) (p : A → bool) (l : list A) : list A :=
  match l with
  | nil ⇒ nil
  | x :: l ⇒ if p x then x :: (filter l) else filter l
  end.

⁴ The code is available at our project page.
Fixpoint concat (A : Type) (xs : list (list A)) : list A :=
  match xs with
  | nil ⇒ nil
  | x :: xs' ⇒ app x (concat xs')
  end.
Notation "p <|" := (filter p) (at level 20) : BMF_scope.
And the following shows how to prove the filter promotion rule [4] using our Coq tactics.

Theorem filter_promotion : p <| :o: @concat A = @concat A :o: p <| ∗.
Proof.
  LHS
  ={ rewrite (filter_mapreduce) }
    (++ / :o: f ∗ :o: @concat A).
  ={ rewrite map_promotion }
    (++ / :o: @concat (list A) :o: f∗ ∗).
  ={ rewrite comp_assoc }
    ((++ / :o: @concat (list A)) :o: f∗ ∗).
  ={ rewrite reduce_promotion }
    (++ / :o: (++ /) ∗ :o: f∗ ∗).
  ={ rewrite concat_reduce }
    (@concat A :o: (++ /) ∗ :o: f∗ ∗).
  ={ rewrite map_distr_comp }
    (@concat A :o: (++ / :o: f ∗) ∗).
  ={ rewrite filter_mapreduce }
    (@concat A :o: (p <|) ∗).
  [].
Qed.
The derivation starts with the left-hand side of the equation to be proved, and repeatedly rewrites it with rules until it is equivalent to the right-hand side of the equation. For instance, the first derivation step rewrites the filter p <| into a composition of a map f ∗ and a reduce (a specific fold) ++ / using the rule filter_mapreduce. This derivation relies on lemmas using the functional extensionality axiom; therefore we can use Leibniz equality. In the following, we discuss how we deal with partial functions, different views of lists, and properties imposed on functions.

3.2 Implementation of Partial Functions
In Coq, all functions must be total and must terminate in order to keep the consistency of the proofs. However, there are many cases in which we want to use partial functions in programming. For example, the function that takes a list and returns its first element is a partial function, because the function cannot return any value if the list is empty. To implement this kind of function as a total function, we may add a "default value", so that if the
input is out of the domain, then the function will return the default value. For example, the "head" function is implemented as follows:

Section head.
  Variable A : Type.
  Fixpoint head (d : A) (x : list A) : A :=
    match x with
    | nil ⇒ d
    | a :: x' ⇒ a
    end.
End head.
Actually, this definition is the same as the hd one in the standard Coq library. Compared to the "paper" or functional programming language versions, we have to deal with this extra parameter. There are several other ways to model a partially defined function in Coq. Another approach is to define a function that returns an optional value, i.e., a value of the type:

Inductive option (A : Set) : Set :=
| Some : A → option A
| None : option A.
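With such an option type, a total head needs no default value; a sketch of this second approach, essentially mirroring the standard library's hd_error, would be:

Definition head_option (A : Set) (l : list A) : option A :=
  match l with
  | nil => None
  | a :: _ => Some a
  end.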
A third possibility is to take as an extra argument a proof that the list is non-empty. Compared to the first solution, this parameter would be removed during code extraction, leading to a more usual function in a functional programming language. However, in some cases having proofs as arguments is difficult to handle without the axiom of proof irrelevance. A fourth solution is to create a new type of non-empty lists, an element of this type being a list together with a proof that this list is not nil. Here again, in some cases the development may become difficult without the axiom of proof irrelevance. In the example we chose the first solution.

3.3 Exploiting Different Views of a Data Type
There are different views of lists in BMF. We usually define a list as an empty list or a constructor adding an element to a list (cons a l). This captures the view that a list is constructed by adding elements successively to the front of a list (we will call this view “cons list”). However, there are other views of lists. For example, the “snoc list” is a view that a list is constructed by adding an element one by one to the end of an empty list, and the “join list” is another view that a list is constructed by concatenation of two shorter lists. Adopting different views would make it easy and natural to define functions or to prove theorems on lists, so we exploit these views based on the cons list by implementing functions (or properties) on snoc lists or join lists based on those on cons lists as follows.
snoc_ind : ∀ (A : Type) (P : list A → Prop),
  P nil →
  (∀ (x : A) (l : list A), P l → P (l ++ [x])) →
  ∀ l : list A, P l
join_induction : ∀ (A : Type) (P : list A → Prop),
  P nil →
  (∀ a : A, P ([a])) →
  (∀ x y : list A, P x ∧ P y → P (x ++ y)) →
  ∀ x : list A, P x
So, we can make use of induction principles on snoc lists and join lists. To be concrete, let us explain the theorem that reflects the definition on the snoc list. Consider the following foldl function defined on cons lists.

Section foldl.
  Variables (A B : Type).
  Fixpoint foldl (op : B → A → B) (e : B) (x : list A) : B :=
    match x with
    | nil ⇒ e
    | a :: x' ⇒ foldl op (op e a) x'
    end.
End foldl.
In fact, this foldl can be defined more naturally as a homomorphic map from snoc lists, as follows.

Section foldl_snoc.
  Variables A B : Type.
  Fixpoint foldl_snoc (op : B → A → B) (e : B) (x : slist A) : B :=
    match x with
    | snil ⇒ e
    | snoc x' a ⇒ op (foldl_snoc op e x') a
    end.
End foldl_snoc.
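The snoc-list type slist, with constructors snil and snoc, is not shown in this excerpt; presumably it is defined along the following lines (mirroring the pattern match above):

Inductive slist (A : Type) : Type :=
| snil : slist A
| snoc : slist A -> A -> slist A.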
Actually, many proofs of theorems in BMF follow from this fact. However, it is difficult to define such a foldl_snoc function on cons lists, as one cannot match a cons list against x ++ [a] (pattern matching depends on the constructors of the inductive type). To resolve this problem, we introduce the following theorem instead:

foldl_rev_char : ∀ (A B : Type) (op : B → A → B) (e : B) (f : list A → B),
  f nil = e →
  (∀ (a : A) (x : list A), f (x ++ [a]) = op (f x) a) →
  f = foldl op e
This theorem means that it is sufficient to prove "f nil = e" and "f (x ++ [a]) = op (f x) a for all x and a" in order to prove "f = foldl op e".
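As a small usage sketch of this characterization (our example, with foldl restated and foldl_rev_char assumed as a hypothesis), one can derive that length is a left fold:

Require Import List Arith.
Import ListNotations.

Fixpoint foldl {A B} (op : B -> A -> B) (e : B) (x : list A) : B :=
  match x with
  | [] => e
  | a :: x' => foldl op (op e a) x'
  end.

Section usage.
  (* Assumed: the characterization theorem proved in the development. *)
  Hypothesis foldl_rev_char :
    forall (A B : Type) (op : B -> A -> B) (e : B) (f : list A -> B),
      f nil = e ->
      (forall (a : A) (x : list A), f (x ++ [a]) = op (f x) a) ->
      f = foldl op e.

  (* length is the left fold that ignores elements and counts. *)
  Lemma length_as_foldl (A : Type) :
    @length A = foldl (fun n (_ : A) => S n) 0.
  Proof.
    apply foldl_rev_char.
    - reflexivity.
    - intros a x; rewrite app_length; simpl.
      now rewrite Nat.add_1_r.
  Qed.
End usage.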
3.4 Imposing Constraints on a Higher-Order Function
In many cases, we have to define some computation patterns (higher-order functions) parametrized by some functions (operators) that satisfy some property. For example, the reduce operator behaves like foldl op e, except that it requires op and e to form a monoid, in the sense that op is associative and e is the identity unit of op. To impose such monoid constraints on the operators of reduce, we define it as follows.

Section reduce_mono.
  Variables (A : Type) (op : A → A → A) (e : A).
  Definition reduce_mono (m : monoid op e) : list A → A := foldl op e.
End reduce_mono.
This reduce_mono exploits the fact that, in Coq, a proof is a term. Now, if the associative operator does not have an identity unit in general, we can define the following reduce1 instead, by changing the constraint of reduce_mono.

Section reduce1.
  Variables (A : Type) (op : A → A → A).
  Definition reduce1 (a : assoc op) (d : A) (x : list A) : A :=
    match x with
    | nil ⇒ d
    | a' :: x ⇒ foldl op a' x
    end.
End reduce1.
This reduce1 takes a term of type assoc op, where assoc op is a dependent type stating that op is associative. Because it is natural to view reduce1 as a partial function defined on non-empty lists, we can implement it with a "default value", as mentioned in Section 3.2. In other words, if the list is nil, reduce1 returns the default value; otherwise reduce1 is defined in terms of foldl.
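The dependent types assoc and monoid are not spelled out in this excerpt; hypothetically, they could be defined as follows (the development's actual definitions may differ):

Definition assoc {A : Type} (op : A -> A -> A) : Prop :=
  forall x y z, op x (op y z) = op (op x y) z.

Record monoid {A : Type} (op : A -> A -> A) (e : A) : Prop := {
  monoid_assoc : assoc op;
  monoid_left_unit : forall x, op e x = x;
  monoid_right_unit : forall x, op x e = x
}.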
4 Related Work
Many systems [8, 9, 10, 13] have been developed for supporting program calculation. Unlike the existing systems, our system is implemented upon the popular and powerful theorem prover Coq, with two important features. First, our implementation is lightweight, with only 200 lines of tactic code in Coq. Second, our system allows the rich set of theories in Coq to be used in program calculation for free.
Our work is closely related to AoPA [14], a library supporting the encoding of relational derivations in the dependently typed programming language Agda. This follows the line of research on bridging the gap between dependent types and practical programming [15, 16, 17]. With AoPA, a program is coupled with an algebraic derivation whose correctness is guaranteed by the type system. However, Agda does not yet have a strong auto-proving mechanism, which forces users to write complicated terms to encode rewriting steps in a calculation. Therefore, we turned to Coq. We have exploited Ltac, a language for programming new tactics for the easy construction of proofs in Coq, and have seen many advantages of building calculational systems using Coq. First, Coq tactics can be used effectively for automatic proving and automatic rewriting, so that tedious calculation can be hidden within tactics. Second, new tactics can coexist with the existing tactics, and a lot of useful theories of Coq are ready to use for our calculations. Third, the system on Coq can be used in a trial-and-error style thanks to Coq's interaction mechanism.

A mechanism to describe equational reasoning is available in C-Zar (Radboud University Nijmegen) [18]. This language is a declarative proof language for the Coq proof assistant that can be used instead of the Ltac language to build proofs. It offers a notation for reasoning on equations, but it is limited to Leibniz equality and does not allow transforming existentially bound terms.
5 Conclusion and Future Work
In this paper, we propose a Coq library to support interactive program calculation in Coq. As far as we are aware, this is the first calculation system built upon Coq. Our experience of using the library to encode Bird's theory of lists in Coq shows the usefulness and power of the library. Our future work includes adding theorems about program calculation, and tactics to automate the proofs of correctness of program transformations. We will consider polytypic calculation [19]. Polytypic programming and proofs are beginning to have some support within Coq [20, 21]. We will also extend the library to program refinement: indeed, our system currently imposes a restriction on the relation used in transformations by forcing it to be an equivalence. In the future, we could add the possibility to explicitly use relations that are not equivalences but refinement relations.
References

1. Bird, R.: Constructive functional programming. In: STOP Summer School on Constructive Algorithmics, Abeland (September 1989)
2. Kaldewaij, A.: Programming: The Derivation of Algorithms. Prentice-Hall, Inc., Upper Saddle River (1990)
3. Bird, R., de Moor, O.: Algebra of Programming. Prentice-Hall, Englewood Cliffs (1996)
4. Bird, R.: An introduction to the theory of lists. In: Broy, M. (ed.) Logic of Programming and Calculi of Discrete Design, pp. 5–42. Springer, Heidelberg (1987)
5. Gibbons, J.: Algebras for Tree Algorithms. D.Phil. thesis, Oxford University (1991); also available as Technical Monograph PRG-94
6. de Moor, O.: Categories, relations and dynamic programming. Ph.D. thesis, Programming Research Group, Oxford University (1992); Technical Monograph PRG-98
7. Hu, Z., Takeichi, M., Chin, W.N.: Parallelization in calculational forms. In: POPL 1998: Proceedings of the 25th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 316–328. ACM Press, New York (1998)
8. Smith, D.R.: KIDS — a knowledge-based software development system. In: Lowry, M.R., McCartney, R.D. (eds.) Automating Software Design, Menlo Park, CA, pp. 483–514. AAAI Press / The MIT Press (1991)
9. de Moor, O., Sittampalam, G.: Generic program transformation. In: Swierstra, S.D., Oliveira, J.N. (eds.) AFP 1998. LNCS, vol. 1608. Springer, Heidelberg (1999)
10. Yokoyama, T., Hu, Z., Takeichi, M.: Yicho: A system for programming program calculations. Technical Report METR 2002-07, Department of Mathematical Engineering, University of Tokyo (June 2002)
11. Bertot, Y., Castéran, P.: Interactive Theorem Proving and Program Development – Coq'Art: The Calculus of Inductive Constructions. Springer, Heidelberg (2004)
12. Sozeau, M., Oury, N.: First-class type classes. In: Mohamed, O.A., Muñoz, C., Tahar, S. (eds.) TPHOLs 2008. LNCS, vol. 5170, pp. 278–293. Springer, Heidelberg (2008)
13. Visser, E.: A survey of strategies in rule-based program transformation systems. J. Symb. Comput. 40(1), 831–873 (2005)
14. Mu, S.-C., Ko, H.-S., Jansson, P.: Algebra of programming in Agda: Dependent types for relational program derivation. J. Funct. Program. 19(5), 545–579 (2009)
15. Augustsson, L.: Cayenne - a language with dependent types. In: International Conference on Functional Programming, pp. 239–250. ACM Press, New York (1998)
16. McBride, C.: Epigram: Practical programming with dependent types. In: Vene, V., Uustalu, T. (eds.) AFP 2004. LNCS, vol. 3622, pp. 130–170. Springer, Heidelberg (2005)
17. Norell, U.: Dependently typed programming in Agda. In: Kennedy, A., Ahmed, A. (eds.) TLDI, pp. 1–2. ACM, New York (2009)
18. Corbineau, P.: A declarative language for the Coq proof assistant. In: Miculan, M., Scagnetto, I., Honsell, F. (eds.) TYPES 2007. LNCS, vol. 4941, pp. 69–84. Springer, Heidelberg (2008)
19. Meertens, L.: Calculate polytypically! In: Kuchen, H., Swierstra, S.D. (eds.) PLILP 1996. LNCS, vol. 1140, pp. 1–16. Springer, Heidelberg (1996)
20. Verbruggen, W., de Vries, E., Hughes, A.: Polytypic programming in Coq. In: WGP 2008: Proceedings of the ACM SIGPLAN Workshop on Generic Programming, pp. 49–60. ACM Press, New York (2008)
21. Verbruggen, W., de Vries, E., Hughes, A.: Polytypic properties and proofs in Coq. In: WGP 2009: Proceedings of the 2009 ACM SIGPLAN Workshop on Generic Programming, pp. 1–12. ACM Press, New York (2009)
22. The Coq Development Team: The Coq Proof Assistant, http://coq.inria.fr
23. Bertot, Y.: Coq in a hurry (2006), http://hal.inria.fr/inria-00001173
A  A Very Short Introduction to Coq
The Coq proof assistant [22] is based on the calculus of inductive constructions. This calculus is a higher-order typed λ-calculus. Theorems are types and their
proofs are terms of the calculus. The Coq system helps the user build the proof terms and offers a language of tactics to do so. We quickly illustrate all these notions on a short example:

Set Implicit Arguments.

Inductive list (A : Set) : Set :=
| nil : list A
| cons : A → list A → list A.

Implicit Arguments nil [A].
The Set Implicit Arguments command indicates that we let the Coq system infer as many arguments as possible of the terms we define. If in some context the system cannot infer the arguments, the user has to specify them. In the remainder of this short introduction, the Coq system always infers them.

Fixpoint foldr1 (A : Set) (default : A) (f : A → A → A) (l : list A) {struct l} : A :=
  match l with
  | nil ⇒ default
  | cons a nil ⇒ a
  | cons h t ⇒ f h (foldr1 h f t)
  end.

Theorem foldr1_derivation : ∀ (A : Set) (f : A → list A → A),
  (∀ default, f default nil = default) →
  (∀ default a, f default (cons a nil) = a) →
  (exists g, ∀ default a l, f default (cons a l) = g a (f a l)) →
  exists g, ∀ default l, f default l = foldr1 default g l.
Proof.
  intros A f H H0 H1.
  destruct H1 as [g Hg].
  exists g. intros d l. generalize dependent d.
  induction l.
  (* Case nil *)
  assumption.
  (* Case cons *)
  intro d. rewrite Hg. destruct l.
  (* Case nil *)
  rewrite <- (Hg a). apply H0.
  (* Case cons *)
  rewrite IHl. reflexivity.
Qed.
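As a quick sanity check of foldr1 (using the list type and definitions above):

Eval compute in foldr1 0 plus (cons 1 (cons 2 (cons 3 nil))).
(* = 6 : nat *)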
In this example, we first define a new inductive type, the type of lists. This type is polymorphic since it takes as an argument a type A, the type of the elements of the list. A has type Set, which means it belongs to the computational realm of the Coq language, similar to ML data structures. The definition of this new type introduces two new functions, the constructors of this type: nil and cons. Then we define a recursive function, foldr1. In this definition we specify the decreasing argument (here the fourth argument), as all functions must be
terminating in Coq, and we give its type (after ":") as well as a term (after ":=") of this type. We then define a theorem named foldr1_derivation stating that:

∀ (A : Set) (f : A → list A → A),
  (∀ default, f default nil = default) →
  (∀ default a, f default (cons a nil) = a) →
  (exists g, ∀ default a l, f default (cons a l) = g a (f a l)) →
  exists g, ∀ default l, f default l = foldr1 default g l.
If we check (using the Check command of Coq) the type of this expression, we would obtain Prop, meaning that this expression belongs to the logical realm. To define foldr1_derivation we should also provide a term of this type, that is, a proof of this theorem. We could write such a term directly, but it is usually complicated, and Coq provides a language of tactics to help the user build a proof term. We refer to [23] for a quick yet longer introduction to Coq, including the presentation of basic tactics.

Mixing logical and computational parts is possible in Coq. For example, a function of type A → B with a precondition P and a postcondition Q corresponds to a constructive proof of type ∀ x : A, (P x) → exists y : B, (Q x y). This mix can be expressed in Coq by using the inductive type sig:

Inductive sig (A : Set) (Q : A → Prop) : Set :=
| exist : ∀ (x : A), (Q x) → sig A Q.
It could also be written, using syntactic sugar, as {x : A | (Q x)}. This feature is used in the definition of the function head. The specification of this function is ∀ (A : Set) (l : list A), l <> nil → { a : A | exists t, l = cons a t }, and we build it using tactics:

Definition head (A : Set) (l : list A) :
  l <> nil → { a : A | exists t, l = cons a t }.
Proof.
  intros.
  destruct l as [ | h t].
  (* case nil *)
  elim H. reflexivity.
  (* case cons *)
  exists h. exists t. reflexivity.
Defined.
The command Extraction head would extract the computational part of the definition of head. We could obtain a certified implementation of the head function (here in Objective Caml):

(** val head : 'a1 list -> 'a1 **)
let head = function
| Nil -> assert false  (* absurd case *)
| Cons (a, l0) -> a
Cooperation of Algebraic Constraint Domains in Higher-Order Functional and Logic Programming

Rafael del Vado Vírseda

Dpto. de Sistemas Informáticos y Computación
Universidad Complutense de Madrid
[email protected]
Abstract. This paper presents a theoretical framework for the integration of the cooperative constraint solving of several algebraic domains into higher-order functional and logic programming on λ-abstractions, using an instance of a generic Constraint Functional Logic Programming (CFLP) scheme over a so-called higher-order coordination domain. We provide this framework as a powerful computational model for the higher-order cooperation of algebraic constraint domains over real numbers and integers, which has been useful in practical applications involving the hybrid combination of its components, so that more declarative and efficient solutions can be promoted. Our proposed computational model has been proved sound and complete with respect to the declarative semantics provided by the CFLP scheme, and enriched with new mechanisms for modeling the intended cooperation among the algebraic domains and a novel higher-order constraint domain equipped with a sound and complete constraint solver for solving higher-order equations. We argue the applicability of our approach by describing a prototype implementation on top of the constraint functional logic system TOY.
1 Introduction
The effort to identify suitable theoretical frameworks for higher-order functional logic programming has grown in recent years [2,8,11,12,14]. The high number of approaches in this area and their different scopes and objectives indicate the high potential of such a paradigm in modeling complex real-world problems [11]. Functional logic programming is the result of integrating two of the most successful declarative programming styles, namely functional and logic programming, in a way that captures the main advantages of both [3]. Whereas higher-order programming is standard in functional programming, logic programming is in large part still tied to the first-order world. Only a few higher-order logic programming languages, most notably λ-Prolog [9], use higher-order logic for logic programming and have shown its practical utility, although the definition of
The work of this author has been partially supported by the Spanish projects TIN2005-09207-C03-03 (MERIT-FORMS), TIN2008-06622-C03-01 (FASTSTAMP), S-0505/TIC/0407 (PROMESAS-CAM), S2009/TIC-1465 (PROMETIDOS), and UCM-BSCH-GR58/08-910502 (GPD-UCM).
evaluable functions is not supported. Moreover, higher-order constructs such as function variables and λ-abstractions of the form λx. e (this syntax stands for an anonymous function which, when given any actual parameter in place of the formal parameter x, returns the value resulting from the evaluation of the body expression e) are widely used in functional programming and higher-order logic programming languages, where λ-terms are used as data structures to obtain more of the expressivity of higher-order functional programming.

Within this research area, we have proposed in [13,14] a complete theoretical framework for higher-order functional logic programming, as an extension of a first-order rewriting logic to the setting of the simply typed lambda calculus, where programs are presented by Conditional Pattern Rewrite Systems (CPRS for short) on lambda abstractions. For a first impression of our higher-order programming framework, the following CPRS illustrates the syntax of patterns on lambda abstractions to define the classical higher-order function map for the application of a given function to a list of elements.

  map (λu. F (u), [ ])        = [ ]
  map (λu. F (u), [ X | Xs ]) = [ F (X) | map (λu. F (u), Xs) ]

The first contribution of this paper is to present a theoretical framework for the integration of higher-order functional logic programming with constraint solving, extending our programming language with the capacity of solving constraints over a given algebraic constraint domain. The term constraint is intuitively defined as a relationship required to hold among certain entities such as variables and values (e.g., X + Y ≤ 0). We can take for instance the set of integers or the set of real numbers with addition, multiplication, equality, and perhaps other functions and predicates. Among the formalisms for the integration of constraints in functional logic programming, we use in this work the Constraint Functional Logic Programming scheme CFLP(D) [6], which supports a powerful combination of functional and constraint logic programming over D and can be instantiated by any constraint domain D given as a parameter, providing specific data values, constraints based on specific primitive operations, and a dedicated constraint solver. There are different instances of the scheme for various choices of D, providing a declarative framework for any chosen domain D. Useful constraint domains include the Herbrand domain H, which supplies equality and disequality constraints over symbolic terms; the algebraic domain R, which supplies arithmetic constraints over real numbers; and the algebraic domain FD, which supplies arithmetic and finite domain constraints over integers.

As a concrete example of a CPRS integrating higher-order functional logic programming with algebraic constraints in R, we can consider the following variant of a classical higher-order function diff to compute the differential of a function f at some numeric value X, under some arithmetic constraints over real numbers in the conditional part of the program rules.

  diff :: (real → real) → real → real
  diff (λu. u, X)           = 1
  diff (λu. sin (F (u)), X) = cos (F (X)) ∗ diff (λu. F (u), X)  ⇐  π/4 ≤ F (X) ≤ π/2
  diff (λu. ln (F (u)), X)  = diff (λu. F (u), X) / F (X)        ⇐  F (X) ≠ 0

In contrast to first-order programming, we can easily formalize functions to be differentiated, or compute the inverse operation of differentiation (integration), by means of narrowing [13] as a suitable operational semantics, a transformation rule which combines the basic execution mechanism of functional and logic languages, namely rewriting with unification. For instance, we can compute by narrowing the substitution {F → λu. sin (u)} as a solution of the goal λx. diff (λu. ln (F (u)), x) == λx. cos (x)/sin (x), because the constraint λx. (π/4 ≤ x ≤ π/2 → sin (x) ≠ 0) is evaluated to true by an R-constraint solver.

Practical applications in higher-order functional logic programming, however, often involve more than one "pure" domain (i.e., H, R, FD, etc.), and sometimes problem solutions have to be artificially adapted to fit a particular choice of domain and solver. The cooperative combination of constraint domains and solvers has evolved during the last decade as a relevant research issue that is raising increasing interest in the constraint programming community. An important idea emerging from the research in this area is that of a "hybrid" constraint domain (e.g., H ⊕ R ⊕ FD [1]), built as a combination of simpler "pure" domains and designed to support the cooperation of its components, so that more declarative and efficient solutions for practical problems can be promoted.
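To see these rules at work (an illustrative unfolding of the goal just mentioned, derived directly from the three rules above): diff (λu. ln (sin (u)), x) rewrites by the ln-rule (instantiating F by λu. sin (u)) to diff (λu. sin (u), x)/sin (x), then by the sin-rule (instantiating F by λu. u) to (cos (x) ∗ diff (λu. u, x))/sin (x), and finally by the first rule to cos (x)/sin (x), subject to the accumulated constraints π/4 ≤ x ≤ π/2 and sin (x) ≠ 0.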
2
Higher-Order Algebraic Constraint Cooperation
The second contribution of this work is to present a formal framework for the cooperation of the algebraic constraint domains FD and R in an improved version of the CFLP(D) scheme [6], now useful for higher-order functional and logic programming on lambda abstractions. As a result, we provide a powerful theoretical framework for higher-order constraint functional logic programming with lambda abstractions and decidable higher-order unification in a new higher-order constraint domain λ, which leads to greater expressivity. As a motivation for the rest of the paper, we present in this section a couple of examples of CPRS programs involving the cooperation of the algebraic constraint domains FD and R, to illustrate the different cooperation mechanisms that are supported by our theoretical framework, as well as the benefits resulting from the cooperation in the higher-order functional logic programming setting. As a first simple example, we consider the following CPRS (adapted from [1] to the higher-order setting on λ-abstractions) to solve the problem of searching for a two-dimensional point lying in the intersection of a discrete grid and a continuous region.

bothIn :: (real → real → real) → int → int → bool
bothIn (λu, v. F (u, v), X, Y ) = true ⇐ X #== RX, Y #== RY, F (RX, RY) ≤ 0, domain [X, Y ] 0 N, labeling [ ] [X, Y ]

The higher-order function bothIn is intended to check whether a given discrete point (X, Y) belongs to the intersection of the continuous region given by the R-constraint F (RX, RY) ≤ 0 and the discrete grid given by the FD-constraints domain [X, Y ] 0 N, labeling [ ] [X, Y ], which ensure that the variables X and Y are bound to integer values in the interval [0..N]. In order to model the intended cooperation and communication between the constraint domains FD and R, we use a special kind of hybrid constraints called bridges as a key tool for communicating constraints between different algebraic constraint domains. More precisely, the two communicating constraints X #== RX and Y #== RY ensure that the discrete point (X, Y) and the continuous point (RX, RY) are equivalent. Different goals can be posed and solved using the small program just described. For instance, the goal bothIn (λu, v. v − 4 ∗ u + u², X, Y ) == true with N = 4 asks for points in the intersection of the square grid with the inner side of the parabola Y = 4 ∗ X − X². We can compute by narrowing 15 solutions (see Fig. 1): {X → 2, Y → 3}, {X → 1, Y → 2}, {X → 2, Y → 2}, {X → 3, Y → 2}, etc. In this process, cooperation between the R-constraint solver and the FD-solver is crucial for the efficiency of the computation. Initially, we reduce the problem of solving the goal to the problem of solving the hybrid constraint system { X #== RX, Y #== RY, RY − 4 ∗ RX + RX² ≤ 0, domain [X, Y ] 0 4, labeling [ ] [X, Y ] }. When the communication constraints are disabled, the last FD-constraints force the enumeration of all possible values for X and Y within their domains, eventually finding all the solutions after O(N²) steps. When the communication constraints are enabled, we can use both bridge constraints to project the R-constraint RY − 4 ∗ RX + RX² ≤ 0 into the equivalent integer FD-constraints (X = 0) ∧ (Y = 0), (X = 1) ∧ (0 ≤ Y ≤ 3), (X = 2) ∧ (0 ≤ Y ≤ 4), (X = 3) ∧ (0 ≤ Y ≤ 3), (X = 4) ∧ (Y = 0). Now, using this new information, the FD-solver can prune the domains of X and Y, and solving the labeling constraint leads to the solutions with far less effort. The expected speedup in execution time corresponds to the improvement from O(N²) to O(N) or O(1) steps, according to the lambda abstraction encoded in the goal and the possibilities offered by the constraint solver.
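The pruning effect just described can be made concrete with a small executable sketch (ours, with illustrative names; the real system exchanges constraints between solver stores rather than enumerating lists):

-- Region RY - 4*RX + RX^2 <= 0 intersected with the grid [0..n] x [0..n].

-- Naive labeling: enumerate all (n+1)^2 grid points, O(N^2) membership tests.
naive :: Int -> [(Int, Int)]
naive n = [ (x, y) | x <- [0..n], y <- [0..n]
                   , fromIntegral y - 4 * fromIntegral x + fromIntegral x ^ 2 <= (0 :: Double) ]

-- With projection: for each x, the real constraint projects to the integer
-- bounds 0 <= y <= floor(4*x - x^2), pruning y's domain before labeling.
projected :: Int -> [(Int, Int)]
projected n = [ (x, y) | x <- [0..n]
                       , let hi = min n (floor (4 * fromIntegral x - fromIntegral x ^ 2 :: Double))
                       , y <- [0..hi] ]

Both naive 4 and projected 4 return the 15 solutions mentioned above, but the projected version never tests a point outside the parabola.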
Fig. 1. Cooperation and interpolation of a parabolic region in a discrete square grid
We present now a second example, intended to illustrate new possibilities and mechanisms of our higher-order cooperative constraint model. In engineering, a common problem is the approximation of a complicated continuous function by a simple discrete function (e.g., the approximation of GPS satellite coordinates). Suppose we know a real function (given by a lambda abstraction λu. F (u)) but it is too complex to evaluate efficiently. Then we could pick a few approximated (integer) data points from the complicated function and try to interpolate those data points to construct a simpler function, for example, a polynomial λu. P (u). Of course, when using this polynomial function to calculate new (real) data points we usually do not obtain the same result as when using the original function, but depending on the problem domain and the interpolation method used, the gain in simplicity might offset the error.

disc :: (real → real) → (int → int)
disc (λu. F (u)) = λu. P (u) ⇐ domain [X] 0 N, labeling [ff ] [X], X #== RX, Y #== RY, | F (RX) − RY | < 1, collection [X, Y ] C, interpolation [lg] C P

Therefore, the aim of this example is to approximate a continuous function, represented by a lambda abstraction λu. F (u) over real numbers, by a discrete polynomial function λu. P (u) over integer numbers. In this case, we use the FD-constraints domain [X] 0 N, labeling [ff ] [X] to generate each value of the discrete interval [0..N], according to a first-fail (or ff) labeling option (see [1,5,6] for more details). The first bridge constraint X #== RX maps each integer value of X into an equivalent real value in RX. By applying the higher-order functional variable F to RX we obtain the R-constraint | F (RX) − RY | < 1. From this constraint, the R-solver computes (infinitely many) real values for RY. However, because of the second bridge constraint Y #== RY, each real value assigned to RY by the constraint solving process causes the variable Y to be bound only to an equivalent integer value. By means of the primitive constraint collection [X, Y ] C we can collect all the pairs (X, Y) generated by the labeling-solving process into a set C. Finally, interpolation [lg] C P finds a polynomial which goes exactly through the points collected in C by means of the Lagrange interpolation (lg) method. For instance, we can consider the goal disc (λu. 4 ∗ u − u²) == λu. P (u) involving the continuous function F as λu. 4 ∗ u − u² with N = 4. We obtain the set of integer pairs (xi, yi) in C = {(0, 0), (1, 3), (2, 4), (3, 3), (4, 0)} (see again Fig. 1). For this particular case, it is easy to check that this computed answer is simply {P → λu. 4 ∗ u − u²}. As we have commented before, the generic scheme CFLP(D) presented in [6] serves in this work as a logical and semantic framework for lazy Constraint Functional Logic Programming over a parametrically given constraint domain
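As a quick sanity check of the interpolation step, the Lagrange form over the five collected points indeed reproduces 4 ∗ u − u². The following small sketch (our own, independent of the system) computes it:

-- Lagrange interpolation through the collected points of C.
lagrange :: [(Double, Double)] -> Double -> Double
lagrange pts u = sum [ y * weight x | (x, y) <- pts ]
  where
    weight xi = product [ (u - xj) / (xi - xj) | (xj, _) <- pts, xj /= xi ]

pointsC :: [(Double, Double)]
pointsC = [(0,0), (1,3), (2,4), (3,3), (4,0)]

-- lagrange pointsC 2.5 == 3.75 == 4*2.5 - 2.5^2, as expected for P = \u. 4*u - u^2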
D. In order to model the coordination of algebraic constraint domains in our higher-order functional logic programming framework [13,14], we propose the construction of a higher-order coordination domain C, as a special kind of hybrid domain tailored to the cooperation of the algebraic domains R and FD with a new higher-order constraint domain λ which supplies lambda abstractions as data values and equalities over lambda terms as constraints. Following the methodology of [1], we obtain a suitable theoretical framework for the cooperation of algebraic constraint domains with their respective solvers in higher-order functional and logic programming using instances CFLP(C). Moreover, thanks to this fact, we can describe a prototype implementation following the techniques summarized in our previous work [1] in the TOY system [5], which is in turn implemented on top of SICStus Prolog. The former system is extended by including special stores for bridges and lambda abstractions, and by implementing mechanisms for computing bridges, projections, and interpolations according to the new CFLP(C) computation model. To finish this introduction and motivation of the work, we summarize the organization of the rest of the paper. In Section 3 we give a mathematical formalization of the higher-order constraint domain λ and a solver for higher-order strict equations tailored to the needs of the CFLP(D) generic scheme. After a brief presentation of the algebraic constraint domains R and FD, in Section 4 we discuss bridge constraints and the construction of the coordination domain C tailored to the cooperation of the algebraic constraint domains R and FD with λ. In Section 5 we present our proposal of a sound and complete computational model for cooperative higher-order declarative programming in CFLP(C), and we sketch the implementation on top of the TOY system. Finally, Section 6 summarizes some conclusions and presents a brief outline of related and planned future work.
3
A Higher-Order Constraint Domain λ
Taking the generic scheme CFLP(D) as a formal basis for foundational and practical issues concerning the cooperation of algebraic constraint domains, in this section we focus on the formalization of a higher-order constraint domain λ which supplies λ-abstractions and equality constraints over λ-terms in the instance CFLP(λ). First, we introduce the basic preliminary notions of our higher-order theoretical framework to formalize the constraint domain λ, along with a suitable λ-constraint solver based on an approach similar to Huet's procedure of higher-order pre-unification [8,11].
3.1 Preliminary Notions
We assume the reader is familiar with the notions and notations pertaining to the λ-calculus (see, e.g., [4,11] for more examples and motivations). The set of types for simply typed λ-terms is generated by a set B of base types (e.g., bool, real, int) and the function type constructor "→". Simply typed λ-terms are generated
in the usual way from a signature F of function symbols and a countably infinite set V of variables by successive operations of abstraction and application. We also consider the enhanced signature F⊥ = F ∪ Bot, where Bot = {⊥b | b ∈ B} is a set of distinguished B-typed constants. The constant ⊥b is intended to denote an undefined value of type b. We employ ⊥ as a generic notation for a constant from Bot. A sequence of syntactic objects o1, . . . , on, where n ≥ 0, is abbreviated by on. For instance, the simply typed λ-term λx1, . . . , λxk. (· · · (a t1) · · · tn) is abbreviated by λxk. a(tn). Substitutions γ ∈ Subst(F⊥, V) are finite type-preserving mappings from variables to λ-terms, denoted by {Xn → tn}, and extended homomorphically to λ-terms. By convention, we write {} for the identity substitution, tγ instead of γ(t), and γγ′ for the function composition γ′ ◦ γ. The long βη-normal form of a λ-term t, denoted by tηβ, is the η-expanded form of the β-normal form of t. It is well known that s =αβη t iff sηβ =α tηβ [4]. Since βη-normal forms are always defined, we will in general assume that λ-terms are in long βη-normal form and are identified modulo α-conversion. For brevity, we may write variables and constants from F in η-normal form, e.g., X instead of λxk. X(xk). We assume that the transformation into long βη-normal form is an implicit operation, e.g., when applying a substitution to a λ-term. With these conventions, every λ-term t has a unique long βη-normal form λxk. a(tn), where a ∈ F⊥ ∪ V and a() coincides with a. The symbol a is called the root of t and is denoted by hd(t). We distinguish between the set T(F⊥, V) of partial λ-terms and the set T(F, V) of total λ-terms. The set T(F⊥, V) is a poset with respect to the approximation ordering ⊑, defined as the least partial ordering such that λxk. ⊥ ⊑ λxk. t and t ⊑ t for every term t, and λxk. a(sn) ⊑ λxk. a(tn) whenever si ⊑ ti for all 1 ≤ i ≤ n. A pattern [8] is a λ-term t for which all subterms t|p = X(tn), with X ∈ FV(t) a free variable of t and p ∈ MPos(t) a maximal position in t, satisfy the condition that t1↓η, . . . , tn↓η is a sequence of distinct elements of the set BV(t, p) of bound variables abstracted on the path to position p in t. Moreover, if all such subterms of t satisfy the additional condition BV(t, p) \ {t1↓η, . . . , tn↓η} = ∅, then the pattern t is fully extended. It is well known that unification of patterns is decidable and unitary [8]. Therefore, for every t ∈ T(F⊥, V) and pattern π, there exists at most one matcher between t and π, which we denote by matcher(t, π).
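The pattern condition is easy to check syntactically. The following Haskell sketch decides it on a simplified first-order representation of long βη-normal terms; the datatype and helper names are our own assumptions, not notation from [8]:

import Data.List (nub)

data Head = Bound String | Fun String | Free String deriving (Eq, Show)
-- Abs xs h ts represents the long normal form \x1..xk. a(t1,...,tn).
data Term = Abs [String] Head [Term] deriving Show

-- isPattern bs t: every free-variable subterm X(t1,...,tn) must apply X
-- to a sequence of pairwise distinct variables bound above it.
isPattern :: [String] -> Term -> Bool
isPattern bs (Abs xs h ts) =
  let ctx = bs ++ xs
  in case h of
       Free _ -> case mapM (boundVar ctx) ts of
                   Just ys -> ys == nub ys         -- pairwise distinct
                   Nothing -> False
       _      -> all (isPattern ctx) ts
  where
    boundVar ctx' (Abs [] (Bound y) []) | y `elem` ctx' = Just y
    boundVar _ _ = Nothing

-- Example: \x. F(x) is a pattern, while \x. F(x,x) is not:
-- isPattern [] (Abs ["x"] (Free "F") [Abs [] (Bound "x") []]) == True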
3.2 The Higher-Order Constraint Domain λ
Intuitively, a constraint domain D provides data values and constraints oriented to some particular application domain. In our higher-order setting, we need to formalize a special higher-order constraint domain λ to support computations with symbolic equality over λ-terms of any type. Formally, it is defined as follows: Definition 1 (λ-constraint domain). The higher-order domain λ is a structure ⟨Dλ, {==λ}⟩ such that the carrier set Dλ coincides with the set of ground patterns (i.e., patterns without free variables) over any type, and the function
symbol == is interpreted as strict equality ==λ over Dλ, so that for all t1, t2, t ∈ Dλ one has ==λ ⊆ Dλ² × Dλ, where t1 ==λ t2 → t (i.e., (t1, t2, t) ∈ ==λ) iff one of the following three cases holds: (1) t1 and t2 are one and the same total λ-term in Dλ, and true ⊑ t. (2) t1 and t2 have no common upper bound in Dλ with respect to the approximation ordering ⊑, and false ⊑ t. (3) t = ⊥. From this definition, it is easy to check that the equality function ==λ satisfies the conditions required of a constraint domain D for the CFLP(D) scheme [6]: (1) Polarity: t1 ==λ t2 → t behaves monotonically with respect to the arguments t1 and t2, and antimonotonically with respect to the result t. Formally, for all t1, t1′, t2, t2′, t, t′ ∈ Dλ such that t1 ==λ t2 → t, t1 ⊑ t1′, t2 ⊑ t2′, and t′ ⊑ t, t1′ ==λ t2′ → t′ also holds. (2) Radicality: As soon as the arguments given to ==λ have enough information to return a result, the same arguments suffice already for returning a total result. Formally, for all t1, t2, t ∈ Dλ, if t1 ==λ t2 → t then t = ⊥ or else there is some total t′ ∈ Dλ such that t1 ==λ t2 → t′ and t ⊑ t′. An equality constraint (or simply, λ-constraint) is a multiset {{s, t}}, written s == t, where s, t ∈ T(F⊥, V) are λ-terms of the same type. The set of solutions of an equality constraint s == t is defined as follows: Soln(s == t) = {γ ∈ Subst(F⊥, V) | tγ ==λ sγ → true}. Any set E of strict equations is interpreted as a conjunction, and therefore Soln(E) = ⋂(s == t)∈E Soln(s == t).
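As a small executable reading of Definition 1, the following sketch computes the maximal defined result of ==λ on ground first-order terms with an explicit bottom constant. The datatype is our own simplification; the domain λ actually ranges over ground patterns of any type.

-- Ground partial terms: Bot approximates everything.
data PTerm = Bot | Con String [PTerm] deriving (Eq, Show)

total :: PTerm -> Bool
total Bot        = False
total (Con _ ts) = all total ts

-- clash s t: s and t have no common upper bound w.r.t. the approximation
-- ordering, i.e., somewhere both are defined but with different symbols.
clash :: PTerm -> PTerm -> Bool
clash (Con c ss) (Con d ts)
  | c /= d || length ss /= length ts = True
  | otherwise                        = or (zipWith clash ss ts)
clash _ _ = False

-- Cases (1)-(3) of Definition 1; Nothing plays the role of the result Bot.
eqStrict :: PTerm -> PTerm -> Maybe Bool
eqStrict t1 t2
  | t1 == t2 && total t1 = Just True   -- (1) identical total terms
  | clash t1 t2          = Just False  -- (2) no common upper bound
  | otherwise            = Nothing     -- (3) not enough information yet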
3.3 The λ-Constraint Solver
Solving equality constraints in first-order term algebras (also known as unification) is the most famous symbolic constraint solving problem. In the higher-order case, higher-order unification is a powerful method for solving equality λ-constraints between λ-terms and is currently used in theorem provers [10]. Other applications of higher-order unification include program synthesis and machine learning [11]. One of the major obstacles for reasoning in the higher-order case, however, is that unification is undecidable. In this subsection we therefore examine a decidable higher-order unification case, that of patterns, by means of the development of a λ-constraint solver for the higher-order constraint domain λ, now supporting an improved treatment of the strict equality == as a built-in primitive function symbol, rather than a defined function [3].

Definition 2 (states). The constraint solver Solverλ for the higher-order domain λ acts on states of the form P ≡ E | K, where E is a set of strict equality constraints s == t between λ-terms s, t, and K is a set of patterns intended to represent and store computed values in the sense of [13,14] during the constraint solving process. The meaning of a state P ≡ E | K is as follows: [[E | K]] = {γ ∈ Soln(E) | Kγ is a set of values}. We note that [[E | K]] = ∅ whenever K is not a set of values. In the sequel, we denote this state by fail and call it the failure state.
Solving a set of strict equality λ-constraints amounts to computing λ-derivations, i.e., sequences of transformation steps.

Definition 3 (λ-derivations). A λ-derivation of a set E of strict equality λ-constraints is a maximal finite sequence of transformation steps: P0 ≡ E | ∅ ≡ E0 | K0 ⇒σ1 P1 ≡ E1 | K1 ⇒σ2 · · · ⇒σm Pm ≡ Em | Km, between states P0, P1, . . ., Pm, such that Pm ≠ fail is a final state, i.e., a non-failure state which cannot be transformed any more.

Definition 4 (λ-constraint solver)
(1) Each transformation step in a λ-derivation Π corresponds to an instance of some transformation rule of the λ-constraint solver Solverλ described below. We abbreviate Π by P0 ⇒∗σ Pm, where σ = σ1 · · · σm.
(2) Given such a set E of strict equality λ-constraints, the set of computed answers produced by the λ-constraint solver Solverλ is A(E) = { σγ|FV(E) | E | ∅ ⇒∗σ P is a λ-derivation and γ ∈ [[P]] }, where FV(E) is the set of free variables of E.

In the sequel, we describe the transformation rules of the λ-constraint solver and analyze its main properties.

(an) annotation: {{s == t, E}} | K ⇒{} {{s ==H t, E}} | K ∪ {H}, where H is a fresh variable of a suitable type.
(sg) strict guess: {{λxk.a(sn) ==H t, E}} | K ⇒σ {{λxk.a(sn) ==Hσ t, E}} | Kσ, where a ∈ F ∪ {xk} and σ = {H → λxk.a(Hn(xk))}.
(d) decomposition: {{λxk.a(sn) ==u λxk.a(tn), E}} | K ⇒σ {{λxk.sn ==Hn λxk.tn, E}} | Kσ, where a ∈ F ∪ {xk}, and either u ≡ H and σ = {H → λxk.a(Hn(xk))}, or u ≡ λxk.a(Hn(xk)) and σ = {}.
(i) imitation: {{λxk.X(sp) ==u λxk.f(tn), E}} | K ⇒σ {{λxk.Xn(sp) ==Hn λxk.tn, E}}σ | (K ∪ {X})σ, where X ∈ V, and either u ≡ H and σ = {X → λyp.f(Xn(yp)), H → λxk.f(Hn(xk))}, or u ≡ λxk.f(Hn(xk)) and σ = {X → λyp.f(Xn(yp))}.
(p) projection: {{λxk.X(sp) ==u t, E}} | K ⇒σ {{λxk.X(sp) ==u t, E}}σ | (K ∪ {X})σ, where X ∈ V, t is not flex, and σ = {X → λyp.yi(Xn(yp))}.
(fs) flex same: {{λxk.X(yp) ==H λxk.X(yp′), E}} | K ⇒σ {{E}}σ | (K ∪ {X})σ, where X ∈ V, λxk.X(yp) and λxk.X(yp′) are patterns, and σ = {X → λyp.Z(zq), H → λxk.Z(zq)} with {zq} = {yi | yi = yi′, 1 ≤ i ≤ p}.
(fd) flex different: {{λxk.X(yp) ==H λxk.Y(yq), E}} | K ⇒σ {{E}}σ | (K ∪ {X, Y})σ, where X, Y ∈ V, λxk.X(yp) and λxk.Y(yq) are patterns, X ≠ Y, and σ = {X → λyp.Z(zr), Y → λyq.Z(zr), H → λxk.Z(zr)} with {zr} = {yp} ∩ {yq}.
(cf) clash failure: {{λxk.a(sn) ==u λxk.a′(tm), E}} | K ⇒{} fail, if a, a′ ∈ Fc ∪ {xk} (where the notation Fc will be explained in Section 5), and either (i) a ≠ a′ or (ii) hd(u) ∉ V ∪ {a, a′}.
(oc) occur check: {{λxk.s ==u λxk.X(yn), E}} | K ⇒{} fail, if X ∈ V, λxk.X(yn) is a flex pattern, hd(λxk.s) ≠ X, and (λxk.s)|p = X(zn), where zn is a sequence of distinct bound variables and p is a maximal safe position of λxk.s (i.e., hd((λxk.s)|q) ∈ BV(λxk.s, q) ∪ Fc for all q ≤ p).
In order to illustrate the overall behavior of our constraint solver Solverλ, we consider the following λ-derivation involving the function symbols given in the signature of the diff example presented in Section 1: {{λx. sin(F(x)) == λx. sin(cos(x))}} | ∅ ⇒∗{F → λx. cos(x)} ∅ | {λx. sin(cos(x)), λx. cos(x)}, obtained by applying the rules (an), (d), and (i). Therefore, we have computed the substitution {F → λx. cos(x)} as the only answer in A(λx. sin(F(x)) == λx. sin(cos(x))). The main properties of the λ-constraint solver, soundness and completeness, relate the solutions of a set of strict equality λ-constraints to the answers computed by our system of transformation rules for higher-order unification.

Theorem 1 (properties of the λ-constraint solver)
(1) Soundness: Let E | ∅ ⇒∗σ P be a λ-derivation. Then, σγ ∈ Soln(E) whenever γ ∈ [[P]].
(2) Completeness: Let E be a set of λ-constraints. Then, A(E) = {γ|FV(E) | γ ∈ Soln(E)}.
4
Higher-Order Coordination of Algebraic Domains
The higher-order domain λ supports computations with symbolic equality λ-constraints over λ-abstractions involving values of arbitrary user-defined datatypes. However, from a programmer's viewpoint we also need to work with extended algebraic constraint domains supporting computations with arithmetic and equality λ-constraints over λ-terms involving numerical values. For this reason, in the context of our higher-order CFLP framework, we need to introduce extensions of both the classical domain R, which supplies arithmetic constraints over real numbers, and the finite domain FD, which supplies arithmetic and finite domain constraints over integers, to deal now with λ-abstractions defined over real numbers and integers. A convenient formal definition of the algebraic constraint domain R is as follows: R is a structure ⟨DR, {pR}⟩, where the set of base values includes just one base type real whose values represent real numbers, the carrier set DR coincides with the set of ground patterns over real numbers, and the usual interpretations pR include as primitive function symbols the strict equality operator ==, defined as for the λ-domain, the arithmetical operators +, −, ∗, / :: real → real → real, and the inequality operator ≤ :: real → real → bool. Concerning the solver SolverR, we expect that it is able to deal with R-derivations of R-specific constraint sets consisting of primitive constraints of the following two kinds: proper R-constraints involving an arithmetic operator, and specific higher-order R-constraints having the form t1 == t2, where t1 and t2 are λ-terms over real constant values or variables whose type is known to be real prior to the solver invocation. We assume that SolverR is implemented as a black-box solver on top of SICStus Prolog, and solves R-specific higher-order constraints in a way compatible with the behavior of the λ-solver described in the previous section. Analogously, FD is a structure ⟨DFD, {pFD}⟩, where the set of base values includes just one base type int whose values represent integer numbers.
[Fig. 2 depicts the mediatorial domain M communicating with the component domains λ (equalities between λ-terms), FD (FD-constraints), and R (R-constraints) via bridges + projections, collection + interpolation, and functional variable + application.]
Fig. 2. The higher-order coordination domain C = M ⊕ λ ⊕ FD ⊕ R
The carrier set DFD coincides with the set of ground patterns over integer numbers, and the usual interpretations pFD include as primitive function symbols the strict equality operator ==, the arithmetical operators +, −, ∗, / :: int → int → int, the inequality operator ≤ :: int → int → bool, and the following primitive function symbols related to computations with finite domains and constraint solving efficiency [1,6]:
• domain :: [int] → int → int → bool, to fix that a non-empty list of integer variables belongs to an interval of values.
• labeling :: [labelType] → [int] → bool, to select an integer variable of the list with a non-empty, non-singleton domain, selecting a value of this domain and assigning the value to the variable, where labelType is an enumerated datatype used to represent labeling strategies.
Concerning the solver SolverFD, we also assume that it is implemented as a black-box solver on SICStus Prolog, and we expect that it is able to deal with FD-derivations of proper FD-constraints involving arithmetic FD-operators and higher-order algebraic constraints t1 == t2, where t1 and t2 are λ-terms over integer constant values or variables whose type is known to be int prior to the solver invocation. A coordination domain C is a kind of "hybrid" constraint domain built from various component domains (e.g., H, λ, R, FD, . . .) intended to cooperate. The construction of coordination domains involves a so-called mediatorial domain M, whose purpose is to supply mechanisms for communication among the component domains via bridges, projections, functional variable applications, interpolations, and some more ad hoc operations (see Fig. 2). In this work, the component domains will be chosen as the pure domains λ, R, and FD, equipped with constraint solvers, in such a way that the communication provided by the
mediatorial domain will also benefit the solvers. In the remainder of this section we briefly explain the construction of this higher-order coordination domain C, represented as the sum C = M ⊕ λ ⊕ FD ⊕ R. Mathematically, the construction of the coordination domain C relies on a combined algebraic constraint domain FD ⊕ R, which represents the amalgamated sum of the two joinable algebraic domains FD and R. In this case, the joinability condition asserts that the only primitive function symbol allowed to belong to both FD and R is the strict equality ==, whose interpretation behaves as defined for each algebraic constraint domain and for λ. As a consequence, the amalgamated sum λ ⊕ FD ⊕ R is always possible, and gives rise to a compound higher-order algebraic domain that can profit from the higher-order λ-constraint solver. However, in order to construct a more interesting sum for higher-order algebraic cooperation tailored to the communication among the pure domains λ, R, and FD, mediatorial domains are needed. The higher-order mediatorial domain M serves as a basis for useful cooperation facilities among λ, FD, and R, including the projection of R-constraints to the FD-solver (and vice versa) using bridges, the specialization of λ-constraints to become R- or FD-constraints, the definition of algebraic constraints in R and FD from the application of higher-order functional variables in the domain λ, the gathering of numeric data values to construct a λ-abstraction in λ which closely fits the data points by means of interpolation techniques, and some other special mechanisms designed for processing the mediatorial constraints occurring in functional and logic computations.
• More precisely, bridge constraints X #== RX, with #== :: int → real → bool, can be used either for binding or projection purposes. Binding in SolverM simply instantiates a variable occurring at one end of a bridge whenever the other end of the bridge becomes a numeric value. Projection is a more complex operation which infers constraints to be placed in R's store from the constraints available in FD's store (and vice versa) and the relevant bridges available in M. This enables each solver to take advantage of the computations performed by other solvers. We postulate a projection function projFD→R such that for any set CFD of FD-constraints and any finite set M of bridge constraints, projFD→R(CFD, M) returns a finite disjunction CR of equivalent R-constraints (similarly, projR→FD). In order to maximize the opportunities for projection, we postulate a function bridgesFD→R such that bridgesFD→R(CFD, M) returns a finite set of new bridge constraints M′ from the new variables in CFD (similarly, bridgesR→FD).
• Interpolation is the process of defining a function that takes on specified values at specified points. We use this technique in our higher-order setting to support the cooperation and communication between an algebraic constraint domain (R or FD) and λ. We try to construct a function, represented by a λ-abstraction, which must go through the data points. In order to apply this technique, our cooperative computation model keeps a store C by means of the execution of an algebraic constraint collect [. . .] C (similar to the setof predicate in Prolog) from a finite list of real or integer variables.
[Fig. 3 shows, for the two motivating examples of Section 2, the constraints exchanged among M, λ, FD, and R: the bridges X #== RX and Y #== RY; the FD-constraints domain [X, Y ] 0 N, labeling [ ] [X, Y ] and domain [X] 0 N, labeling [ff ] [X]; the R-constraints F (RX, RY) ≤ 0 and |F (RX) − RY| < 1; and the collect [X, Y ] C and interpolation [lg] C P constraints producing λu. P (u).]
Fig. 3. Examples of higher-order programming with algebraic constraint cooperation
Then, we assume an interpolation function interpolation, interpreted with respect to a mapping interpR→λ (similarly, interpFD→λ), such that the higher-order constraint interpolation [. . .] C F returns in the functional variable F the λ-abstraction λu. F(u) according to a predefined list of interpolation methods (implemented in C++ and called from Prolog), so that the following interpolation condition holds: F(x) = y for all (x, y) ∈ C. For communicating information between the higher-order domain λ and an algebraic constraint domain D (R or FD), we can assume a mapping applyλ→D, defined by means of the application of the functional variable F associated to the λ-abstraction λu. F(u) to a type-appropriate finite number of arguments x in order to build algebraic constraints from F(x) (Fig. 3 illustrates this cooperation of the algebraic domains FD and R with λ for the motivating examples presented in Section 2).
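As a minimal illustration of how bridges let interval information move between the FD- and R-stores, consider the following sketch (our own toy representation: each store is reduced to interval bounds per variable):

-- A bridge X #== RX pairs an integer variable with a real variable.
type Bridge = (String, String)                     -- (intVar, realVar)

-- proj_FD->R: an integer domain X in [lo..hi] projects to lo <= RX <= hi.
projFDtoR :: [Bridge] -> [(String, (Int, Int))] -> [(String, (Double, Double))]
projFDtoR bridges fdDoms =
  [ (rx, (fromIntegral lo, fromIntegral hi))
  | (x, rx) <- bridges, Just (lo, hi) <- [lookup x fdDoms] ]

-- proj_R->FD: real bounds lo <= RX <= hi project to the integer bounds
-- ceiling lo <= X <= floor hi, which the FD-solver can use for pruning.
projRtoFD :: [Bridge] -> [(String, (Double, Double))] -> [(String, (Int, Int))]
projRtoFD bridges rDoms =
  [ (x, (ceiling lo, floor hi))
  | (x, rx) <- bridges, Just (lo, hi) <- [lookup rx rDoms] ]

For instance, projRtoFD [("X","RX")] [("RX",(0.0,3.2))] yields [("X",(0,3))], mirroring the grid pruning of Section 2.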
5
Higher-Order Cooperative Programming in CFLP(C)
We are now ready to present our computation framework for higher-order functional and logic programming with cooperation of algebraic constraint domains within the CFLP(C) instance of the CFLP scheme. Building upon the notion of the higher-order coordination domain C described in the previous section, we have designed a formal operational semantics by means of a higher-order cooperative constraint lazy narrowing calculus provided by CFLP(C) which is sound and complete, extending the formal properties presented in Section 3 for the λ-constraint solver. The calculus embodies computation rules for creating bridges, invoking constraint solvers, and performing constraint projections, as well as other more ad hoc operations for communication among the higher-order domain λ and the algebraic constraint domains FD and R. Moreover, the calculus uses higher-order demand-driven narrowing with definitional trees [13] for processing calls to program defined functions, ensuring that function calls are
evaluated only as far as demanded by the resolution of the C-constraints involved in the current computation. After introducing CFLP(C)-programs and goals, we present the goal-solving rules of the calculus and results concerning the formal properties of our higher-order cooperative computation model. Finally, we sketch an implementation on top of the constraint functional logic system TOY [5].

Definition 5 (CFLP(C)-programs). A Constrained Pattern Rewrite System over the higher-order coordination domain C = M ⊕ λ ⊕ FD ⊕ R (CPRS(C) for short) is a finite set of C-constrained rewrite rules of the form f(ln) = r ⇐ C, where (a) f(ln) and r are total λ-terms of the same base type. (b) f(ln) is a fully extended linear pattern. (c) C is a (possibly empty) finite sequence of C-constraints. More precisely, each C-constraint is exactly of one of the following kinds:
• λ-constraint (Cλ): equations s == t, with s, t ∈ T(F, V).
• M-constraint (CM): bridge constraints X #== Y, with X :: int and Y :: real.
• R-constraint (CR): arithmetic constraints over real numbers.
• FD-constraint (CFD): arithmetic and finite domain constraints over integers.
A goal C for a given CPRS(C) is a set of C-constraints. Each CPRS(C) R induces a partition of the underlying signature F into Fd (defined function symbols) and Fc (data constructors): Fd = {f ∈ F | ∃(f(ln) = r ⇐ C) ∈ R} and Fc = F \ Fd. We say that R is a constructor-based CPRS(C) if each conditional pattern rewrite rule f(ln) = r ⇐ C satisfies the condition that l1, . . . , ln ∈ T(Fc, V). Our higher-order cooperative computation model works by transforming initial goals C0 into final goals C, which serve as computed answers together with a set of values K. We represent the computation as a CFLP(C)-derivation C0 | ∅ ⇒∗σ C | K, extending the notation previously introduced for the λ-constraint solver in Section 3. The core of the computational model in CFLP(C) consists of the Higher-Order Lazy Narrowing calculus with Definitional Trees HOLNDT presented in [13] for higher-order (unconstrained) functional logic programming. We can use this calculus in a modular way, ignoring at this moment algebraic domain cooperation and solver invocation, in order to deal with calls to defined functions and to apply a program rule. More precisely, if the goal includes a constraint Cλ (analogously, CR or CFD) of the form λxk.f(sn) == t with f ∈ Fd, we can apply the rules of HOLNDT to perform a demand-driven evaluation of the function call, represented by ⟨λxk.f(sn), Tf⟩R (see [13] for more details). Then, higher-order narrowing is applied in an optimized way by using an associated higher-order definitional tree Tf to ensure an optimal choice of needed narrowing steps by means of the selection of a suitable (possibly non-deterministic) conditional pattern rewrite rule {π = ri ⇐ Ci}1≤i≤m of the CFLP(C)-program R. Therefore, we transform Cλ into a flattened form R == t. The following three rules formalize these transformations.
(on) rigid narrowing: {{λxk.f(sn) == t, C}} | K ⇒{} {{⟨λxk.f(sn), Tf⟩R, R == t, C}} | K, where f ∈ Fd.
(ov) flex narrowing: {{λxk.X(sm) == t, C}} | K ⇒σ {{⟨λxk.X(sm), Tf⟩R, R == t, C}}σ | (K ∪ {X})σ, where σ = {X → λym.f(Xn(ym))} with f ∈ Fd.
(ev) evaluation: {{⟨λxk.π′, rule(π, {ri ⇐ Ci}1≤i≤m)⟩R, C}} | K ⇒{} {{λykq.sq == Rq, Ci, λxk.ri == R, C}} | K ∪ {Rq}, if 1 ≤ i ≤ m, matcher(λxk.π′, λxk.π) = {Rq → λykq.sq}, and {Rq} = FV(λxk.π).
The following two rules describe the possible transformation in a goal of a finite subset CD of D-constraints (where D is the pure domain λ, M, FD or R) by a D-solver invocation, including the detection of failure by the corresponding solver.
(cs) constraint solving: {{CD, C}} | K ⇒σ {{C′D, C}}σ | K′ if the D-constraint solver SolverD performs a successful D-derivation CD | K ⇒∗σ C′D | K′.
(sf) solving failure: {{CD, C}} | K ⇒σ fail if the D-constraint solver SolverD performs a failure D-derivation CD | K ⇒∗σ fail.
set bridges {{CD , CM , C}} | K ⇒{} {{CD , CM , CM , C}} | K where D is the algebraic constraint domain R (resp. F D) and D the domain F D (resp. R),
(pp)
and bridges D→D (CD , CM ) = CM . propagate projections {{CD , CM , C}} | K ⇒{} {{CD , CD , CM , C}} | K where D is the algebraic constraint domain R (resp. F D) and D the domain F D (resp. R),
and proj D→D (CD , CM ) = CD .
We conclude this section with theoretical results now ensuring soundness and completeness for CFLP (C)-derivations. Both properties are presented with respect to the declarative semantics of the instance CFLP (C), provided by the generic CFLP (D) scheme [6] and the semantic framework for higher-order functional logic programs on λ-abstractions [13,14]. For instance, the set of solutions Soln(C) of a goal C and the meaning [[P ]] of a state P now refer to the existence of logical proofs in the corresponding D-instance of a generic Constrained ReWriting Logic CRWL(D), whose inference rules can be found in [6,14] for each D-constraint CD in G, where D is R, F D, M, or . Theorem 2 (properties of the goal-solving calculus in CFLP (C)) (1) Soundness: Let C | ∅ ⇒∗σ P be a CFLP (C)-derivation. Then, σγ ∈ Soln(C) whenever γ ∈ [[P ]]. (2) Completeness: Let C be an initial goal given by a finite set of C-constraints, and A(C) = {σγF V(C) | C | ∅ ⇒∗σ P is a finite CFLP (C)-derivation with γ ∈ [[P ]]}. Then, A(C) = {γF V(C) | γ ∈ Soln(C)}.
Proof (Sketch)
(1) Let γ ∈ [[P]]. We prove by induction on the length of the CFLP(C)-derivation Π : C | ∅ ⇒∗σ P that σγ ∈ Soln(C). If |Π| = 0 then σ = {} and σγ = γ ∈ [[P]] = [[C | ∅]] = Soln(C). If |Π| > 0 then we can write Π : C | ∅ ⇒∗σ1 P1 ⇒σ2 P. By the corresponding local soundness property of a single CFLP(C)-transformation step (see [1,13,14] and Theorem 1), we know that σ2γ ∈ [[P1]]. We can now apply the induction hypothesis to the shorter CFLP(C)-derivation C | ∅ ⇒∗σ1 P1 and learn that σγ = σ1σ2γ ∈ [[C | ∅]] = Soln(C).
(2) Since A(C) ⊆ Soln(C) by soundness, we must only show that Soln(C) ⊆ A(C). Let γ ∈ Soln(C), P = C | ∅, and W0 = FV(C) ∪ Dom(γ). First, we prove that, for every given admissible state P, any finite set of variables W, and γ ∈ [[P]], there exists a CFLP(C)-derivation Φ : P ⇒∗σ P′ such that γ = σγ′ [W] for some γ′ ∈ [[P′]]. The proof is by induction with respect to the well-founded ordering used in the progress property of [1,13,14] and Theorem 1. If P is final then we can choose Φ : P ⇒0{} P and γ′ = γ. Otherwise, we can apply the corresponding progress property of local completeness (see [1,13,14] and Theorem 1) to determine P1 = C1 | K1, γ1 ∈ [[P1]], and a CFLP(C)-transformation step ϕ : P ⇒σ1 P1 with (P) ≻ (P1) and γ = σ1γ1 [W]. Let W′ = W ∪ FV({Xσ1 | X ∈ W}). By the induction hypothesis for (P1), there exists a CFLP(C)-derivation Φ′ : P1 ⇒∗σ′ P′ such that γ1 = σ′γ′ [W′] for some γ′ ∈ [[P′]]. Let σ = σ1σ′ and let Φ be the CFLP(C)-derivation obtained by prepending ϕ to Φ′. Then, Φ : P ⇒∗σ P′ and σγ′ is a computed answer. Also, γ = σ1γ1 = σ1σ′γ′ = σγ′ [W], and this concludes our preliminary proof. In particular, if γ ∈ Soln(C) then γ ∈ [[P]], where P = C | ∅. According to our preliminary result, there exists a CFLP(C)-derivation Φ : P ⇒∗σ P′ such that γ = σγ′ [FV(C)] for some γ′ ∈ [[P′]]. Thus, σγ′|FV(C) ∈ A(C), σγ′|FV(C) = γ|FV(C), and γ ∈ A(C).
Thanks to the soundness and completeness results modularly presented in Section 3 for the new constraint domain λ and to the higher-order narrowing calculus HOLNDT in [13] for declarative programming in CFLP(λ), Theorem 2 can be proved in the cooperative setting of CFLP(C). We can apply the soundness and completeness results and techniques presented in [1] for the cooperation of the constraint domains H, R, and FD, now to the cooperation of the higher-order constraint domain λ with the algebraic constraint domains R and FD using the mediatorial domain M. More technical details can be found in [1]. Finally, we sketch a prototype implementation of the CFLP(C) computational model on top of the TOY system [5]. Fig. 4 shows the architectural components of the higher-order cooperation schema in this system. The higher-order constraint domain λ and the algebraic domains R and FD, together with a mediatorial domain M yielding the coordination domain C = M ⊕ λ ⊕ FD ⊕ R, are supported by this implementation. The main novelty here is that compilation proceeds by performing a translation from higher-order programs on the domain λ to typed higher-order applicative programs in the Herbrand domain H.
[Fig. 4 depicts the TOY system layered over SICStus Prolog: the solvers and stores for λ, H, and M (λ-solver, H-solver, M-solver, H-store, M-store) are built into TOY, while the FD-store and R-store are handled by the SICStus Prolog CLP(FD) and CLP(R) libraries.]
Fig. 4. Architectural components of the higher-order cooperation in TOY
Following [14], the idea consists in introducing an explicit application operation apply, replacing λ-abstractions (and similar constructs in our higher-order setting, such as partial applications) by means of new data constructors, and providing rewrite rules to define the proper behavior of the application operation when meeting terms where these new data constructors appear. Applicative expressions without λ-abstractions are expressive enough for many purposes. Unfortunately, this translation approach does not solve all problems. Application plays a very peculiar role in higher-order languages, and treating it just as an ordinary operation leads to technical difficulties in relation to types. More precisely, the rewrite rules for apply obtained by means of this translation are not well-typed in Milner's system. To avoid these problems, one has to give up the static type discipline and perform some amount of dynamic type checking at run time. For programs and goals without higher-order variables, dynamic type checking can be avoided. The solvers and constraint stores for the algebraic domains FD and R are provided by the SICStus Prolog constraint libraries. Proper FD- and R-constraints, as well as λ- and H-constraints specific to FD and R, are posted to the respective stores and handled by the respective SICStus Prolog solvers. On the other hand, the stores and solvers for the domains λ, H, and M are built into the code of the TOY implementation, rather than being provided by the underlying SICStus Prolog system. Moreover, the implementation of the fundamental mechanisms for algebraic domain cooperation (bridges, projections, and the collection and interpolation of data values among the higher-order domain λ and the algebraic domains R and FD) is tackled by glue code integrating Prolog services with C++ and using a so-called mixed store which keeps a representation of the mediatorial constraint store as one single Prolog structure.
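The translation of λ-abstractions into applicative form described above can be pictured with a small Reynolds-style defunctionalization sketch (our own illustrative constructors; the actual TOY translation of [14] handles partial applications and arbitrary user-defined functions):

-- Each lambda-abstraction of the source program becomes a data constructor.
data FunRep = SinF                 -- represents \u. sin u
            | CosF                 -- represents \u. cos u
            | CompF FunRep FunRep  -- represents \u. f (g u)

-- The explicit application operation gives the constructors their behavior.
apply :: FunRep -> Double -> Double
apply SinF        x = sin x
apply CosF        x = cos x
apply (CompF f g) x = apply f (apply g x)

-- A higher-order call such as map (\u. F u) xs becomes a first-order call
-- that threads the function representation through apply.
mapA :: FunRep -> [Double] -> [Double]
mapA f xs = [ apply f x | x <- xs ]

The typing difficulty mentioned above shows up as soon as functions of different types must share one representation: a single apply covering, say, real → real and int → bool values cannot be given a Milner type, which is why some dynamic type checking is needed.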
More precisely, we briefly explain the main ideas used in this work for the implementation of the fundamental mechanisms for higher-order communication provided by the mediatorial domain M among the algebraic domains FD and R and the new higher-order constraint domain λ: the equivalence primitive for bridges, the projections of algebraic constraints between R and FD, and the polynomial interpolations and functional variable applications between the domains R, FD, and λ.
(1) The equivalence primitive #== :: int → real → bool used in this paper for building mediatorial constraints (i.e., bridges) is implemented in TOY as a Prolog predicate #== with five arguments as follows (technical details and more explanations can be found in [1,5]):

#==(L, R, Out, Cin, Cout) :-
    hnf(L, HL, Cin, Cout1),
    hnf(R, HR, Cout1, Cout2),
    tolerance(Epsilon),
    ( ( Out = true,
        Cout3 = ['#=='(HL,HR)|Cout2],
        freeze(HL, {HL - Epsilon =< HR, HR =< HL + Epsilon}),
        freeze(HR, (HL is integer(round(HR)))) )
    ; ( Out = false,
        Cout3 = ['#/=='(HL,HR)|Cout2],
        freeze(HL, (F is float(HL), {HR =\= F})),
        freeze(HR, ( 0.0 is float_fractional_part(HR) ->
                       (I is integer(HR), HL #\= I)
                   ;   true )) ) ),
    cleanBridgeStore(Cout3, Cout).
(2) Following a technique similar to that applied for #==, the Prolog implementation of projections has a different piece of code (Prolog clause) for each algebraic function symbol which can be used to build projectable constraints. The code below shows the basic behavior of the implementation for the case of FD-constraints built with the inequality function symbol #<:

#<(L, R, Out, Cin, Cout) :-
    hnf(L, HL, Cin, Cout1),
    hnf(R, HR, Cout1, Cout2),
    ( (Out = true,  HL #<  HR)
    ; (Out = false, HL #>= HR) ),
    ( proj_active ->
        ( searchVarsR(HL, Cout2, Cout3, RHL),
          searchVarsR(HR, Cout3, Cout, RHR),
          ( (Out == true,  { RHL <  RHR })
          ; (Out == false, { RHL >= RHR }) ) )
    ; Cout = Cout2 ).
The code excerpt below shows the basic behavior of the implementation for the case of the R-constraints built with the inequality function symbol >:

>(L, R, Out, Cin, Cout) :-
    hnf(L, HL, Cin, Cout1),
    hnf(R, HR, Cout1, Cout),
    ( Out = true,  {HL > HR}
    ; Out = false, {HL =< HR} ),
    ( proj_active ->
        ( searchVarsFD(HL, Cout, BL, FDHL),
          searchVarsFD(HR, Cout, BR, FDHR),
          ( (BL == true,  BR == true,  Out == true,  FDHL #>  FDHR)
          ; (BL == true,  BR == true,  Out == false, FDHL #=< FDHR)
          ; (BL == true,  BR == false, Out == true,  FDHL #>  FDHR)
          ; (BL == true,  BR == false, Out == false, FDHL #=< FDHR)
          ; (BL == false, BR == true,  Out == true,  FDHL #>  FDHR)
          ; (BL == false, BR == true,  Out == false, FDHL #=< FDHR)
          ; true ) )
    ; true ).
(3) SICStus Prolog provides a bi-directional, procedural interface for program parts written in C/C++ and Prolog. Functions written in the C/C++ language may be called from Prolog using an interface in which automatic type conversions between Prolog terms and common C/C++ types are declared as Prolog facts. Following this approach, we have implemented in C++ an interpolation function useful to compute the values of the Lagrange form of the interpolation polynomial:

float interpolation(float pos[], float val[], int degree, float desiredPos) {
    float retVal = 0;
    for (int i = 0; i < degree; ++i) {
        float weight = 1;
        for (int j = 0; j < degree; ++j) {
            // The i-th term has to be skipped
            if (j != i) {
                weight *= (desiredPos - pos[j]) / (pos[i] - pos[j]);
            }
        }
        retVal += weight * val[i];
    }
    return retVal;
}
6
Conclusions
In this work we have presented a suitable use of cooperative algebraic constraint domains and solvers in a higher-order functional and logic programming framework on λ-abstractions. We have investigated foundational and practical issues concerning a sound and complete computational framework for the cooperation of algebraic constraint domains. We have designed an improved higher-order instance CFLP(C) of the already existing generic scheme CFLP(D) for constraint functional logic programming over a higher-order coordination domain C, as well as a prototype implementation in the TOY system which supports the cooperation via bridges, projections, applications of functional variables, and polynomial interpolations on λ-abstractions.
In addition to the already mentioned works, an important related work in this area is the CFLP scheme developed by Mircea Marin in his PhD Thesis [7]. This work introduces CFLP(D, S, L), a family of languages parameterized by a constraint domain D, a strategy S which defines the cooperation of several constraint solvers over D, and a constraint lazy narrowing calculus L for solving constraints involving functions defined by user-given constrained rewrite rules. The main difference w.r.t. our approach is the lack of the declarative (model-theoretic and fixpoint) semantics provided by the rewriting logic CRWL(C) underlying our CFLP(C) instance (see the reference [14] for detailed information). In the future, we would like to overcome some of the limitations of our current approach to higher-order algebraic domain cooperation. For instance, the computational model should be generalized to allow for an arbitrary higher-order coordination domain C in place of the concrete choice M ⊕ λ ⊕ R ⊕ FD, and the implemented prototype should be properly developed, maintained, and improved in various ways. In particular, the experimentation with benchmarks and application cases should be further developed.
References
1. Estévez, S., Hortalá, M.T., Rodríguez, M., del Vado, R., Sáenz, F., Fernández, A.J.: On the cooperation of the constraint domains H, R, and FD in CFLP. Journal of Theory and Practice of Logic Programming 9(4), 415–527 (2009)
2. González, J.C., Hortalá, M.T., Rodríguez, M.: A higher-order rewriting logic for functional logic programming. In: Proc. ICLP 1997, pp. 153–167 (1997)
3. Hanus, M.: The Integration of Functions into Logic Programming: From Theory to Practice. Journal of Logic Programming 19&20, 583–628 (1994)
4. Hindley, J.R., Seldin, J.P.: Introduction to Combinators and λ-Calculus. Cambridge University Press, Cambridge (1986)
5. López, F.J., Sánchez, J.: TOY: A Multiparadigm Declarative System. In: Narendran, P., Rusinowitch, M. (eds.) RTA 1999. LNCS, vol. 1631, pp. 244–247. Springer, Heidelberg (1999). System and documentation available at http://toy.sourceforge.net
6. López, F.J., Rodríguez, M., del Vado, R.: A New Generic Scheme for Functional Logic Programming with Constraints. Journal of Higher-Order and Symbolic Computation 20(1-2), 73–122 (2007)
7. Marin, M.: Functional Logic Programming with Distributed Constraint Solving. PhD Thesis, Johannes Kepler Universität Linz (2000)
8. Miller, D.: A logic programming language with λ-abstraction, function variables, and simple unification. Journal of Logic and Computation 1(4), 497–536 (1991)
9. Nadathur, G., Miller, D.: An overview of λ-Prolog. In: Proc. Int. Conf. on Logic Programming (ICLP 1988), pp. 810–827. The MIT Press, Cambridge (1988)
10. Nipkow, T., Paulson, L.C., Wenzel, M.: Isabelle/HOL. LNCS, vol. 2283. Springer, Heidelberg (2002)
11. Prehofer, C.: Solving Higher-Order Equations. From Logic to Programming. Foundations of Computing. Birkhäuser, Boston (1998)
12. Suzuki, T., Nakagawa, K., Ida, T.: Higher-Order Lazy Narrowing Calculus: A Computation Model for a Higher-Order Functional Logic Language. In: Hanus, M., Heering, J., Meinke, K. (eds.) ALP 1997 and HOA 1997. LNCS, vol. 1298, pp. 99–113. Springer, Heidelberg (1997) 13. del Vado, R.: A Higher-Order Demand-Driven Narrowing Calculus with Definitional Trees. In: Jones, C.B., Liu, Z., Woodcock, J. (eds.) ICTAC 2007. LNCS, vol. 4711, pp. 169–184. Springer, Heidelberg (2007) 14. del Vado, R.: A Higher-Order Logical Framework for the Algorithmic Debugging and Verification of Declarative Programs. In: PPDP 2009, pp. 49–60. ACM, New York (2009)
Proving Termination Properties with mu-term
Beatriz Alarcón, Raúl Gutiérrez, Salvador Lucas, and Rafael Navarro-Marset
ELP group, DSIC, Universidad Politécnica de Valencia
Camino de Vera s/n, E-46022 Valencia, Spain
Abstract. mu-term is a tool which can be used to verify a number of termination properties of (variants of) Term Rewriting Systems (TRSs): termination of rewriting, termination of innermost rewriting, termination of order-sorted rewriting, termination of context-sensitive rewriting, termination of innermost context-sensitive rewriting and termination of rewriting modulo specific axioms. Such termination properties are essential to prove termination of programs in sophisticated rewriting-based programming languages. Specific methods have been developed and implemented in mu-term in order to efficiently deal with most of them. In this paper, we report on these new features of the tool.
1
Introduction
Handling typical programming language features such as sorts/types and subtypes, evaluation modes (eager/lazy), programmable strategies for controlling the execution, rewriting modulo axioms, and so on is outside the scope of many termination tools. However, such features can be very important to determine the termination behavior of programs. For instance, in Figure 1 we show a Maude [10] program encoding an order-sorted TRS which is terminating when the sorting information is taken into account but which is nonterminating as a TRS (i.e., disregarding sort information) [18]. The predicate is-even tests whether an integer number is even. When disregarding any information about sorts, the program EVEN is not terminating due to the last rule for is-even, which specifies a recursive call to is-even. However, when sorts are considered and the hierarchy among them is taken into account, such a recursive call is no longer possible, due to the need of binding the variable Y of sort NzNeg to an expression opposite(Y) of sort NzPos, which is not possible in the (sub)sort hierarchy given by EVEN. The notions coming from the already quite mature theory of termination of TRSs (orderings, reduction pairs, dependency pairs, semantic path orderings, etc.) provide a basic collection of abstractions for treating termination problems. For real programming languages, though, having appropriate adaptations, methods, and techniques for specific termination problems is essential. Giving support to multiple extensions of such classical termination notions is one of the main goals for developing a new version of our tool, mu-term 5.0: http://zenon.dsic.upv.es/muterm
Partially supported by EU (FEDER) and MICINN grant TIN 2007-68093-C02-02.
fmod EVEN is
  sorts Zero NzNeg Neg NzPos Pos Int Bool .
  subsorts Zero < Neg < Int .    subsorts NzNeg < Neg .
  subsorts Zero < Pos < Int .    subsorts NzPos < Pos .
  op 0 : -> Zero .               op is-even : Int -> Bool .
  op s : Pos -> NzPos .          op is-even : NzPos -> Bool .
  op p : Neg -> NzNeg .          op is-even : NzNeg -> Bool .
  ops true false : -> Bool .     op opposite : NzNeg -> NzPos .
  var X : Pos .                  var Y : NzNeg .
  eq opposite(p(0)) = s(0) .
  eq opposite(p(Y)) = s(opposite(Y)) .
  eq is-even(0) = true .
  eq is-even(s(0)) = false .
  eq is-even(s(s(X))) = is-even(X) .
  eq is-even(Y) = is-even(opposite(Y)) .
endfm
Fig. 1. Maude program
mu-term [23,2] was originally designed to prove termination of Context-Sensitive Rewriting (CSR, [21]), where reductions are allowed only on specific arguments μ(f) ⊆ {1, . . . , k} of the k-ary function symbols f in the TRS. In this paper we report on the new features included in mu-term 5.0, not only to improve its ability to prove termination of CSR but also to verify a number of other termination properties of (variants of) TRSs. In contrast to transformational approaches, which translate termination problems into a classical termination problem for TRSs, we have developed specific techniques to deal with termination of CSR, innermost CSR, order-sorted rewriting, and rewriting modulo specific axioms (associative or commutative) by using dependency pairs (DPs, [7]). Our benchmarks show that direct methods lead to simpler, faster and more successful proofs. Moreover, mu-term 5.0 has been rewritten to embrace the dependency pair framework [17], a recent formulation of the dependency pair approach which is specially well-suited for mechanizing proofs of termination.
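As a minimal illustration of the replacement-map restriction (a toy sketch of ours, not mu-term code), a single μ-rewriting step only descends into the argument positions that μ activates:

data T = V String | F String [T] deriving (Eq, Show)

type Mu = String -> [Int]   -- mu(f): active argument positions of f

-- One mu-rewriting step with a single ground rule (lhs, rhs): rewrite at
-- the root, or otherwise below some active argument position only.
muStep :: Mu -> (T, T) -> T -> Maybe T
muStep _ (l, r) t | t == l = Just r
muStep mu rule (F f ts) = go 1 ts
  where
    go _ [] = Nothing
    go i (u : us)
      | i `elem` mu f
      , Just u' <- muStep mu rule u = Just (F f (take (i - 1) ts ++ u' : drop i ts))
      | otherwise = go (i + 1) us
muStep _ _ _ = Nothing

With, e.g., mu "f" = [1], a redex sitting in the second argument of f is frozen: muStep never rewrites there, which is the restriction that the context-sensitive techniques below analyze.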
2
Structure and Functionality of mu-term 5.0
mu-term 5.0 consists of 47 Haskell modules with more than 19000 lines of code. A web-based interface and compiled versions for several platforms are available at the mu-term 5.0 web site. In the following, we describe its new functionalities.
2.1 Proving Termination of Context-Sensitive Rewriting
As in the unrestricted case [7], the context-sensitive dependency pairs (CSDPs, [3]) are intended to capture all possible function calls in infinite μ-rewrite sequences. In [2], even though our quite 'immature' CSDP approach was one of our major assets, mu-term still used transformations [15,25] and the context-sensitive recursive path ordering (CSRPO, [9]) in many termination proofs.
Since the developments in [2], many improvements and refinements have been made in dealing with termination proofs of CSR. The most important one has been the development of the context-sensitive dependency pair framework (CSDP framework, [3,20]) for mechanizing proofs of termination of CSR. The central notion regarding termination proofs is that of a CS problem; the central notion regarding mechanization of the proofs is that of a CS processor. Most processors in the standard DP framework [17] have been adapted to CSR, and many specific ones have been developed (see [3,20]). Furthermore, on the basis of the results in [28], we have implemented specific processors to prove the infiniteness of CS problems. Therefore, mu-term 5.0 is the first version of mu-term which is also able to disprove termination of CSR. In the following table, we compare the performance of mu-term 5.0 and the last reported version of the tool (mu-term 4.3 [2]) regarding their ability to prove termination of CSR over the context-sensitive category of the Termination Problem Data Base (TPDB, see http://termination-portal.org/wiki/TPDB; we have used version 7.0.2), which contains 109 examples. The results show the power of the new CSDP framework in mu-term 5.0, which not only solves more examples in less time, but does so without needing transformations or CSRPO.

Table 1. mu-term 4.3 compared to mu-term 5.0 in proving termination of CSR

Termination Tool   Total    Yes  No  CSDPs  CSRPO  Transf.  Average (sec)
mu-term 5.0        99/109   95   4   99     0      0        0.95s
mu-term 4.3        64/109   64   0   54     7      3        3.44s

2.2 Proving Termination of Innermost CSR
Termination of innermost CSR (i.e., the variant of CSR where only the deepest μ-replacing redexes are contracted) has been proved useful for proving termination of programs in eager programming languages like Maude and OBJ* which permit controlling the program execution by means of context-sensitive annotations. Techniques for proving termination of innermost CSR were first investigated in [14,22]. In these papers, though, the original CS-TRS (R, μ) is transformed into a TRS whose innermost termination implies the innermost termination of (R, μ). In [4], the dependency pair method [7] has been adapted to deal with termination proofs of innermost CSR. This is the first proposal of a direct method for proving termination of innermost CSR, and mu-term was the first termination tool able to deal with it. Our experimental evaluation shows that the use of innermost context-sensitive dependency pairs (ICSDPs) highly improves over the performance of transformational methods for proving termination of innermost CSR: innermost termination of 95 of the 109 considered CS-TRSs could be proved by using ICSDPs; in contrast, only 60 of the 109 could be proved by using (a combination of) transformations and then using AProVE [16] for proving the innermost
See http://termination-portal.org/wiki/TPDB We have used version 7.0.2 of the TPDB.
204
B. Alarc´ on et al.
termination of the obtained TRS. Another important aspect of innermost CSR is its use for proving termination of CSR as part of the CSDP framework [1]. Under some conditions, termination of CSR and termination of innermost CSR coincide [14,19]. We then switch from termination of CSR to termination of innermost CSR, for which we can apply the existing processors more successfully (see Section 2.6). Actually, we proceed like that in 30 − 50% of the CSR termination problems which are proved by mu-term 5.0 (depending on the particular benchmarks). 2.3
2.3 Proving Termination of Order-Sorted Rewriting
In order-sorted rewriting, sort information is taken into account to specify the kind of terms that function symbols can take as arguments. Recently, order-sorted dependency pairs have been introduced and proved useful for proving termination of order-sorted TRSs [26]. As a remarkable difference with respect to the standard approach, we can mention the notion of applicable rules, which are those rules that can eventually be used to rewrite terms of a given sort. Another important point is the use of order-sorted matching and unification. To our knowledge, mu-term 5.0 is the only tool which implements specific methods for proving termination of OS-TRSs³. Our benchmarks over the examples in the literature (there is no order-sorted category in the TPDB yet) show that the new techniques perform quite well. For instance, we can prove termination of the OS-TRS EVEN in Figure 1 automatically.

³ The Maude Termination Tool [12] implements a number of transformations from OS-TRSs into TRSs which can also be used for this purpose.
2.4 Proving Termination of A∨C-Rewriting
Recently, we have developed a suitable dependency pair framework for proving termination of A∨C-rewrite theories [5]. An A∨C-rewrite theory is a tuple R = (Σ, E, R) where E is a set containing associative or commutative axioms associated with function symbols of the signature Σ. We have implemented the techniques described in [5] in mu-term. Even with only a few processors implemented, mu-term behaves well in the equational category of the TPDB, solving 39 examples out of 71. Obviously, we plan to investigate and implement more processors in this field. This is not the first attempt to prove termination of rewriting modulo axioms: CiME [11] is able to prove AC-termination of TRSs, and AProVE is able to deal with termination of rewriting modulo equations satisfying some restrictions.
2.5 Use of Rational Polynomials and Matrix Interpretations
Proofs of termination with mu-term 5.0 rely heavily on the generation of polynomial orderings using polynomial interpretations with rational coefficients [24]. In this sense, recent improvements with respect to the previous versions of mu-term reported in [2,23] are the use of an autonomous SMT-based constraint solver for rational numbers [8] and the use of matrix interpretations over the reals [6]. Our benchmarks show that polynomials over the rationals are used in around 25% of the examples where a polynomial interpretation is required during the successful proof. Matrix interpretations are used in less than 4% of the proofs.
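For illustration only (a textbook rule, not one of the TPDB benchmarks), a polynomial interpretation orients a rule by assigning monotone polynomials to symbols and comparing the induced values of both sides. Taking

[s](x) = x + 1,   [double](x) = 3x

orients the rule double(s(x)) → s(s(double(x))), since [double(s(x))] = 3x + 3 > 3x + 2 = [s(s(double(x)))] for all x ≥ 0. Allowing rational coefficients such as 1/2 and 1/4 (the coefficient sets Q2 and Q4 used by the strategy in Section 2.6) enlarges the space of interpretations that the SMT-based solver of [8] can search.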
2.6 Termination Expert
In the (CS)DP framework, a strategy is applied to an initial (CSR, innermost CSR, ...) problem and returns a proof tree. This proof tree is later evaluated following a tree evaluation strategy (normally, breadth-first search). With small differences depending on the particular kind of problem, we proceed as follows:

1. We check the system for extra variables (at active positions) in the right-hand sides of the rules.
2. We check whether the system is innermost equivalent (see Section 2.2). If so, we transform the problem into an innermost one.
3. Then, we obtain the corresponding dependency pairs, obtaining a (CS)DP problem. And now, recursively:
   (a) Decision point between infiniteness processors and the strongly connected component (SCC) processor.
   (b) Subterm criterion processor.
   (c) Reduction triple (RT) processor with linear polynomials (LPoly) and coefficients in N2 = {0, 1, 2}.
   (d) RT processor with LPoly and coefficients in Q2 = {0, 1, 2, 1/2} and Q4 = {0, 1, 2, 3, 4, 1/2, 1/4} (in this order).
   (e) RT processor with simple mixed polynomials (SMPoly) and coefficients in N2.
   (f) RT processor with SMPoly and rational coefficients in Q2.
   (g) RT processor with 2-square matrices with entries in N2 and Q2.
   (h) Transformation processors (applied only twice to avoid nontermination of the strategy): instantiation, forward instantiation, and narrowing.
4. If the techniques above fail, then we use (CS)RPO.

The explanation of each processor can be found in [3,20]; a schematic rendering of this strategy is sketched below. Note also that all processors are new with respect to mu-term 4.3 [2].
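The following sketch renders the pipeline schematically (illustrative pseudocode of ours; mu-term is written in Haskell, and these processor objects are placeholders, not its API). A processor either does not apply or returns a list of subproblems, and the expert recurses until every subproblem is discharged.

# Schematic (CS)DP expert loop (illustrative; not mu-term's actual code).

class Processor:
    def __init__(self, name, apply_fn, is_transformation=False):
        self.name = name
        self.apply = apply_fn            # problem -> list of subproblems, or None
        self.is_transformation = is_transformation

def csrpo(problem):
    return False                         # stub for step 4: (CS)RPO fallback

def expert(problem, pipeline, transformations_left=2):
    """Try to prove a (CS)DP problem finite following the ordered pipeline."""
    if not problem:                      # no pairs left: trivially finite
        return True
    for proc in pipeline:
        if proc.is_transformation and transformations_left == 0:
            continue                     # transformations applied at most twice
        subproblems = proc.apply(problem)
        if subproblems is None:
            continue                     # processor does not apply
        left = transformations_left - (1 if proc.is_transformation else 0)
        if all(expert(p, pipeline, left) for p in subproblems):
            return True
    return csrpo(problem)                # step 4

never = lambda problem: None             # placeholder processors
PIPELINE = [
    Processor('infiniteness-or-SCC', never),                 # (a)
    Processor('subterm-criterion', never),                   # (b)
    Processor('RT-LPoly-N2', never),                         # (c)
    Processor('RT-LPoly-Q2-Q4', never),                      # (d)
    Processor('RT-SMPoly-N2', never),                        # (e)
    Processor('RT-SMPoly-Q2', never),                        # (f)
    Processor('RT-matrix-2x2', never),                       # (g)
    Processor('narrowing', never, is_transformation=True),   # (h)
]
print(expert(set(), PIPELINE))           # True: the empty problem is finite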
2.7 External Use of mu-term
The Maude Termination Tool⁴ (MTT [12]), which transforms proofs of termination of Maude programs into proofs of termination of CSR, uses mu-term's expert as an external tool to obtain the proofs. The context-sensitive and order-sorted features developed as part of mu-term 5.0 are essential to successfully handling Maude programs in MTT. The Knuth-Bendix completion tool mkbTT [29] is a modern completion tool that combines multi-completion with the use of termination tools. In its web version, the option to use mu-term as the external termination tool is available.

⁴ http://www.lcc.uma.es/~duran/MTT
3 Conclusions
We have described mu-term 5.0, a new version of mu-term with new features for proving different termination properties: termination of innermost CSR, termination of order-sorted rewriting, and termination of rewriting modulo (associative or commutative) axioms. Apart from that, a complete implementation of the CSDP framework [20] has been included in mu-term 5.0, leading to a much more powerful tool for proving termination of CSR. While transformations were used in mu-term 4.3, in mu-term 5.0 they are not used anymore. The research in the field has increased the number of examples which can be handled with CSDPs by 35 (see Table 1). Regarding proofs of termination of rewriting, from a collection of 1468 examples from the TPDB 7.0.2, mu-term 5.0 is able to prove (or disprove) termination of 835 of them. In contrast, mu-term 4.3 was able to deal with only 503. More details about these experimental results for all the termination properties discussed in the previous sections can be found here: http://zenon.dsic.upv.es/muterm/benchmarks/index.html
Thanks to the new developments reported in this paper, mu-term 5.0 has proven to be the most powerful tool for proving termination of CSR in the context-sensitive subcategory of the 2007, 2009, and 2010 editions of the International Competition of Termination Tools⁵. Moreover, in the standard subcategory, we obtained quite good results in the 2009 and 2010 editions, ranking third (among five tools) in the number of solved examples. We also participated in the innermost category in the 2009 and 2010 editions and in the equational category in 2010. Note also that mu-term 5.0 has a web interface that allows inexpert users to prove termination automatically by means of the 'automatic' option. This is very convenient for teaching purposes, for instance. And, apart from MTT, it is the only termination tool that accepts programs in OBJ/Maude syntax. Therefore, mu-term 5.0 is no longer a tool for proving termination of CSR only: it has evolved into a powerful termination tool able to prove a wide range of interesting termination properties of rewriting, with important applications to proving termination of programs in sophisticated rewriting-based programming languages like Maude or OBJ*.
⁵ See http://www.lri.fr/~marche/termination-competition/2007/, where only AProVE and mu-term participated, and http://termcomp.uibk.ac.at/termcomp/, where there were three more tools in the competition: AProVE, Jambox [13] (only in the 2009 edition), and VMTL [27]. AProVE and mu-term solved the same number of examples, but mu-term was much faster. The 2008 edition had only one participant: AProVE.
References

1. Alarcón, B.: Innermost Termination of Context-Sensitive Rewriting. Master's thesis, Departamento de Sistemas Informáticos y Computación, Universidad Politécnica de Valencia, Valencia, Spain (2008)
2. Alarcón, B., Gutiérrez, R., Iborra, J., Lucas, S.: Proving Termination of Context-Sensitive Rewriting with mu-term. Electronic Notes in Theoretical Computer Science 188, 105–115 (2007)
3. Alarcón, B., Gutiérrez, R., Lucas, S.: Context-Sensitive Dependency Pairs. Information and Computation 208, 922–968 (2010)
4. Alarcón, B., Lucas, S.: Termination of Innermost Context-Sensitive Rewriting Using Dependency Pairs. In: Konev, B., Wolter, F. (eds.) FroCos 2007. LNCS (LNAI), vol. 4720, pp. 73–87. Springer, Heidelberg (2007)
5. Alarcón, B., Lucas, S., Meseguer, J.: A Dependency Pair Framework for A∨C-Termination. In: Ölveczky, P. (ed.) WRLA 2010. LNCS, vol. 6381, pp. 35–51. Springer, Heidelberg (2010)
6. Alarcón, B., Lucas, S., Navarro-Marset, R.: Proving Termination with Matrix Interpretations over the Reals. In: Geser, A., Waldmann, J. (eds.) Proc. of the 10th International Workshop on Termination, WST 2009, pp. 12–15 (2009)
7. Arts, T., Giesl, J.: Termination of Term Rewriting Using Dependency Pairs. Theoretical Computer Science 236(1-2), 133–178 (2000)
8. Borralleras, C., Lucas, S., Navarro-Marset, R., Rodríguez-Carbonell, E., Rubio, A.: Solving Non-linear Polynomial Arithmetic via SAT Modulo Linear Arithmetic. In: Schmidt, R.A. (ed.) CADE 2009. LNCS, vol. 5663, pp. 294–305. Springer, Heidelberg (2009)
9. Borralleras, C., Lucas, S., Rubio, A.: Recursive Path Orderings can be Context-Sensitive. In: Voronkov, A. (ed.) CADE 2002. LNCS (LNAI), vol. 2392, pp. 314–331. Springer, Heidelberg (2002)
10. Clavel, M., Durán, F., Eker, S., Lincoln, P., Martí-Oliet, N., Meseguer, J., Talcott, C.: All About Maude - A High-Performance Logical Framework. LNCS, vol. 4350. Springer, Heidelberg (2007)
11. Contejean, E., Marché, C., Monate, B., Urbain, X.: Proving Termination of Rewriting with CiME. In: Rubio, A. (ed.) Proc. of the 6th International Workshop on Termination, WST 2003, pp. 71–73 (2003)
12. Durán, F., Lucas, S., Meseguer, J.: MTT: The Maude Termination Tool (System Description). In: Armando, A., Baumgartner, P., Dowek, G. (eds.) IJCAR 2008. LNCS (LNAI), vol. 5195, pp. 313–319. Springer, Heidelberg (2008)
13. Endrullis, J.: Jambox, Automated Termination Proofs for String and Term Rewriting (2009), http://joerg.endrullis.de/jambox.html
14. Giesl, J., Middeldorp, A.: Innermost Termination of Context-Sensitive Rewriting. In: Ito, M., Toyama, M. (eds.) DLT 2002. LNCS (LNAI), vol. 2450, pp. 231–244. Springer, Heidelberg (2003)
15. Giesl, J., Middeldorp, A.: Transformation Techniques for Context-Sensitive Rewrite Systems. Journal of Functional Programming 14(4), 379–427 (2004)
16. Giesl, J., Schneider-Kamp, P., Thiemann, R.: AProVE 1.2: Automatic Termination Proofs in the Dependency Pair Framework. In: Furbach, U., Shankar, N. (eds.) IJCAR 2006. LNCS (LNAI), vol. 4130, pp. 281–286. Springer, Heidelberg (2006)
17. Giesl, J., Thiemann, R., Schneider-Kamp, P., Falke, S.: Mechanizing and Improving Dependency Pairs. Journal of Automated Reasoning 37(3), 155–203 (2006)
18. Gnaedig, I.: Termination of Order-Sorted Rewriting. In: Kirchner, H., Levi, G. (eds.) ALP 1992. LNCS, vol. 632, pp. 37–52. Springer, Heidelberg (1992)
19. Gramlich, B., Lucas, S.: Modular Termination of Context-Sensitive Rewriting. In: Proc. of the 4th ACM SIGPLAN International Conference on Principles and Practice of Declarative Programming, PPDP 2002, pp. 50–61. ACM Press, New York (2002)
20. Gutiérrez, R., Lucas, S.: Proving Termination in the Context-Sensitive Dependency Pair Framework. In: Ölveczky, P.C. (ed.) WRLA 2010. LNCS, vol. 6381, pp. 18–34. Springer, Heidelberg (2010)
21. Lucas, S.: Context-Sensitive Computations in Functional and Functional Logic Programs. Journal of Functional and Logic Programming 1998(1), 1–61 (1998)
22. Lucas, S.: Termination of Rewriting With Strategy Annotations. In: Nieuwenhuis, R., Voronkov, A. (eds.) LPAR 2001. LNCS (LNAI), vol. 2250, pp. 666–680. Springer, Heidelberg (2001)
23. Lucas, S.: mu-term: A Tool for Proving Termination of Context-Sensitive Rewriting. In: van Oostrom, V. (ed.) RTA 2004. LNCS, vol. 3091, pp. 200–209. Springer, Heidelberg (2004)
24. Lucas, S.: Polynomials over the Reals in Proofs of Termination: from Theory to Practice. RAIRO Theoretical Informatics and Applications 39(3), 547–586 (2005)
25. Lucas, S.: Proving Termination of Context-Sensitive Rewriting by Transformation. Information and Computation 204(12), 1782–1846 (2006)
26. Lucas, S., Meseguer, J.: Order-Sorted Dependency Pairs. In: Antoy, S., Albert, E. (eds.) Proc. of the 10th International ACM SIGPLAN Symposium on Principles and Practice of Declarative Programming, PPDP 2008, pp. 108–119. ACM Press, New York (2008)
27. Schernhammer, F., Gramlich, B.: VMTL - A Modular Termination Laboratory. In: Treinen, R. (ed.) RTA 2009. LNCS, vol. 5595, pp. 285–294. Springer, Heidelberg (2009)
28. Thiemann, R., Sternagel, C.: Loops under Strategies. In: Treinen, R. (ed.) RTA 2009. LNCS, vol. 5595, pp. 17–31. Springer, Heidelberg (2009)
29. Winkler, S., Sato, H., Middeldorp, A., Kurihara, M.: Optimizing mkbTT. In: Lynch, C. (ed.) Proc. of the 21st International Conference on Rewriting Techniques and Applications, RTA 2010. Leibniz International Proceedings in Informatics (LIPIcs), vol. 6, pp. 373–384. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik (2010), http://drops.dagstuhl.de/opus/volltexte/2010/2664
BAL Tool in Flexible Manufacturing Systems

Diego Pérez Leándrez, M. Carmen Ruiz, J. Jose Pardo, and Diego Cazorla

Escuela Superior de Ingenieria Informatica, Univ. de Castilla-La Mancha, Campus Univ. s/n, 02071 Albacete, Spain
{Diego.Perez2,MCarmen.Ruiz,Juan.Pardo,Diego.Cazorla}@uclm.es
Abstract. A tool for performance evaluation, named BAL, is presented in this paper. The tool is based on the timed process algebra BTC, which also incorporates resources. BAL first performs the syntactic analysis of the system specification, then builds the corresponding transition graph by applying the rules of the operational semantics, and finally solves a performance optimization problem, namely maximizing throughput. The usage of the BAL tool is illustrated on a Flexible Manufacturing System (FMS) to find a suitable number of resources/machines that maximizes system throughput.

Keywords: Tool, Performance Evaluation, Available Resources, Timed Process Algebra, Flexible Manufacturing Systems.

* This work has been partially supported by CICYT project TIN2009-14312-C02-02 and JCCM project PEII09-0232-7745.
1 Motivation
Time-saving methods and increased performance are currently two of the overriding concerns in performance evaluation. Not so long ago, these features were considered important only in critical systems, but now their relevance has become significant in all systems. The aim of obtaining the best running time using the minimum amount of resources goes without question, and it would be extremely beneficial to have a tool (in this case, a tool based on formal methods) that is able to calculate the time needed to reach a certain state (or the final state for the total time), taking into account the time actions need for their execution, the time invested in process synchronizations, and the fact that the (limited amount of) resources have to be shared by all the system processes, which can cause delays when resources are not available. Bearing in mind these factors, we have developed a timed process algebra called BTC (for Bounded True Concurrency) [RCC+04], which allows us to evaluate the performance of a system, as well as a tool called BAL (not an acronym) which is able to carry out this task automatically. The BAL tool can be found at http://www.dsi.uclm.es/retics/bal. BTC is based on CSP [Hoa85] and extends CSP syntax in order to consider the duration of actions by means of a timed prefix operator. Likewise, the operational
semantics has also been extended to consider the context (resources) in which processes are executed. BTC is able to take into account the fact that system resources must be shared by all the processes. So, if there are more processes than resources, then not all of them can be executed simultaneously. A process has to wait until it allocates the resources needed to continue its execution. This means that we can find two kinds of delays in the execution of a process: delays related to the synchronization of processes, and delays related to the allocation of resources. The former is usual in a (theoretical) concurrent context, but the latter only arises if we consider a limited amount of available resources (a toy numeric illustration of this second kind of delay closes this section). The BAL tool is able to perform the syntactic analysis of systems specified in BTC, build the corresponding transition graph by applying the rules of the operational semantics, and calculate the minimum time needed to reach the final state from the initial one. This evaluation made by BAL can be used in two different ways. On the one hand, if we have a fixed number of resources, we can ascertain the time needed to evolve from the initial state to the final one (or between states), so we can check different configurations for a system before using it, thus saving an immense amount of time and money. On the other hand, if we start with some specification, we can find the appropriate number of resources of any type that we need in order to fulfil some time requirements. We can even calculate the minimum number of resources needed to obtain the best performance from a system. Apart from the usefulness of the analysis that BAL performs, the main advantage of this tool is that it is capable of dealing with real-world systems of a considerable size. This has been achieved through painstaking work combining different implementation strategies. In this paper, the BAL tool is first presented. Secondly, BAL is used in a case study which highlights how BAL works and how the results obtained can be useful in Flexible Manufacturing Systems (FMS). Finally, conclusions and outlines for current and future work are presented.
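As a toy numeric rendering of the resource-allocation delay mentioned above (our illustration, not part of BTC, whose delays are defined by its operational semantics): two independent actions of duration 5 end at time 10 when they compete for a single resource, but at time 5 when two resources are available.

import heapq

# Greedy schedule of independent actions on identical shared resources.
def makespan(durations, resources):
    free = [0] * resources            # instant at which each resource is free
    heapq.heapify(free)
    end = 0
    for d in durations:
        start = heapq.heappop(free)   # wait until some resource is released
        end = max(end, start + d)
        heapq.heappush(free, start + d)
    return end

print(makespan([5, 5], 1))            # 10: the second action waits
print(makespan([5, 5], 2))            # 5: enough resources, true concurrency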
2 BAL Tool
With this formal resource-aware model, the timed characteristics of the system are captured. The next step is concerned with carrying out performance evaluation: we want to be able to estimate the minimum time needed to reach a given state. By applying the rules of the operational semantics, we build a transition graph where we can abstract away the information about actions and consider only the information about time (duration of actions). This graph is a weighted directed graph, where weights are always positive numbers, and the problem to solve is finding the shortest path from the initial node (a standard sketch of this computation is given after Figure 1). Usually, the number of states in the transition graph for a real system is huge, so the ability of a tool to perform this task automatically becomes relevant. With this idea in mind, the
BAL tool has been developed; moreover, it has been improved with some other useful features: a syntactic analyzer and a graphical interface with a user assistant. Thus, the BAL tool can be divided into three stages. First, a syntactic analyzer checks the specification of the system to be studied. If the system specification is correct, the second stage starts: the tool builds the relevant transition graph by applying the rules of the operational semantics. Lastly, the minimum time needed to reach the final state from the initial one is calculated and the path obtained is shown. Evidently, the most delicate part of the tool's development has been the graph analysis, where it was necessary to combine different algorithms for pruning, dynamic load balancing, and parallelization. This is so since one of the main aims is for the BAL tool to be able to deal with real systems and not be limited to toy examples. A general view of the BAL tool can be found in Figure 1.
Fig. 1. Structure of the BAL tool (the diagram shows the specification wizard and the system specification file feeding the syntax analyzer, which checks the BTC syntax and reports syntax errors; the graph generator, driven by the BTC operational semantics; and the performance evaluator, which applies branch-and-bound techniques, a dynamic load balancing scheme, and parallel and grid computing to produce the results)
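The core of the performance evaluator is the shortest-path computation over the transition graph described above. A minimal sketch follows (a textbook Dijkstra search, our illustration; BAL's actual analyzer layers branch-and-bound pruning, load balancing, and parallel/grid computation on top of it).

import heapq

# Textbook Dijkstra sketch: nodes are system states, edge weights are the
# (positive) durations labelling transitions, and we look for the minimum
# accumulated time from the initial state to a final one.
def min_time(graph, initial, final):
    # graph: dict mapping a state to a list of (duration, successor) pairs
    queue = [(0, initial)]
    best = {initial: 0}
    while queue:
        time, state = heapq.heappop(queue)
        if state == final:
            return time
        if time > best.get(state, float('inf')):
            continue                          # stale queue entry
        for duration, succ in graph.get(state, ()):
            t = time + duration
            if t < best.get(succ, float('inf')):
                best[succ] = t
                heapq.heappush(queue, (t, succ))
    return None                               # final state unreachable

g = {'s0': [(2, 's1'), (5, 's2')], 's1': [(1, 's2')], 's2': []}
print(min_time(g, 's0', 's2'))                # 3: via s1, not the direct edge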
2.1 Graphical Interface and Assistant
Besides being effective and efficient, the BAL tool should also be user-friendly, and to that end BAL includes a graphical interface and an assistant. The BAL
tool can thus be executed interactively in the command line or by making use of the graphical interface. Working in text mode, it is necessary to supply a file with the specification of the system to be analyzed as an argument to the command BAL. The graphical interface requires little or no explanation because our standardization of the tools makes it easily recognisable for the user, and the new functions are also quite intuitive. More attention needs to be given to the assistant, which has been developed to make specification tasks easier and avoid syntax errors whenever possible. This assistant consists of two main parts:

– Resource Assistant: This helps to set the number of shared resources of any type in the system and the sets Zi of actions that need these resources during their execution. Depending on the type of resource, the type of the action will be different. For preemptable resources the actions will be timed actions, while for non-preemptable ones the actions belonging to the set Zi will be special actions to request and release the resource.
– Process Assistant: When the actions that make up the process must be supplied, it is easy to make a syntax error, so the assistant deals with the syntax details and the user just needs to give the action name and its type.
3 Industrial Case Study - Flexible Manufacturing System
A field where BTC has proved to be useful is the performance evaluation of Flexible Manufacturing Systems (FMS). A competitive manufacturing system is expected to be flexible enough to respond to small batches of customer demand. Since the construction of any new production line requires a large investment, current production lines must be able to be reconfigured to keep up with the increased frequency of new product designs. Therefore, in the development of an FMS, particular attention must be paid to the design phase, where the performance of the system has to be accurately evaluated so that the most appropriate number and type of resources are selected from the early stages of the design. In this context, the BAL tool is of great utility, as it allows us to check different configurations for an FMS before it is set up, or for future improvements. Several FMS have been evaluated by means of BAL; in this section we present a real system set in a local industry (Metalurgica y Plasticos de la Mancha, S.A. [MPM]). In this system (Figure 3), pieces made up of two different parts (A and B) are created. These parts must be created by injecting the metal, filing it, vibrating or polishing it, and then finally assembling it to form the final piece that will be packed for its distribution. In this system we have seventeen types of resources, i.e., seven machines (M1-M7), two buffers (Buf1, Buf2), four conveyors (C1-C4) and four robots (R1-R4). The resources that model the machines are preemptable, and the resources robot, buffer and conveyor are non-preemptable. For each we define a set consisting of the actions that need (at least) one resource of this type for their execution. For the preemptable resources (M1-M7) we define:
Z1 = {inject A}   Z2 = {inject B}   Z3 = {file}      Z4 = {vibrate}
Z5 = {polish}     Z6 = {assemble}   Z7 = {pack}
which include the actions executed in each machine. The actions that need the non-preemptable resources define the following sets:

Z8 = {r b1}    Z9 = {r b2}
Z10 = {r c1}   Z11 = {r c2}   Z12 = {r c3}   Z13 = {r c4}
Z14 = {r r1}   Z15 = {r r2}   Z16 = {r r3}   Z17 = {r r4}
The number of resources of each type is defined as:

N = {2, 4, 2, 2, 2, 4, 2, 8, 9, 12, 12, 1, 1, 1, 2, 2, 1}

The BTC specification for this system is shown in Figure 2; a plain-data rendering of the resource model follows the figure.
[[sys manu]]Z,N ≡ [[< new, 2 > .GEN A  < new, 2 > .GEN B  ASSEMBLE]]Z,N

GEN A ≡ < r r1 > . < mov in m1, 2 > . < r r1 > . (PROC A  < new, 2 > .GEN A)
GEN B ≡ < r r1 > . < mov in m2, 2 > . < r r1 > . (PROC B  < new, 2 > .GEN B)
PROC A ≡ < inject A, 5 > . < r b1 > . < r r2 > . < mov m1 b1, 2 > . < r r2 > .
  < r r2 > . < mov b1 m4, 2 > . < r b1 > . < r r2 > . < vibrate, 4 > . syn A . < r c1 >
PROC B ≡ < inject B, 7 > . < r c3 > . < file, 3 > . < r c3 > . < r b2 > . < r r3 > .
  < mov m3 b2, 2 > . < r r3 > . < r r3 > . < mov b2 m5, 2 > . < r b2 > . < r r3 > .
  < polish, 8 > . syn B . < r c2 >
ASSEMBLE ≡ (< syn A > < syn B >) . < assemble, 3 > . < r c4 > . ASSEMBLE
  (< r c4 > . (< pack, 7 > . < r r4 > . < mov m7 out, 2 > . < r r4 >))

Fig. 2. Specification for the system
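For reference, the action sets Z1-Z17 and the capacity vector N above can also be rendered as plain data (a hypothetical encoding of ours; BAL itself takes a BTC specification file as input).

# Hypothetical plain-data encoding of the FMS resource model (names follow the
# paper; BAL reads this information from the BTC specification instead).

action_sets = {
    'Z1': {'inject A'}, 'Z2': {'inject B'}, 'Z3': {'file'}, 'Z4': {'vibrate'},
    'Z5': {'polish'}, 'Z6': {'assemble'}, 'Z7': {'pack'},
    'Z8': {'r b1'}, 'Z9': {'r b2'},
    'Z10': {'r c1'}, 'Z11': {'r c2'}, 'Z12': {'r c3'}, 'Z13': {'r c4'},
    'Z14': {'r r1'}, 'Z15': {'r r2'}, 'Z16': {'r r3'}, 'Z17': {'r r4'},
}
N = [2, 4, 2, 2, 2, 4, 2, 8, 9, 12, 12, 1, 1, 1, 2, 2, 1]
capacity = dict(zip(action_sets, N))   # e.g. capacity['Z14'] == 1: a single R1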
The performance evaluation of this system has been carried out using the BAL tool. Now, using the results obtained by the BAL tool, we can work in either of two ways: we can find the minimum number of resources needed to obtain a required system performance, or we can evaluate the performance of an established system to investigate possible improvements. In this example, we already have an established system, which we study to see whether it can be improved by achieving a better completion time, i.e., a larger number of completed parts per unit of time (throughput). Different configurations (decreasing/increasing the number of robots, conveyor capacity, and so on) have been studied, and the conclusions are the following. After the first system evaluation, a throughput of 0.137 is obtained, which means that 13.70 pieces per 100 time units are produced. Observing the behaviour of the system, we discovered that the R1 robot becomes a bottleneck, since there is a unique robot of this type and it must provide raw pieces to 6 machines (2 M1 + 4 M2). The system is analyzed again, now with 2 R1 robots, and surprisingly no improvement is observed; in fact, a throughput of 0.119 is obtained, which is inferior to the previous one. This problem improves partially when increasing the number of R1 robots up to 4, but the resulting data (throughput 0.14) are not significant enough to be worth increasing the number of robots this far. The problem arises because the storage capacity of the C3 conveyor is not prepared for this modification. By changing this capacity (N = {2, 4, 2, 2, 2, 4, 2, 8, 9, 12, 12, 10, 1, 2, 2, 2, 1}), an important improvement is achieved, 23 pieces per 100 time units being obtained (throughput 0.23). Thus, we propose using 2 R1 robots and increasing the C3 conveyor capacity up to 50 elements. Another important conclusion is that with an extra M7 machine the system produces twenty percent more parts per time unit.

Fig. 3. Flexible Manufacturing System
4 Conclusions and Future Work
In this work we have presented the BAL tool, which has been developed to carry out performance evaluation. Besides this main goal, the BAL tool has been enhanced with a syntactic analyzer and a graphical interface, including an assistant that helps
when the system specification is written. Therefore, the BAL tool consists of the following parts:

– Graphical interface
– Assistant for system specification
– Syntax analyzer
– Graph generator
– Performance evaluator
A case study has been presented to show how the results obtained by BAL can be used: with a fixed number of resources, we can ascertain the time needed to evolve from the initial state to the final one, so we can check different configurations for a system before using it, hence saving a huge amount of time and money. We can even calculate the minimum number of resources needed to obtain the best performance from a system. Currently, our work is focused on two different lines: the first one is relative to improving the tool, and the second one focuses on its application. As stated above, the tool uses parallel and grid computers with distributed memory, but with the recent improvements in hardware architecture, where multicore machines are a reality, we have decided to take advantage of such architectures and use threads that communicate by means of shared memory. The main field where BTC is proving its worth is the performance evaluation of FMS. A competitive manufacturing system is expected to be flexible enough to respond to small batches of customer demand. Since the construction of any new production line is a large investment, current production lines must be able to be reconfigured to keep up with the increased frequency of new product designs. Therefore, in the development of an FMS, particular attention must be paid to the design phase, where the performance of the system has to be accurately evaluated so that the most appropriate number and type of resources are selected from the initial design stages. In this context, BTC and BAL are of great utility given that they allow us to check different configurations for a flexible manufacturing system before it is established, or for future improvements.
References

[Hoa85] Hoare, C.A.R.: Communicating Sequential Processes. Prentice-Hall, Englewood Cliffs (1985)
[MPM] Metalurgica y Plasticos de la Mancha S.A., http://www.mpm-ab.com
[RCC+04] Carmen Ruiz, M., Cazorla, D., Cuartero, F., Pardo, J.J., Maciá, H.: A Bounded True Concurrency Process Algebra for Performance Evaluation. In: Núñez, M., Maamar, Z., Pelayo, F.L., Pousttchi, K., Rubio, F. (eds.) FORTE 2004. LNCS, vol. 3236, pp. 143–155. Springer, Heidelberg (2004)
A Complete Declarative Debugger for Maude

Adrián Riesco, Alberto Verdejo, and Narciso Martí-Oliet

Facultad de Informática, Universidad Complutense de Madrid, Spain
[email protected], {alberto,narciso}@sip.ucm.es
Abstract. We present a declarative debugger for Maude specifications that allows the user to debug wrong answers (a wrong result is obtained) and missing answers (a correct but incomplete result is obtained) due to both wrong and missing statements and to wrong search conditions. The debugger builds a tree representing the computation and guides the user through it to find the bug. We present the debugger's latest commands and features, illustrating its use with several examples.

Keywords: Declarative debugging, Maude, missing answers, wrong answers.

* Research supported by MICINN Spanish project DESAFIOS10 (TIN2009-14599-C03-01) and Comunidad de Madrid program PROMETIDOS (S2009/TIC-1465).
1 Introduction

Declarative debugging [8] is a semi-automatic debugging technique that focuses on results, which makes it especially suited for declarative languages, whose operational details may be hard to follow. Declarative debuggers represent the computation as a tree, called the debugging tree, where each node must be logically inferred from the results in its children. In our case, these trees are obtained by abbreviating proof trees obtained in a formal calculus [4,5]. Debugging progresses by asking questions related to the nodes of this tree (i.e., questions related to subcomputations of the wrong result being debugged) to an external oracle (usually the user), discarding nodes depending on the answers until a buggy node—a node with an erroneous result and with all its children correct—is located. Since each node in the tree has an associated piece of code, when this node is found, the bug, either a wrong or a missing statement,¹ is also found. Maude [2] is a declarative language based on both equational and rewriting logic for the specification and implementation of a whole range of models and systems. Functional modules define data types and operations on them by means of membership equational logic theories that support multiple sorts, subsort relations, equations, and assertions of membership in a sort, while system modules specify rewrite theories that also support rules, defining local concurrent transitions that can take place in a system. As a programming language, a distinguishing feature of Maude is its use of reflection, which allows many metaprogramming applications. Moreover, the debugger is implemented on top of Full Maude [2, Chap. 18], a tool completely written in Maude which
includes features for parsing and pretty-printing terms, improving the input/output interaction. Thus, our declarative debugger, including its user interactions, is implemented in Maude itself. We extend here the tool presentation in [7], based on [1,4], for the debugging of wrong answers (wrong results, which correspond in our case to erroneous reductions, sort inferences, and rewrites) with the debugging of missing answers (incomplete results, which correspond here to not completely reduced normal forms, greater than expected least sorts, and incomplete sets of reachable terms), showing how the theory introduced in [5,6] is applied. The reasons the debugger is able to attribute to these errors are wrong and missing statements and, since missing answers in system modules are usually found with the search command [2, Chap. 6] that performs a reachability analysis, wrong search conditions. With this extension we are able to present a state-of-the-art debugger with several options to build, prune, and traverse the debugging tree. Following the classification in [9], these are the main characteristics of our debugger. Although we have not implemented all the possible strategies to shorten and navigate the debugging tree, like the latest tree compression technique or the answers "maybe yes," "maybe not," and "inadmissible," our trees are abbreviated (in our case, since we obtain the trees from a formal calculus, we are able to prove the correctness and completeness of the technique), statements can be trusted in several ways, a correct module can be used as oracle, undo and don't know commands are provided, the tree can be traversed with different strategies, and a graphical interface is available. Furthermore, we have developed a new technique to build the debugging tree: before starting the debugging process, the user can choose between different debugging trees, one that leads to shorter sessions (from the point of view of the number of questions) but with more complex questions, and another one that presents longer, although easier, sessions. We have successfully applied this approach to both wrong and missing answers. Complete explanations about our debugger, including a user guide [3] that describes the graphical user interface, the source files for the debugger, examples, and related papers, are available on the webpage http://maude.sip.ucm.es/debugging.

¹ It is important not to confuse wrong and missing answers with wrong and missing statements. The former are the initial symptoms indicating that the specification fails, while the latter is the error that generated this misbehavior.
2 Using the Debugger

We first make explicit what is assumed about the modules introduced by the user; then we present the newly available commands.

Assumptions. A rewrite theory has an underlying equational theory, containing equations and memberships, which is expected to satisfy the appropriate executability requirements, namely, it has to be terminating, confluent, and sort decreasing. Rules are assumed to be coherent with respect to the equations; for details, see [2]. The tool allows shortening the debugging trees in several ways: statements and complete modules can be trusted, a correct module can be used as an oracle, constructed terms (terms built only with constructors, indicated with the attribute ctor) are considered to be in normal form, and constructed terms of some sorts or built with some operators can be pointed out as final (they cannot be further rewritten). This information, as well
as the answers given during the debugging process and the signature of the module, is assumed to be correct.

Commands. The debugger is started by loading the file dd.maude, which starts an input/output loop that allows the user to interact with the tool. Since the debugger is implemented on top of Full Maude, all modules and commands must be introduced enclosed in parentheses. Debugging of missing answers uses all the features already described for wrong answers: use of a correct module as oracle, trusting of statements and modules, and different types of debugging tree; see [3,7] for details about the corresponding commands. When debugging missing answers we can select some sorts as final (i.e., they cannot be further rewritten) with (final [de]select SORTS .), which only works once the final mode has been activated with (set final select on/off .). When debugging missing answers in system modules, two different trees can be built: one whose questions are related to one-step searches and another whose questions are related to many-steps searches; the user can switch between these trees with the commands (one-step missing-tree .), which is the default one, and (many-steps missing-tree .), taking into account that the many-steps debugging tree usually leads to shorter debugging sessions but with likely more complicated questions. The user can prioritize questions related to the fulfillment of the search condition over questions involving the statements defining it with (solutions prioritized on/off .). The debugging tree can be navigated by using two different strategies: the more intuitive top-down strategy, which traverses the tree from the root asking each time for the correctness of all the children of the current node, and then continues with one of the incorrect children; and the more efficient divide and query strategy, which each time selects the node whose subtree's size is closest to half the size of the whole tree, discarding the whole subtree if the inference in this node is correct and continuing the process with this subtree otherwise. The user can switch between them with the commands (top-down strategy .) and (divide-query strategy .), divide and query being the default strategy. The debugging process is started with:

(missing [in MODULE-NAME :] INIT-TERM -> NF .)
(missing [in MODULE-NAME :] INIT-TERM : LS .)
(missing [[depth]] [in MODULE-NAME :] INIT-TERM ARROW PATTERN [s.t. COND] .)
The first command debugs normal forms, where INIT-TERM is the initial term and NF is the obtained unexpected normal form. Similarly, the second command starts the debugging of incorrect least sorts, where LS is the computed least sort. The last command refers to incomplete sets found when using search, where depth indicates the bound on the number of steps, which is considered unbounded when omitted; ARROW is =>* for searches in zero or more steps, =>+ for searches in one or more steps, and =>! for final terms; and COND is the optional condition to be fulfilled by the results. Finally, when no module name is specified in a command, the default one is used.
When the divide and query strategy is selected, one question, which can be either correct or wrong (w.r.t. the intended behavior the user has in mind), is presented in each step. The different answers are transmitted with the commands (yes .) and (no .). Instead of just answering yes, we can also trust some statements on the fly if, once the process has started, we decide the bug is not there. To trust the current statement we type the command (trust .). If a question refers to a set of reachable terms and one of these terms is not reachable, the user can point it out with the answer (I is wrong .), where I is the index of the wrong term in the set; in case the question is related to a set of reachable solutions, if one of the terms should not be a solution the user can indicate it with (I is not a solution .). Information about final terms can also be given on the fly with (its sort is final .), which indicates that the least sort of the term currently displayed is final. If the current question is too complicated, it can be skipped with the command (don't know .), although this answer can, in general, introduce incompleteness into the debugging process. When the top-down strategy is used, several questions are displayed in each step. The answer in this case is transmitted with (N : ANSWER), where N is the number of the question and ANSWER the answer the user would give in the divide and query strategy. As a shortcut to answer yes to all nodes, the tool also provides the answer (all : yes .). Finally, we can return to the previous state by using the command (undo .).

Graphical User Interface. The graphical user interface allows the user to visualize the debugging tree and navigate it with more freedom. More advanced users can take advantage of visualizing the whole tree to select the questions that optimize the debugging process (keeping in mind that the user is looking for incorrect nodes with all their children correct, he can, for example, search for erroneous nodes with few children or even erroneous leaves), while average users can just follow the implemented strategies and answer the questions in a friendlier way. Moreover, the interface eases the trusting of statements by providing information about the statements in each module and the subsort relations between sorts; provides three different navigation strategies (top-down, divide and query, and free); and implements two different modes to modify the tree once the user introduces an answer: in the prune mode only the minimum amount of relevant information is depicted in each step, while in the keep mode the complete tree is kept through the whole debugging process, coloring the nodes depending on the information given by the user. The first mode focuses on the debugging process, while the second one allows the user to see how different answers modify the tree.
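To make the divide and query selection rule described earlier in this section concrete, here is a small sketch (our rendering in Python; the debugger itself is implemented in Maude). It picks the question whose subtree size is closest to half of the whole debugging tree, so that either answer discards roughly half of the remaining nodes.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def size(n):
    return 1 + sum(size(c) for c in n.children)

def next_question(root):
    # Select the non-root node whose subtree size is closest to half the tree.
    total, best, best_gap = size(root), None, float('inf')
    stack = list(root.children)
    while stack:
        n = stack.pop()
        gap = abs(size(n) - total / 2)
        if gap < best_gap:
            best, best_gap = n, gap
        stack.extend(n.children)
    return best

tree = Node('root', [Node('a', [Node('b'), Node('c')]), Node('d')])
print(next_question(tree).label)      # 'a': its subtree holds 3 of 5 nodes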
3 Debugging Sessions

We illustrate how to use the debugger with two examples: the first one shows how to debug an erroneous normal form due to a missing statement, and the second one how to debug an incomplete set of reachable terms due to a wrong statement. Complete details about both examples are available on the webpage.

Example 1. We want to implement a function that, given an initial city (built with the operator city, which receives a natural number as argument), a number of cities, and a
cost graph (i.e., a partial function from pairs of cities to natural numbers indicating the cost of traveling between cities), returns a tour around all the cities that finishes in the initial one. First, we specify a priority queue where the states of this function will be kept:

sorts Node PNodeQueue .
subsort Node < PNodeQueue .
op node : Path Nat -> Node [ctor] .
op mtPQueue : -> PNodeQueue [ctor] .
op _ _ : PNodeQueue PNodeQueue -> PNodeQueue [ctor assoc id: mtPQueue] .
where a Path is just a list of cities. We use this priority queue to implement the function travel, in charge of computing this tour:

op travel : City Nat Graph -> TravelResult .
ceq travel(C, N, G) = travel(C, N, G, R, node(C, 0))
 if R := greedyTravel(C, N, G) .
This function uses an auxiliary function travel that, in addition to the parameters above, receives a potential best result, created with the operator result and computed with the greedy function greedyTravel, and the initial priority queue, which only contains the node node(C, 0). This auxiliary function is specified with the equations:

ceq [tr1] : travel(C, N, G, R, ND PQ) = travel(C, N, G, R, PQ')
 if not isResult(ND, N) /\ N' := getCost(R) /\ N'' := getCost(ND)
    /\ N'' < N' /\ PQ' := expand(ND, N, G, PQ) .
ceq [tr2] : travel(C, N, G, R, node(P C', N') PQ)
          = travel(C, N, G, result(P C' C, UB), PQ)
 if isResult(node(P C', N'), N) /\ N'' := getCost(R)
    /\ UB := N' + (G [road(C, C')]) /\ UB < N'' .
ceq [tr3] : travel(C, N, G, R, node(P C', N') PQ) = travel(C, N, G, R, PQ)
 if isResult(node(P C', N'), N) /\ N'' := getCost(R)
    /\ UB := N' + (G [road(C, C')]) /\ UB >= N'' .
ceq [tr4] : travel(C, N, G, R, ND PQ) = R
 if N' := getCost(R) /\ N'' := getCost(ND) /\ N'' >= N' .

However, we forget to specify the equation that returns the result when the queue is empty:

eq [tr5] : travel(C, N, G, R, emptyPQueue) = R .
Without this equation, Maude is not able to compute the desired normal form in an example with 4 cities. To debug it, we first consider that GTRAVELER, the module defining the greedy algorithm, is correct and can be trusted:
Maude> (set debug select on .)
Debug select is on.

Maude> (debug exclude GTRAVELER .)
Labels cheap1 cheap2 cheap3 gc1 gc2 gc3 gt1 gt2 in1 in2 in3 mi1 mi2 sz1 sz2
are now trusted.
The command indicates that all the statements in the GTRAVELER module, whose labels are shown, are now trusted (unlabeled statements are trusted by default). Now, we debug a wrong reduction with the following command, where the term on the left of the arrow is the initial term we tried to reduce, the term on the right is the obtained result, and G abbreviates the cost graph: (missing travel(city(0), 3, generateCostMatrix(3)) -> travel(city(0),3,G, result(city(0)city(3)city(1)city(2)city(0),15),emptyPQueue) .)
With this command the debugger builds the debugging tree, that is navigated with the default divide and query strategy. The first question is: Is travel(city(0), 3, G, result(city(0) city(3) city(1) city(2) city(0), 15), emptyPQueue) in normal form? Maude> (no .)
Since we expected travel to be reduced to a result, this term is not in normal form. The next question is: Is PNodeQueue the least sort of emptyPQueue ? Maude> (yes .)
In fact, emptyPQueue—the empty priority queue—has as least sort PNodeQueue, the sort for priority queues. With this information the debugger locates the error, indicating that either another equation is required for the operator travel or that the conditions in the current equations are wrong: The buggy node is: travel(city(0), 3, G, result(city(0) city(3) city(1) city(2) city(0), 15), emptyPQueue) is in normal form. Either the operator travel needs more equations or the conditions of the current equations are not written in the intended way.
When a missing statement is detected, the debugger indicates that either a new statement is needed or the conditions can be changed to allow more matchings (e.g., the user can change N > 2 by N >= 2). Note that the error may be located in the conditions but not in the statements defining them, since they are also checked during the debugging process. Example 2. Assume now we have specified the district in charge of a firefighters brigade. Buildings are represented by their street, avenue (New York alike coordinates), time to collapse (all of them natural numbers), and status, that can be ok or fire. When the status is fire the time to collapse is reduced until it reaches 0, when the fire cannot be extinguished, and the fire can be propagated to the nearest buildings:
sorts Building Neighborhood Status FFDistrict .
subsort Building < Neighborhood .
ops ok fire : -> Status [ctor] .
op < av :_, st :_, tc :_, sts :_> : Nat Nat Nat Status -> Building [ctor] .

The neighborhood is built with the empty neighborhood and the juxtaposition operator:

op empty : -> Neighborhood [ctor] .
op _ _ : Neighborhood Neighborhood -> Neighborhood [ctor assoc comm id: empty] .

Finally, the district contains the buildings in the neighborhood and two natural numbers indicating the avenue and the street where the firefighters are currently located:

op [_]_ _ : Neighborhood Nat Nat -> FFDistrict [ctor] .

The firefighters travel with the rule:

crl [go] : [NH] Av1 St1 => [NH''] Av2 St2
 if extinguished?(NH, Av1, St1) /\ B NH' := NH /\ fire?(B) /\
    Av2 := getAvenue(B) /\ St2 := getStreet(B) /\
    N := distance(Av1, St1, Av2, St2) /\ NH'' := update(NH, N) .
where the conditions check that the building at the current location is not on fire with extinguished?, search for a building B on fire with the matching condition and the condition fire?, extract the avenue Av2 and the street St2 of this building with getAvenue and getStreet, compute the distance between the current and the new location with distance, and finally update the neighborhood (the time to collapse of the buildings is reduced by an amount equal to the distance previously computed) with update. We are interested in the equational condition fire?(B), because the equation f2 specifying fire?, a function that checks whether the status of a building is fire, is buggy and returns false when it should return true:

op fire? : Building -> Bool .
eq [f1] : fire?(< av : Av, st : St, tc : TC, sts : ok >) = false .
eq [f2] : fire?(< av : Av, st : St, tc : TC, sts : fire >) = false .
If we use the command search to find the final states where the fire in both buildings has been extinguished, in an example with two buildings on fire with time to collapse 4, located at (1, 2) and (2, 1), and the firefighters initially at (0, 0), we realize that no states fulfilling the condition are found:

Maude> (search nh =>! F:FFDistrict s.t. allOk?(F:FFDistrict) .)
search in TEST : nh =>! F:FFDistrict .
No solution.
Fig. 1. Initial command for the firefighters example
Fig. 2. Debugging tree for the firefighters example
where nh is a constant with value [< av : 1, st : 2, tc : 4, sts : fire > < av : 2, st : 1, tc : 4, sts : fire >] 0 0. We can debug this behavior with the command shown in Figure 1, which builds the default one-step tree. We show in Figure 2 how the corresponding tree, with only two levels expanded, is depicted by the graphical user interface. Although this graphical interface has both the divide and query and the top-down navigation strategies implemented, advanced users can find the free navigation strategy more useful, since it can greatly reduce the number of questions asked to the user. To achieve this, the user must find either correct nodes (which are removed from the tree) rooting big subtrees or wrong nodes (which are selected as the current root) rooting small subtrees. In the tree depicted in Figure 2 we notice that Maude cannot apply the rule go (the rule is shown once the node is selected) to a configuration with some buildings with status fire, which is indicated by the interface by showing that the term is rewritten to the empty set, ∅. Since this is incorrect, we can use the button Wrong to indicate it, and the tree in Figure 3 is shown.

Fig. 3. Debugging tree for the firefighters example after one answer

Fig. 4. Bug found in the firefighters example

Note that this node is associated with a concrete rule, so the debugger allows the user to use the command Trust, which would remove all the nodes associated with this same rule from the debugging tree; trusting the most frequent statements is another strategy that can easily be followed when using the graphical interface. However, since the question is incorrect, we cannot use this command to answer it. In this case, the first three nodes are correct (the notation _:s_ appearing in the second and third nodes indicates that the term has this sort as its least sort), but we can select any of the other nodes as erroneous; the interested reader will find that both of them lead to the same error. In our case, we select the fourth node as erroneous and the debugger shows the error, as shown in Figure 4.
4 Conclusions

We have implemented a declarative debugger of wrong and missing answers for Maude, which is able to detect wrong and missing statements and wrong search conditions. Since
one of the main drawbacks of declarative debugging is the size of the debugging trees and the complexity of the questions, we provide several mechanisms to shorten the tree and simplify the questions, including a graphical user interface. As future work, we plan to implement a test case generator which, integrated with the debugger, will allow testing Maude specifications and debugging the erroneous cases.
References

1. Caballero, R., Martí-Oliet, N., Riesco, A., Verdejo, A.: A declarative debugger for Maude functional modules. In: Roşu, G. (ed.) Proceedings of the Seventh International Workshop on Rewriting Logic and its Applications, WRLA 2008. ENTCS, vol. 238(3), pp. 63–81. Elsevier, Amsterdam (2009)
2. Clavel, M., Durán, F., Eker, S., Lincoln, P., Martí-Oliet, N., Meseguer, J., Talcott, C.: All About Maude - A High-Performance Logical Framework. LNCS, vol. 4350. Springer, Heidelberg (2007)
3. Riesco, A., Verdejo, A., Caballero, R., Martí-Oliet, N.: A declarative debugger for Maude specifications - User guide. Technical Report SIC-7-09, Dpto. Sistemas Informáticos y Computación, Universidad Complutense de Madrid (2009), http://maude.sip.ucm.es/debugging
4. Riesco, A., Verdejo, A., Caballero, R., Martí-Oliet, N.: Declarative debugging of rewriting logic specifications. In: Corradini, A., Montanari, U. (eds.) WADT 2008. LNCS, vol. 5486, pp. 308–325. Springer, Heidelberg (2009)
5. Riesco, A., Verdejo, A., Martí-Oliet, N.: Declarative debugging of missing answers for Maude specifications. In: Lynch, C. (ed.) Proceedings of the 21st International Conference on Rewriting Techniques and Applications (RTA 2010). Leibniz International Proceedings in Informatics, vol. 6, pp. 277–294. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik (2010)
6. Riesco, A., Verdejo, A., Martí-Oliet, N.: Enhancing the debugging of Maude specifications. In: Ölveczky, P.C. (ed.) WRLA 2010. LNCS, vol. 6381, pp. 226–242. Springer, Heidelberg (2010)
7. Riesco, A., Verdejo, A., Martí-Oliet, N., Caballero, R.: A declarative debugger for Maude. In: Meseguer, J., Roşu, G. (eds.) AMAST 2008. LNCS, vol. 5140, pp. 116–121. Springer, Heidelberg (2008)
8. Shapiro, E.Y.: Algorithmic Program Debugging. ACM Distinguished Dissertation. MIT Press, Cambridge (1983)
9. Silva, J.: A comparative study of algorithmic debugging strategies. In: Puebla, G. (ed.) LOPSTR 2006. LNCS, vol. 4407, pp. 143–159. Springer, Heidelberg (2007)
An Assume Guarantee Approach for Checking Quantified Array Assertions

Mohamed Nassim Seghir

University of Freiburg
Abstract. We present an assume-guarantee method for the verification of quantified array assertions based on a program transformation. A quantified array assertion expresses a property over an array segment, such as "all elements of an array are sorted". Given a program Pn annotated with an assertion ϕn, our method rewrites Pn to either Pn−1; C or C; Pn−1, where C is a code fragment. The validity of the assertion is then proven by induction: assuming that ϕn−1 holds for Pn−1 and proving that ϕn holds for Pn. The program transformation allows reducing the complexity of the code as well as of the assertion to be verified. Experimental results on both textbook and real-life examples taken from system code show performance improvements compared to our previous approach for checking quantified assertions. Moreover, this new technique enables us to verify challenging programs which are not handled by our previous method nor by many existing tools.
1 Introduction
Software model checking is a promising approach for the automatic verification of programs. Several tools based on this technique (such as SLAM [1], BLAST [8], MAGIC [3] and TERMINATOR [5]) have been successfully applied to real-world programs. Most of these tools share two common points: they combine the predicate abstraction technique [6] with the so-called CEGAR¹ paradigm [4], and they are based on abstract domains of quantifier-free predicates. While this technology is suitable for the verification of control-oriented properties, it is limited when it comes to data-oriented properties. An example of such properties is a quantified assertion over arrays, such as "an array is sorted". Expressing this property requires quantification over array elements. Recently, various techniques have been developed to either generalize or extend existing abstract domains (including the predicate abstraction domain) to domains that can express quantified properties [2, 7, 10, 11, 14]. Alternatively, we have shown in previous work [13] that careful adaptation of the existing technology is sufficient to verify many interesting properties over arrays.
The author was supported in part by the German Federal Ministry of Education and Research (BMBF) in the framework of the VerisoftXT project under grant 01 IS 07 008.
¹ CEGAR stands for counterexample-guided abstraction refinement.
In this paper, we address the modularity aspect of verifying quantified array assertions. Our aim is to simplify the verification task; the simplification affects both the code and the assertion to be verified. Our idea consists in exhibiting the recursion which is implicit in array-manipulating programs via the application of a source-to-source transformation. Given a program P_n annotated with a quantified array assertion (postcondition) ϕ_n and having ϕ' as precondition, our method applies a transformation that rewrites P_n either to P_{n-1}; C (pre-recursive form) or to C; P_{n-1} (post-recursive form), where C is a code fragment. This transformation enables us to prove the Hoare triple {ϕ'} P_n {ϕ_n} by induction: we prove {ϕ'} P_0 {ϕ_0} (generally trivial), then assume {ϕ'} P_{n-1} {ϕ_{n-1}} and prove {ϕ'} P_n {ϕ_n}. The gain obtained is:

– for the pre-recursive case, the proof of {ϕ'} P_{n-1}; C {ϕ_n} is reduced to {ϕ_{n-1}} C {ϕ_n}, where ϕ_{n-1} results from the induction hypothesis;
– for the post-recursive case, the proof of {ϕ'} C; P_{n-1} {ϕ_n} is reduced to {ϕ'} P_n; assume(ϕ_{n-1}) {ϕ_n}, or simply {ϕ'} P_n {ϕ_{n-1} ⇒ ϕ_n}.

Hence, our technique naturally inherits the assume-guarantee reasoning which is inherent to induction: the postcondition of the induction hypothesis substitutes for a piece of code (P_{n-1}) in the pre-recursive case, and weakens the postcondition in the post-recursive case. In both cases, we end up with a verification task that is considerably simpler than the original one. We have implemented our technique in the ACSAR software model checker [12]. Using our implementation, we successfully verified quantified array assertions for textbook examples as well as real-world examples taken from the Linux operating system kernel and the Xen hypervisor. We compared this technique with our previous approach for checking quantified array assertions [13]. The results show a performance improvement for most of the benchmarks (especially selection sort). Moreover, we are now able to verify challenging programs which are out of the scope of our previous method, such as bubble sort and insertion sort. To the best of our knowledge, apart from the method presented in [14], there is no tool in the literature able to verify the functional property of all three of these sorting examples automatically.
2 Examples
We illustrate our method by considering two sorting algorithms (insertion sort and selection sort), which represent challenging examples for automatic verification tools.

2.1 Pre-recursive Case (insertion sort)
Let us consider the insertion sort example illustrated in Figure 1(a). This program sorts the elements of array a in the range [0..n[. We want to verify the assertion specified at line 18, which states that the elements of array a are sorted in ascending order. Let us call L the outer while loop of example (a) in Figure 1 together
with the assignment at line 4 which just precedes the loop. We also call ϕ the assertion at line 18. We write L(1, n), where the first parameter represents the initial value of the loop iterator i and the second parameter is its upper bound (here, upper bound means strictly greater). We also write ϕ(n) for the form of ϕ parameterized by n. In a Floyd-Hoare style [9], the verification task is expressed as the Hoare triple

{true} L(1, n) {ϕ(n)}    (1)

In the above Hoare triple, the initial value 1 of the loop iterator is fixed, but its upper bound n can take any value. To prove (1) via structural induction on the iterator range, where the well-founded order is range inclusion, it suffices to use induction on n. We proceed as follows:

– prove that {true} L(1, 1) {ϕ(1)} holds;
– assume {true} L(1, n-1) {ϕ(n-1)} and prove {true} L(1, n) {ϕ(n)}.

By unrolling the outer while loop in Figure 1(a) backward, we obtain the code in Figure 1(b). One can observe that the code fragment from line 4 to 16 in Figure 1(b) represents L(1, n-1). If we call C the remaining code, i.e., lines 18 to 25 in Figure 1(b), then the Hoare triple (1) is rewritten to
 1  void insertion_sort(int a[], int n)
 2  {
 3    int i, j, index;
 4    i = 1;
 5    while (i < n) {
 6      index = a[i];
 7      j = i;
 8
 9      while ((j > 0) && (a[j-1] > index)) {
10        a[j] = a[j-1];
11        j = j - 1;
12      }
13
14      a[j] = index;
15      i++;
16    }
17
18    assert(∀ x y. (0 ≤ x < n ∧ 0 ≤ y < n ∧ x < y) ⇒ a[x] ≤ a[y]);
19  }

(a)
 1  void insertion_sort(int a[], int n)
 2  {
 3    int i, j, index;
 4    i = 1;
 5    while (i < n-1) {
 6      index = a[i];
 7      j = i;
 8
 9      while ((j > 0) && (a[j-1] > index)) {
10        a[j] = a[j-1];
11        j = j - 1;
12      }
13
14      a[j] = index;
15      i++;
16    }
17
18    index = a[n-1];
19    j = n-1;
20    while ((j > 0) && (a[j-1] > index)) {
21      a[j] = a[j-1];
22      j = j - 1;
23    }
24
25    a[j] = index;
26
27    assert(∀ x y. (0 ≤ x < n ∧ 0 ≤ y < n ∧ x < y) ⇒ a[x] ≤ a[y]);
28  }

(b)
Fig. 1. Insertion sort program (a) and the corresponding pre-recursive form (b)
{true} L(1, n-1); C {ϕ(n)}

According to Hoare's rule of composition we have

   {true} L(1, n-1) {ϕ(n-1)}        {ϕ(n-1)} C {ϕ(n)}
  ----------------------------------------------------- (comp)
              {true} L(1, n-1); C {ϕ(n)}

The first (left) premise of the rule represents the induction hypothesis and is thus assumed to be valid. It then suffices to prove the second premise in order to conclude the validity of (1). Hence, the verification problem of Figure 1(a) is reduced to the one illustrated in Figure 2. The assumption at line 1 in Figure 2 represents the postcondition of the induction hypothesis. One can clearly see that the final code is much simpler than the original one: we started with a loop having two levels of nesting and end up with code containing a loop that has just one level of nesting, which means fewer loop invariants to compute.
 1  assume(∀ x y. 0 ≤ x,y < n-1 ∧ x < y ⇒ a[x] ≤ a[y]);
 2
 3  index = a[n-1];
 4  j = n-1;
 5
 6  while ((j > 0) && (a[j-1] > index)) {
 7    a[j] = a[j-1];
 8    j = j - 1;
 9  }
10
11  a[j] = index;
12  assert(∀ x y. 0 ≤ x,y < n ∧ x < y ⇒ a[x] ≤ a[y]);
Fig. 2. Result obtained by replacing the code fragment of the induction hypothesis with the corresponding postcondition (used here as an assumption)
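For readers who prefer an executable rendering, the reduced task of Figure 2 can be written as the following C harness. This sketch is ours and not part of the tool: assume() is a hypothetical stand-in for the corresponding primitive of a model checker, and the quantified formulas are checked pairwise, which is equivalent for sortedness.

#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for a verifier's assume primitive:
   silently discard executions violating the hypothesis. */
static void assume(int cond) { if (!cond) exit(0); }

/* Reduced task of Figure 2: insert a[n-1] into the already
   sorted prefix a[0..n-2], then check that a[0..n-1] is sorted. */
void insert_last(int a[], int n)
{
    if (n <= 0) return;

    /* induction hypothesis phi(n-1): a[0..n-2] is sorted */
    for (int x = 0; x + 1 < n - 1; x++)
        assume(a[x] <= a[x + 1]);

    int index = a[n - 1];
    int j = n - 1;
    while (j > 0 && a[j - 1] > index) {
        a[j] = a[j - 1];
        j = j - 1;
    }
    a[j] = index;

    /* goal phi(n): a[0..n-1] is sorted */
    for (int x = 0; x + 1 < n; x++)
        assert(a[x] <= a[x + 1]);
}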
2.2 Post-recursive Case (selection sort)
Now, we consider the selection sort example illustrated in Figure 3(a). As before, we write L(k, n) to denote the outer loop together with the assignment at line 6 in Figure 3(a). Unlike in the previous example, unrolling L(k, n) backward does not help here: the remaining (first) n-1 iterations do not represent L(k, n-1). In fact, the iterator j of the inner loop in L(k, n) has n as upper bound in the first n-1 iterations of L(k, n), whereas j has n-1 as upper bound in L(k, n-1). However, by unrolling L(k, n) forward, we obtain the code in Figure 3(b). The code portion from line 20 to 35 represents L(k+1, n). In this case, the recursion occurs at the end, i.e., L(k, n) = C; L(k+1, n), where C is the code fragment from line 5 to 18 in Figure 3(b).
 1  void selection_sort(int a[], int n)
 2  {
 3    int i, j, k, s, t;
 4
 5    k = 0;
 6    i = k;
 7    while (i < n) {
 8      s = i;
 9      for (j = i+1; j < n; ++j) {
10        if (a[j] < a[s]) {
11          s = j;
12        }
13      }
14
15      t = a[i];
16      a[i] = a[s];
17      a[s] = t;
18      i++;
19    }
20
21    assert(∀ x y. (k ≤ x < n ∧ k ≤ y < n ∧ x < y) ⇒ a[x] ≤ a[y]);
22  }

(a)
 1  void selection_sort(int a[], int n)
 2  {
 3    int i, j, k, s, t;
 4    k = 0;
 5    i = k;
 6    s = k;
 7
 8    for (j = 1; j < n; ++j) {
 9      if (a[j] < a[s]) {
10        s = j;
11      }
12    }
13
14    t = a[k];
15    a[k] = a[s];
16    a[s] = t;
17
18
19
20    i = k + 1;
21
22    while (i < n) {
23      s = i;
24
25      for (j = i+1; j < n; ++j) {
26        if (a[j] < a[s]) {
27          s = j;
28        }
29      }
30
31      t = a[i];
32      a[i] = a[s];
33      a[s] = t;
34      i++;
35    }
36
37    assert(∀ x y. (k ≤ x < n ∧ k ≤ y < n ∧ x < y) ⇒ a[x] ≤ a[y]);
38  }

(b)
Fig. 3. Selection sort (a) and the corresponding post-recursive form (b)
For both L(k, n) and L(k+1, n) the iterator upper bound n is fixed, but the iterator's initial value varies (k and k+1, respectively). Thus, this time we apply induction on the iterator's initial value: we prove the base case {true} L(n, n) {ϕ(n)}, then we assume that {true} L(k+1, n) {ϕ(k+1)} holds and prove {true} L(k, n) {ϕ(k)}. Replacing L(k, n) with C; L(k+1, n) in the last formula, we obtain

{true} C; L(k+1, n) {ϕ(k)}    (2)

Introducing the assumption ϕ(k+1), which results from the induction hypothesis, into (2), we obtain

{true} C; L(k+1, n); assume(ϕ(k+1)) {ϕ(k)}
Finally, by moving the assumption into the postcondition, we obtain

{true} C; L(k+1, n) {ϕ(k+1) ⇒ ϕ(k)}

The last formula is simply written {true} L(k, n) {ϕ(k+1) ⇒ ϕ(k)}. In this case, the simplification does not affect the code but weakens the target assertion. On which criterion do we base the choice between the pre-recursive and the post-recursive form? The answer to this question, as well as the transformation algorithm itself, is described in an extended version of this paper, which is available from the author.
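The weakened goal can likewise be read as an executable harness (again our own sketch, not output of the tool; sortedness of a segment is checked pairwise, which is equivalent to the quantified formulation of ϕ):

#include <assert.h>

/* 1 iff a[lo..n-1] is sorted in ascending order,
   i.e., the pairwise reading of phi(lo). */
static int sorted_from(const int a[], int lo, int n)
{
    for (int x = lo; x + 1 < n; x++)
        if (a[x] > a[x + 1]) return 0;
    return 1;
}

/* Run L(k, n), then check the weakened postcondition
   phi(k+1) ==> phi(k) of the post-recursive reduction. */
void selection_goal(int a[], int n, int k)
{
    for (int i = k; i < n; i++) {      /* L(k, n) */
        int s = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[s]) s = j;
        int t = a[i]; a[i] = a[s]; a[s] = t;
    }
    assert(!sorted_from(a, k + 1, n) || sorted_from(a, k, n));
}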
3 Experimental Results
The technique presented in this paper allowed us to enhance the performance of our software model checker ACSAR [12]. Moreover, thanks to modular reasoning, we are now able to verify challenging programs, such as bubble sort and insertion sort, which were out of the scope of our previous approach. In this section, we present a comparative study between our previous approach for checking quantified assertions [13] and the new approach based on program transformation.

Implementation. We have implemented our transformation technique in ACSAR. As input, ACSAR takes the source code of a C program annotated with the assertion to be verified. The code is transformed and can then be either transferred internally (as an AST) to the verification engine of ACSAR or pretty-printed as C code again. In the latter case, ACSAR can serve as a backend for other C verification tools. The transformation procedure takes only a few milliseconds, which is insignificant with regard to the whole verification time.

Experiments. We performed our tests on an X41 ThinkPad laptop with 1 GB of RAM and a 1.6 GHz CPU, running Linux. ACSAR uses the Yices² and Simplify³ theorem provers for computing the abstraction and analyzing counterexamples. The communication with Yices is performed through its API Lite, while the communication with Simplify goes through pipes. As already mentioned, the verification engine of ACSAR receives the C program together with the annotated assertion, both resulting from the program transformation. The output is either an invariant that implies the correctness of the assertion or a counterexample trace. The results of our experiments are shown in Table 1. The column "Property" contains an informal description of the universally quantified assertion that we verify. Column "Transform" indicates the type of transformation which is applicable: "PR" stands for pre-recursive and "PS" for post-recursive. If both transformations are applicable, we choose the one that delivers the best results and mark it with the • superscript, as illustrated in the table. As Table 1 shows, we found in our experiments that, apart from the program cyber_init, we always obtain better performance when the program is transformed
² http://yices.csl.sri.com
³ http://www.hpl.hp.com/downloads/crl/jtk/index.html
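To give a feel for the source-to-source rewrite on a trivial example (our illustration, not ACSAR's actual output), peeling the last iteration of an initialization loop turns P_n into its pre-recursive form P_{n-1}; C:

/* Before: P_n initializes a[0..n-1]. */
void init(int a[], int n)
{
    for (int i = 0; i < n; i++)
        a[i] = 0;
}

/* After: pre-recursive form P_{n-1}; C. */
void init_pre(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)   /* P_{n-1} */
        a[i] = 0;
    if (n > 0)                        /* C: the peeled last iteration */
        a[n - 1] = 0;
}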
Table 1. Experimental results for academic and industrial examples. The upper part of the table refers to examples taken from literature. The part in the middle refers to examples taken from system code. Examples marked with superscript * are from the Linux kernel and driver code. Examples marked with ** are taken from the Xen hypervisor code. The lower part of the table refers to known sorting algorithms.

Program              Property                                                    Transform   Iter. (S/T)   Pred. (S/T)   Time (s) (S/T)
string_copy          0-terminal string s1 is copied to s2                        PS/PR•      2 / 2         5 / 4         0.39 / 0.41
scan                 array entries before the actual entry are not null         PS/PR•      3 / 2         3 / 2         0.27 / 0.14
array_init           array entries are initialized                               PS/PR•      2 / 1         6 / 2         0.50 / 0.13
loop1                each array entry is initialized with its index              PS/PR•      3 / 1         5 / 2         0.51 / 0.21
copy1                array a is copied to array b                                PS/PR•      2 / 1         6 / 2         0.70 / 0.23
num_index            for every array entry i of array a we have a[i] = 2*i + 3   PS/PR•      2 / 1         6 / 2         0.68 / 0.21
dvb_net_feed_stop*   entries different from 0 in their pre-state are set to 0    PS/PR•      3 / 2         8 / 3         3.41 / 0.30
cyber_init*          for every i, if i modulo 4 is equal to 0, 1, 2 or 3, then   PS•/PR      8 / 8         13 / 12       9.47 / 5.60
                     a[i] is initialized to v0, v1, v2 or v3, respectively
perfc_copy_info**    for each entry i of array a, if a[i] has some property,     PS/PR•      4 / 5         16 / 8        10.57 / 1.50
                     then b[i] and c[i] should be equal
do_enoprof_op**      if variable op has value v1 and variable s has value v2,    PS/PR•      3 / 2         14 / 6        8.9 / 0.54
                     then array a is copied to array b
selection_sort       array is sorted                                             PS          3 / 5         37 / 35       409.87 / 173.50
insertion_sort       array is sorted                                             PR          - / 5         - / 39        - / 145.60
bubble_sort          array is sorted                                             PS          - / 8         - / 42        - / 188.90
to pre-recursive form. Column "Iter." gives the number of refinement steps performed by the CEGAR process until a safe invariant is computed. Finally, column "Pred." gives the number of inferred predicates. Our tool is based on lazy abstraction [8]; we therefore provide the average number of predicates per location instead of the total number of predicates. This number gives an idea of the memory consumption, as predicates encode program states; hence, the smaller the number of predicates, the better the performance. Some columns are themselves divided into two sub-columns: "S", which stands for the simple approach (our previous one [13]), and "T", which represents the modular approach based on code transformation. Our benchmarks fall into three categories. The first category (upper part of the table) contains academic examples taken from the literature. The second class of examples covers typical uses of arrays in real-world system code; the programs are code fragments taken from the Linux kernel and driver code as well as from the Xen hypervisor⁴ code. All examples belonging to the first and second class are transformable to both forms, pre- and post-recursive.
⁴ A hypervisor is software that permits hardware virtualization; it allows multiple operating systems to run in parallel on one computer. The Xen hypervisor is available at http://www.xen.org/
Fig. 4. Comparison between the method without transformation and the transformation-based method in terms of verification time. Symbol "S" refers to the simple approach and "T" to the transformation-based approach.
Comparing the transformation-based approach with our previous technique, we notice, with the exception of string_copy, a performance improvement for most examples, as illustrated by the diagrams in Figure 4 and Figure 5.

Sorting algorithms. The last category of benchmarks (sorting algorithms) contains the most challenging examples in terms of complexity. Given an array a, we verified that upon termination of these programs the array a is sorted. Both selection_sort and bubble_sort are transformed to post-recursive form, while insertion_sort is transformed to pre-recursive form. The selection_sort example is handled by both approaches (simple and modular); however, the difference in execution time is considerable: the verification time drops from almost 7 minutes (409 seconds) to less than 3 minutes (173 seconds). For bubble_sort and insertion_sort, the simple technique seems to diverge, as it is unable to terminate within a fixed time bound. The modular approach, however, handles both examples in a fairly acceptable time given the complexity of the property. To the best of our knowledge, apart from the method presented in [14], no other technique in the literature can handle all three of these sorting examples automatically.
Fig. 5. Comparison between the method without transformation and the transformation-based method in terms of the average number of predicates per location. Symbol "S" refers to the simple approach and "T" to the transformation-based approach.
4 Conclusion
We presented an assume-guarantee approach for the verification of quantified array assertions. Using this approach, we were able to enhance the performance of our model checker. Moreover, thanks to modular reasoning, we are now able to verify challenging programs which were out of the scope of our previous method. An interesting aspect of our technique is its flexibility: since it is based on a source-to-source transformation, it can be seamlessly integrated into other software verification tools as a black box. We believe that many other interesting properties can be better handled by applying similar code-manipulation techniques rather than by directly checking the original code.
References

1. Ball, T., Rajamani, S.K.: The SLAM project: debugging system software via static analysis. In: POPL, pp. 1–3 (2002)
2. Beyer, D., Henzinger, T.A., Majumdar, R., Rybalchenko, A.: Invariant synthesis for combined theories. In: Cook, B., Podelski, A. (eds.) VMCAI 2007. LNCS, vol. 4349, pp. 378–394. Springer, Heidelberg (2007)
3. Chaki, S., Clarke, E.M., Groce, A., Jha, S., Veith, H.: Modular verification of software components in C. In: ICSE, pp. 385–395 (2003)
4. Clarke, E.M., Grumberg, O., Jha, S., Lu, Y., Veith, H.: Counterexample-guided abstraction refinement. In: Emerson, E.A., Sistla, A.P. (eds.) CAV 2000. LNCS, vol. 1855, pp. 154–169. Springer, Heidelberg (2000)
5. Cook, B., Podelski, A., Rybalchenko, A.: Terminator: Beyond safety. In: Ball, T., Jones, R.B. (eds.) CAV 2006. LNCS, vol. 4144, pp. 415–418. Springer, Heidelberg (2006)
6. Graf, S., Saïdi, H.: Construction of abstract state graphs with PVS. In: Grumberg, O. (ed.) CAV 1997. LNCS, vol. 1254, pp. 72–83. Springer, Heidelberg (1997)
7. Gulwani, S., McCloskey, B., Tiwari, A.: Lifting abstract interpreters to quantified logical domains. In: POPL, pp. 235–246 (2008)
8. Henzinger, T.A., Jhala, R., Majumdar, R., Sutre, G.: Lazy abstraction. In: POPL, pp. 58–70 (2002)
9. Hoare, C.A.R.: An axiomatic basis for computer programming. Commun. ACM 12(10), 576–580 (1969)
10. Lahiri, S.K., Bryant, R.E.: Constructing quantified invariants via predicate abstraction. In: Steffen, B., Levi, G. (eds.) VMCAI 2004. LNCS, vol. 2937, pp. 267–281. Springer, Heidelberg (2004)
11. Podelski, A., Wies, T.: Boolean heaps. In: Hankin, C., Siveroni, I. (eds.) SAS 2005. LNCS, vol. 3672, pp. 268–283. Springer, Heidelberg (2005)
12. Seghir, M.N., Podelski, A.: ACSAR: Software model checking with transfinite refinement. In: Bošnački, D., Edelkamp, S. (eds.) SPIN 2007. LNCS, vol. 4595, pp. 274–278. Springer, Heidelberg (2007)
13. Seghir, M.N., Podelski, A., Wies, T.: Abstraction refinement for quantified array assertions. In: Palsberg, J., Su, Z. (eds.) SAS 2009. LNCS, vol. 5673, pp. 3–18. Springer, Heidelberg (2009)
14. Srivastava, S., Gulwani, S.: Program verification using templates over predicate abstraction. In: PLDI, pp. 223–234 (2009)
Author Index
Alarc´ on, Beatriz
Maeder, Christian 60 Mart´ı-Oliet, Narciso 216 McCusker, Guy 111 M¨ oller, Bernhard 76 Mossakowski, Till 60
201
Bolduc, Claude 28 Brodo, Linda 44 Cazorla, Diego Codescu, Mihai
209 60
del Vado V´ırseda, Rafael Ding, Jie 1 Ellison, Chucky
Navarro-Marset, Rafael 180
142
Gl¨ uck, Roland 76 Guti´errez, Ra´ ul 201
Loulergue, Fr´ed´eric 163 Lucas, Salvador 201
128 209
Riesco, Adri´ an 60, 216 Ro¸su, Grigore 142 Ruiz, M. Carmen 209
Hashimoto, Hideki 163 Hillston, Jane 1 Hinze, Ralf 92 Hu, Zhenjiang 163 Komendantskaya, Ekaterina Ktari, B´echir 28
Panangaden, Prakash Pardo, J. Jose 209 P´erez Le´ andrez, Diego Power, John 111
201
Sadrzadeh, Mehrnoosh 128 Schulte, Wolfram 142 Seghir, Mohamed Nassim 226 Sintzoff, Michel 76 111 Takeichi, Masato 163 Tesson, Julien 163 Verdejo, Alberto
216