Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany
5779
Saddek Bensalem Doron A. Peled (Eds.)
Runtime Verification 9th International Workshop, RV 2009 Grenoble, France, June 26-28, 2009 Selected Papers
Volume Editors

Saddek Bensalem
Verimag, Centre Equation, 2 avenue de Vignate, 38610 Gières, France
E-mail: [email protected]

Doron A. Peled
Department of Computer Science, Bar Ilan University, Ramat Gan, 52900, Israel
E-mail: [email protected]
Library of Congress Control Number: 200935005
CR Subject Classification (1998): D.2, D.3, F.3, K.6, C.4, D.2.8, D.4.8
LNCS Sublibrary: SL 2 – Programming and Software Engineering
ISSN 0302-9743
ISBN-10 3-642-04693-2 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-04693-3 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. springer.com © Springer-Verlag Berlin Heidelberg 2009 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 12756500 06/3180 543210
Preface
The RV series of workshops brings together researchers from academia and industry who are interested in runtime verification. The goal of the RV workshops is to study the applicability of lightweight formal verification during the execution of programs. This approach complements the offline use of formal methods, which often requires large resources. Runtime verification methods and tools include the instrumentation of code with pieces of software that can help to test and monitor it online and to detect, and sometimes prevent, potential faults. RV 2009 was held during June 26–28 in Grenoble, adjacent to CAV 2009. The program included 11 accepted papers. Two invited talks were given: by Amir Pnueli, on "Compositional Approach to Monitoring Linear Temporal Logic Properties," and by Sriram Rajamani, on "Verification, Testing and Statistics." The program also included three tutorials. We would like to thank the members of the Program Committee and the additional referees for their reviews and their participation in the discussions.

July 2009
Saddek Bensalem Doron Peled
Table of Contents
Rule Systems for Runtime Verification: A Short Tutorial . . . . . . . . . . . . . .   1
Howard Barringer, Klaus Havelund, David Rydeheard, and Alex Groce

Verification, Testing and Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  25
Sriram K. Rajamani

Type-Separated Bytecode – Its Construction and Evaluation . . . . . . . . . . .  26
Philipp Adler and Wolfram Amme

Runtime Verification of Safety-Progress Properties . . . . . . . . . . . . . . . . . . .  40
Yliès Falcone, Jean-Claude Fernandez, and Laurent Mounier

Monitor Circuits for LTL with Bounded and Unbounded Future . . . . . . . .  60
Bernd Finkbeiner and Lars Kuhtz

State Joining and Splitting for the Symbolic Execution of Binaries . . . . . .  76
Trevor Hansen, Peter Schachte, and Harald Søndergaard

The LIME Interface Specification Language and Runtime Monitoring
Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  93
Kari Kähkönen, Jani Lampinen, Keijo Heljanko, and Ilkka Niemelä

A Concurrency Testing Tool and Its Plug-Ins for Dynamic Analysis and
Runtime Healing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Bohuslav Křena, Zdeněk Letko, Yarden Nir-Buchbinder,
Rachel Tzoref-Brill, Shmuel Ur, and Tomáš Vojnar

Bridging the Gap between Algebraic Specification and Object-Oriented
Generic Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Isabel Nunes, Antónia Lopes, and Vasco T. Vasconcelos

Runtime Verification of C Memory Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Grigore Roșu, Wolfram Schulte, and Traian Florin Șerbănuță

A Combined On-Line/Off-Line Framework for Black-Box Fault
Diagnosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Stavros Tripakis

Hardware Supported Flexible Monitoring: Early Results . . . . . . . . . . . . . . . 168
Antonia Zhai, Guojin He, and Mats P.E. Heimdahl

DMaC: Distributed Monitoring and Checking . . . . . . . . . . . . . . . . . . . . . . . . 184
Wenchao Zhou, Oleg Sokolsky, Boon Thau Loo, and Insup Lee

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Rule Systems for Runtime Verification: A Short Tutorial

Howard Barringer¹, Klaus Havelund², David Rydeheard¹, and Alex Groce³

¹ School of Computer Science, University of Manchester,
Oxford Road, Manchester, M13 9PL, UK
{Howard.Barringer,David.Rydeheard}@manchester.ac.uk
² Jet Propulsion Laboratory, California Institute of Technology,
Pasadena, CA 91109, USA
[email protected]
³ School of Electrical Engineering and Computer Science, Oregon State University,
Corvallis, USA
[email protected]
Abstract. In this tutorial, we introduce two rule-based systems for online and offline trace analysis, RuleR and LogScope. RuleR is a conditional rule-based system, which has a simple and easily implemented algorithm for effective runtime verification, and into which one can compile a wide range of temporal logics and other specification formalisms used for runtime verification. Specifications can be parameterized with data, or even with specifications, allowing temporal logic combinators to be defined. We outline a number of simple syntactic extensions of core RuleR that can lead to more concise specifications while still enabling an easy and efficient implementation. RuleR is implemented in Java, and we will demonstrate its ease of use in monitoring Java programs. LogScope is a derivation of RuleR adding a simple, very user-friendly temporal logic. It was developed in Python, specifically to support testing of spacecraft flight software for NASA's 2011 Mars mission, MSL (Mars Science Laboratory). The system has been applied by test engineers to the analysis of log files generated by running the flight software. Detailed logging is already part of the system design approach, and hence this approach adds no instrumentation overhead. While post-mortem log analysis precludes the autonomous reaction to problems possible with traditional runtime verification, it provides a powerful tool for test automation. A new system is being developed that integrates features from both RuleR and LogScope.

Keywords: Runtime verification, rule systems, temporal logic, code instrumentation, log file analysis, Java, AspectJ, Python.
Part of the research described in this publication was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
S. Bensalem and D. Peled (Eds.): RV 2009, LNCS 5779, pp. 1–24, 2009.
© Springer-Verlag Berlin Heidelberg 2009
1 Introduction

This brief tutorial introduces the reader to two related rule-based systems for runtime monitoring via a number of simple examples. The first is RuleR: a system that started as a low-level rule system into which one can compile different temporal specification logics for efficient trace conformance checking, but which then assumed a life of its own as a specification logic. The second is LogScope: a derivation of a subset of RuleR that includes a simple, user-friendly, higher-level temporal pattern language, and illustrates the idea behind RuleR as a target of translation from a form of temporal logic. The presentation assumes some familiarity with the basic notions of runtime monitoring/verification.

Section 2 introduces the basic ideas underlying RuleR and its associated trace conformance checking algorithm. Section 3 starts the brief tour of RuleR, beginning with the RV world's equivalent of "Hello World" specifications, putting them to use to monitor Java programs using AspectJ for instrumentation, and then gives a glimpse of one of the more powerful monitor combinator features. Section 4 introduces the ideas underlying LogScope and its higher-level temporal pattern language, and in Section 5 we explore the log file analysis system through a case study. Section 6 concludes with a brief review of our approaches and comments on future work.
2 Underlying Principles of RuleR

2.1 A Little History

In the beginning, there was linear-time propositional temporal logic, PTL, and the stage for verification of behavioural properties of concurrent programs was set [17]. But then concerns were voiced over its lack of expressiveness for real system specification, in particular its inability to express classes of regular properties, such as "an event occurs at every even moment". And so PTL begat the family of extended temporal logics, including, for example, Wolper's ETL [21], the fixed point temporal calculus νTL [3], fixed point calculi extended with chop [14], etc. Even though model checking was gaining much ground, these richer, more expressive temporal logics raised significant challenges for such automated verification techniques. However, there was still much interest in using richer logics as a basis for formal specification. Techniques for "executing" temporal specifications were developed, for example, Moszkowski's Tempura, based on Interval Temporal Logic [15,16]. With a small change in view of logic, from the declarative to the imperative, the fixed point temporal logics gave rise to MetateM [4], in which one might program directly with (recursive) temporal rules of the form "declarative past implies imperative future". The interpretation mechanism for MetateM was basically as follows. Given the current state of computation, i.e. the execution history of assignments to variables, determine which rules' antecedent (declarative past formula) conditions hold true, and then use their associated consequents (imperative future formulas) to build the next computation state. A fundamental property of these logics, the separation property [12], enables one to separate the conjunction of future time consequent formulas into atomic facts that have to hold now and pure future time formulas representing obligations for the future. Thus one could build an execution
trace that conformed to the future time obligations, subject to the not insignificant issues of non-determinism and looping that are potentially present. The techniques underlying MetateM's evaluation had a significant influence on our development of runtime verification temporal logics.

The plethora of different trace languages being used for property specification in runtime verification, e.g. future-time linear temporal logics, past-time temporal logics, extended regular expressions, interval logics, etc., led us to the development of the general purpose, rule-based, temporal system Eagle [5], which presents a natural rule/equation based language for defining, and even programming, monitors for complex temporal behavioural patterns. Here are a few examples.

  max Always(Form F) = F ∧ Always(F)
  min Match(Form C, Form R) = (C · Match(C, R) · R · Match(C, R)) ∨ Empty()
  min HappensBefore(Form F, double u) =
      clock < u ∧ (F ∨ (¬F ∧ HappensBefore(F, u)))

Elsewhere, e.g. [8], we have argued that whilst Eagle is elegant and expressively rich, it carries a potentially high computation cost through (i) the potential non-determinism in specifications and (ii) the symbolic manipulation required by the evaluation algorithm. As an example of (i), the concatenation operator in the Match predicate definition above is non-deterministic and requires considerable care in use. The temporal predicate Match(call, return), for formulas call and return, specifies the behaviour that every call has a matching return. In order to achieve this expected temporal behaviour pattern, the formulas passed to Match should specify single state sequences. If that is not the case, the concatenation operator may choose an arbitrary cut point, and therefore skip unmatched Cs or Rs in order to give a positive result. Regarding point (ii), we should comment that this may simply reflect our inability to produce a good implementation.
Rather than trying to improve Eagle's implementation, we developed a lower-level rule-based system, RuleR.

2.2 RuleR Rule Systems and Evaluation
The core of a RuleR rule system is a collection of named rules. A rule is formed from a condition part (antecedent) and a body part (consequent). Unlike the rules in MetateM, the antecedent and consequent are restricted to be non-temporal. The rule's condition may be a conjunctive set of state expressions, whereas the body is a disjunctive set of conjunctive sets of state expressions. In the example below, Start is a rule which, if active, will either activate the rule Track with argument f when an openFile observation holds for some object f or, if no such observation occurs, will deactivate itself.

  Start: openFile(f:obj) -> Track(f)

The simplest form of state expression is an observation, e.g. openFile(f:obj), or a positive or negative occurrence of a rule, e.g. Track(f). We will show in Section 3 some
of the more complex expressions that can be used. At the basic level, rules in RuleR have no default persistence; an active rule is just a single-shot rule. A rule gets activated for the next evaluation step, then gets used and is automatically deactivated. Whilst this mechanism is good for encoding grammars and temporal logics, it may not appear so natural for user-level rule specification, where one might expect a rule's activation to persist until it has been used. We call this the state view, in that a system remains in a particular state until some transition can be taken to move the system into another state. One could also have adopted the view that rules, once active, should persist for ever, or at least until they are forcibly switched off. We chose the single-shot view purely because it is easy to translate other logics into this form (and that was an original goal for RuleR); the other notions of persistence can easily be encoded using single-shot rules.

Given a collection of named rules and some initial conditions, a trace of input observations can be checked for conformance against the rule system as outlined in Figure 1. For ease, we call a set of rule literals and observation literals a rule activation state, and hence a frontier is a set of rule activation states. The frontier represents a choice of possible states. Overall, all the traces allowed by the set of rules are explored, in a breadth-first fashion, against the given trace of sets of observations. In a single monitoring step, the algorithm computes a new frontier (a set of sets of observation obligations and rule activations) according to the given input observations and the current frontier of states. The initial frontier is defined by the specified initial conditions. The step computation is repeated until either the monitoring input is exhausted or a conflict between the constraints of the rule system and the input has been determined.
The breadth-first exploration of traces allowed by the rule system is undertaken in order to avoid backtracking when a conflict between input observations and rule system obligations occurs.

   1: form an initial frontier of rule activation states
   2: WHILE input observations exist DO
   3:   obtain the next set of observations
   4:   add the observations into each of the frontier's states
   5:   report failure if there is no consistent resultant state
   6:   FOREACH of the current and consistent resultant states,
   7:     use all active rules to form a successor set of activation states
   8:   create the next frontier of rule activation states by taking
        the union of all successor sets
   9: OD
  10: yield success iff the last frontier has an acceptable final state

  Fig. 1. The basic monitoring algorithm
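To make the control flow of Figure 1 concrete, the following is a minimal, self-contained Java sketch of the frontier computation. It is a drastic simplification (propositional observations only, no negated conditions, and no consistency check corresponding to line 5), and all names are ours, not those of RuleR's actual implementation.

```java
import java.util.*;

// Propositional sketch of the Figure 1 loop: a frontier is a set of states,
// each state a set of active rule names; rules are single-shot.
class FrontierMonitor {
    static class Rule {
        final Set<String> antecedent;        // observations that must all hold
        final List<Set<String>> consequents; // disjunctive choices of rules to activate
        Rule(Set<String> ante, List<Set<String>> cons) {
            antecedent = ante; consequents = cons;
        }
    }

    final Map<String, Rule> rules;
    Set<Set<String>> frontier;               // the current frontier of states

    FrontierMonitor(Map<String, Rule> rules, Set<String> initial) {
        this.rules = rules;
        this.frontier = new HashSet<>();
        this.frontier.add(new HashSet<>(initial));  // line 1 of Figure 1
    }

    // One monitoring step (lines 3-8): consume a set of observations and
    // build the next frontier; returns false on an empty (failed) frontier.
    boolean step(Set<String> observations) {
        Set<Set<String>> next = new HashSet<>();
        for (Set<String> state : frontier) {
            // each fired rule contributes a disjunction of consequent sets;
            // successor states are the cross product of those choices
            List<List<Set<String>>> choices = new ArrayList<>();
            for (String name : state) {
                Rule r = rules.get(name);
                if (r != null && observations.containsAll(r.antecedent))
                    choices.add(r.consequents);
            }
            expand(choices, 0, new HashSet<>(), next);
        }
        frontier = next;
        return !frontier.isEmpty();
    }

    private void expand(List<List<Set<String>>> choices, int i,
                        Set<String> acc, Set<Set<String>> out) {
        if (i == choices.size()) { out.add(new HashSet<>(acc)); return; }
        for (Set<String> choice : choices.get(i)) {
            Set<String> acc2 = new HashSet<>(acc);
            acc2.addAll(choice);
            expand(choices, i + 1, acc2, out);
        }
    }
}
```

With a persistent rule encoded in the single-shot style (a Continue rule that reactivates itself and the others), a step with an open observation activates a Track rule, exactly as in the walkthrough that follows.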
As a quick demonstration of the evaluation mechanism, consider the following four rules which, if run with the rules Start, Close and Continue initially active, will check that only opened files are closed.

  Start: openFile(f:obj) -> Track(f);
  Close: closeFile(f:obj), !Track(f) ->
           print("Error: closing unopened file" + f);
  // to ensure persistence of the rules
  Track(f:obj): !closeFile(f) -> Track(f);
  Continue: -> Start, Close, Continue;

Before we proceed with an example evaluation of these rules, we must first explain a little of the notation used above. The first rule has name Start, its antecedent is the observation openFile(f:obj), and its consequent is the rule Track(f). The argument f:obj to openFile defines f as a formal argument name of type obj. During rule evaluation, the variable f will be bound to any value of type obj, say fv, that causes openFile(fv) to match an input observation. The consequent Track(f), on the other hand, uses the bound value of the variable f. In other words, the variable f in openFile(f:obj) is free, whereas the variable f in Track(f) is a bound occurrence (bound by the matching of openFile). The third rule, Track, also has a formal argument f of type obj; the value of the variable f in the observation expression !closeFile(f) is bound to the defining occurrence given in the rule name. The exclamation mark ! denotes negation.

Let us assume the following trace of input observations, where f1, f2 and f3 denote different file object values.

  { openFile(f1) }, { openFile(f2), closeFile(f1) }, { closeFile(f3) }

Initially, the following frontier of active rule expressions is created.

  { { Start, Close, Continue } }

There is an input observation, hence by line 3 of the algorithm openFile(f1) is added across the states of the frontier, yielding the following.

  { { Start, Close, Continue, openFile(f1) } }

There is no logical inconsistency in this frontier, and the loop at line 6 of Figure 1 generates the consequences of all the active rules. The rule Start is active and, as openFile(f:obj) unifies with openFile(f1), creating a binding of the variable f to the file object value f1, the rule consequent, i.e. Track(f1), is added into the "next" set of states.
The rule Close is active; however, its condition doesn't hold as there is no closeFile observation, and hence it contributes nothing to the successor set of states. The rule Track is not active for any file object value and hence plays no role at this stage. The rule Continue is active and, as its antecedent condition is empty, will cause the rule's consequent to be added to the successor set of states. These rules thus generate the successor set of states.

  { { Track(f1), Start, Close, Continue } }

As there was only one state in the frontier initially, the above set of states becomes the next frontier. The process now repeats with the next set of input observations, i.e. { openFile(f2), closeFile(f1) }, and line 4 of the algorithm thus generates the frontier.
  { { Track(f1), Start, Close, Continue, openFile(f2), closeFile(f1) } }

The rule application part of the algorithm will now generate the next frontier, given below.

  { { Track(f2), Start, Close, Continue } }

Note that the rule expression Track(f1) is no longer present; there was a closeFile(f1) observation, and hence the condition !closeFile(f1) fails to hold, with the result that the consequent of the rule, which is the rule itself, is not included in the successor set of states. On the other hand, Track(f2) does appear, as a consequence of the Start rule. For the final set of input observations, i.e. { closeFile(f3) }, the system generates

  { { print("Error: closing unopened file" + f3), Track(f2), Start, Close, Continue } }

as the next frontier. The condition of the rule Close was satisfied this time and hence gave rise to the special predicate print being added to the frontier, which is then treated as an actual print statement by the RuleR interpreter before the next input is read — this is just one way of reporting failures. Notice that the system continues to track the opened file f2. If one assumes that this is the end of the observation trace, the system will check whether any rules that have been explicitly forbidden from terminal states are still active. In this case, one might well have indicated that no Track rule should be present, since its presence indicates a file still being open.
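For comparison, the discipline these four rules enforce (only opened files may be closed, and no file may remain open at the end of the trace) can also be written directly in Java. The class below is purely illustrative; its name and methods are ours, not part of RuleR.

```java
import java.util.*;

// A direct Java rendering of the open/close discipline: closing an unopened
// file is an error, and a file left open at the end mirrors a forbidden
// Track rule remaining active in a terminal state.
class OpenCloseChecker {
    private final Set<String> open = new HashSet<>();
    private final List<String> errors = new ArrayList<>();

    void openFile(String f) { open.add(f); }

    void closeFile(String f) {
        if (!open.remove(f))
            errors.add("Error: closing unopened file " + f);
    }

    // end of trace: report any file still open, then return all errors
    List<String> end() {
        for (String f : open) errors.add("Error: file still open " + f);
        return errors;
    }
}
```

Running it on the trace above (open f1; open f2, close f1; close f3) reports the bad close of f3 and the file f2 left open, matching the frontier evolution just described.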
3 RuleR By Example

RuleR is a Java-based program that implements the monitoring of systems using specifications presented as rule systems. In this section, we focus on practical issues by developing a few simple monitors in RuleR, showing how it can easily be used to monitor Java programs at runtime, using AspectJ to instrument the monitored program and invoke the RuleR monitor. RuleR is very much an experimental system, providing a basis for us to try out different specification language concepts and monitoring features. As such, it is not a stable system, nor a publicly released one (though that will change). From its very simple beginning, which implemented only the propositional rule system presented in [6], into which one might compile different temporal logic specifications, RuleR has become a strongly typed, almost stream-functional-like system for user-level programming and combining of trace monitors.

3.1 Example 1: A Simple Response Property

We begin our brief tour by writing a monitor to check the validity of responses given in answer to a simple arithmetic quiz. Our system is to observe and monitor a sequence of question and answer events. To keep matters simple, the question event has two integer
arguments, and the answer event has a single integer argument. The property we require of the sequence, in terms of temporal logic, is as follows.

  □ ∀x, y : int · question(x, y) ⇒
      ((¬∃u, v, z : int · answer(z) ∨ question(u, v)) U answer(x + y))

This (first order) temporal formula expresses the constraint that every question is followed by a correct answer event summing the two arguments of the question, and there should be no intervening question or answer event between the question and the correct answer. In terms of RuleR, we can specify the required behaviour via the following rule system.

  ruler SumCheck{
    observes question(int, int), answer(int);

    state Check{
      question(x:int, y:int) -> Response(x+y);
    }

    state Response(required:int){
      answer(z:int) {:
        z != required ->
          print("Wrong answer! Expected " + required + " but given " + z),
          Check;
        default -> Check;
      :}
      question(x:int, y:int) ->
        print("Unexpected question! Previous one unanswered"),
        Response(required);
    }

    initials Check;
    forbidden Response;
  }

The rule system is named SumCheck. It declares, using the observes keyword, that question is an observation event with two integer arguments and that answer takes a single integer argument. Two rules are then specified. Both are introduced with the keyword state, which indicates that the rules use state persistence, namely, once a rule is active, it will remain active until it is successfully used. RuleR has two other persistence attributes for rules, always persistence and the underlying single-shot persistence, introduced by the keywords always and step. We have found that state persistence is most common when users write RuleR specifications directly, which undoubtedly reflects a state machine oriented view of writing specifications, and so RuleR will actually assume state persistence by default, reducing the amount that needs to be written. For clarity, we will maintain its use in the specifications we present in this short overview.
The first rule above is named Check. Its body, enclosed by the parentheses { }, comprises just a single rule, which will activate the Response rule as soon as a question observation occurs. Note that when this happens, the state persistence means
that the Check rule is deactivated. On the other hand, if a question observation doesn't occur, then the state persistence of Check will keep the rule active for the next monitoring step.

The second rule defines the Response rule. It is supplied with the integer value that is expected to be given in the first occurrence of an answer following activation of the Response rule. The rule has two parts, the first of which is what we've termed a factored rule; its precondition answer(z:int) has been factored out of the subsequent two sub-rules. Following the precondition, there are two rules enclosed within the parentheses {: :}. Such parentheses indicate that the enclosed rules are to be evaluated in the given serial order. This is not the usual interpretation of multiple rules in RuleR, which are evaluated, effectively, in parallel. The condition part of the first sub-rule, z != required, checks for an invalid response. If the condition holds, the special print event is activated, together with the Check rule, in order to continue checking answers against questions. If the condition of the first sub-rule doesn't hold, then the next rule in the list is attempted. That particular rule has default as its antecedent, which means it can always be used; the sub-rule's consequent just restarts the checking process by activating the Check rule. The second part of the Response rule handles an occurrence of an undesired question event.

The last two elements of the rule system define which rules are initially active (the line with the keyword initials) and which rules are not allowed to be active at the end of monitoring a finite sequence of observations. We have expressly forbidden Response, since its activity represents an unanswered question.

Finally, we have to create a monitor based on the SumCheck rule system.

  monitor{
    uses SC: SumCheck;
    run SC .
  }

The text of both the rule system SumCheck and the monitor definition are then the specification input to the RuleR system. The monitor definition creates an instance of the SumCheck rule system (which is really a rule system schema), names it SC, and then runs it. Later we will expose one of the other ways of creating monitor expressions.

3.2 Hooking RuleR Up to Java via AspectJ

The current prototype of RuleR provides a simple Java interface to enable its direct use from other Java applications. The Java class RuleR.java provides a constructor for the creation of a RuleR monitor, a method for dispatching a single event to the monitor, and a method for dispatching an "end of input stream" event to the monitor.

  public RuleR(String fileName, boolean timing){ ... }
  public Signal dispatch(String eventName, Object[] argList){ ... }
  public Signal dispatch(String eventName){ ... }
  public Signal dispatchEnd(){ ... }
The first argument of the constructor provides the base name for the input file (the constructor adds a ".ruler" extension) containing the rule system schema definitions and the monitor definition, and for the output file (the constructor adds a ".output" extension). The RuleR monitor will send the (final) monitoring status and output events (not yet described) to the output file. The second argument specifies whether events dispatched to the monitor should be accompanied by a real-time stamp event (true for timing on). The first argument of the first dispatch method is the string that represents the name of the observation event. The second argument provides the list of arguments that are to be associated with the event. A second version is supplied for when there are no arguments. All the dispatch methods return a five-valued status result of the following type.

  public enum Signal {TRUE, STILL_TRUE, STILL_FALSE, FALSE, UNKNOWN}

The status Signal.TRUE means that all the constraints imposed by the monitor have now been satisfied and there are no further monitoring rules active. The status Signal.STILL_TRUE means that no monitoring constraints have yet been falsified; however, there are still monitoring rules active. This status condition will arise, typically, during the monitoring of a safety property. The status Signal.STILL_FALSE means that the monitoring constraints have not yet been satisfied, but further input may indeed satisfy them. This status condition will arise, typically, during the monitoring of a liveness property, when one is waiting for some eventuality to occur. The status Signal.FALSE means that the monitoring constraints have definitely been falsified and no further input can change the status. The status Signal.UNKNOWN means that the monitor has been unable to resolve the status of monitoring to one of the above values.
This may arise when the results of two monitors are composed in parallel, for example when one yields Signal.STILL_TRUE and the other yields Signal.STILL_FALSE.

Using AspectJ for instrumentation. We now show how the above interface can be used from within an AspectJ instrumentation of a Java application. The application is the following program, which we wish to monitor using the rule system SumCheck given above.

  public class SumCheck{
    static void question (int x, int y){ /* whatever */ }
    static void answer (int x){ /* whatever */ }
    static void end() { /* just to indicate the end */ }

    public static void main(String[] args) {
      question(1,1); answer(2);
      question(2,3); answer(5);
      question(4,5); answer(9);
      question(1,0); answer(10);
      question(2,1); answer(3);
      end();
    }
  }
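For comparison with the rule-based version, the response property that SumCheck monitors can also be sketched as a plain Java state machine run over the same trace. The class and method names below are ours; this is illustrative code, not RuleR.

```java
// A plain state machine for the response property of Section 3.1: every
// question(x, y) must be answered by answer(x + y) before the next question.
class SumChecker {
    private Integer expected = null;   // null = Check state, else Response(expected)
    private final java.util.List<String> errors = new java.util.ArrayList<>();

    void question(int x, int y) {
        if (expected != null)
            errors.add("Unexpected question! Previous one unanswered");
        expected = x + y;
    }

    void answer(int z) {
        if (expected == null) return;  // unsolicited answers are not constrained here
        if (z != expected)
            errors.add("Wrong answer! Expected " + expected + " but given " + z);
        expected = null;               // back to Check, as in both sub-rules
    }

    // 'forbidden Response' corresponds to ending with a pending question
    boolean ok() { return expected == null && errors.isEmpty(); }

    java.util.List<String> errors() { return errors; }
}
```

On the trace in the main method above, this checker flags exactly one violation, the answer 10 to question(1,0), matching the RuleR monitor's output shown below.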
The aspect SumCheckInst defines an instance of a RuleR monitor, using the constructor mentioned above, from the file "src/examples/SumCheck.ruler", which contains the example ruler definitions from above. Two pointcuts are defined, corresponding to calls of the methods question(...) and answer(...) in the application SumCheck.java. before advices define the instrumentation code, each one calling the dispatch method of the RuleR monitor instance ruler. In this example, if the dispatch method returns a Signal.FALSE status, an appropriate error message is printed on System.err and the system is terminated cleanly. The dispatchEnd method is invoked before the main application calls its end() method.

  import ruler.*;

  public aspect SumCheckInst{
    RuleR ruler = new RuleR("src/examples/SumCheck", false);

    pointcut question(int x, int y) :
      call(static void question(int, int)) && args(x, y);
    pointcut answer(int z) :
      call(static void answer(int)) && args(z);

    before(int x, int y) : question(x, y){
      ruler.dispatch("question", new Object[]{x,y});
    }
    before(int z) : answer(z){
      ruler.dispatch("answer", new Object[]{z});
    }
    before() : call(void end()){
      ruler.dispatchEnd();
    }
  }
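For reference, the arithmetic check that the SumCheck rule system performs (each answer must equal the sum of the arguments of the preceding question) can be mimicked in a few lines of plain Python. The function below is our own hypothetical sketch, not part of RuleR, and its step numbering is its own rather than RuleR's.

```python
def sum_check(events):
    """Check a trace of ("question", args) / ("answer", args) events:
    each answer(z) must satisfy z == x + y for the preceding
    question(x, y). Returns the indices of failing events."""
    failures = []
    expected = None
    for step, (name, args) in enumerate(events):
        if name == "question":
            expected = sum(args)
        elif name == "answer" and expected is not None:
            if args[0] != expected:
                failures.append(step)
            expected = None
    return failures
```

On the main method above, only the answer to question(1,0) fails the check.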
Running SumCheck as a Java/AspectJ application then results in the following output.

  Rule system SC.SumCheck running...
  Step = 0
  On monitoring step 8: Wrong answer! Expected 1 but given 10
  End of monitoring on step 11: status of SC.SumCheck is still_true

3.3 A Java API Example

In the Java collections framework, the Iterator interface provides three methods to support iteration over a collection: hasNext, next and remove. The method detail from the Sun Java 1.6 documentation [20] for the remove method states the following.
Rule Systems for Runtime Verification: A Short Tutorial
  Removes from the underlying collection the last element returned by the
  iterator (optional operation). This method can be called only once per
  call to next. The behavior of an iterator is unspecified if the underlying
  collection is modified while the iteration is in progress in any way other
  than by calling this method.
  Throws:
  UnsupportedOperationException - if the remove operation is not supported
  by this Iterator.
  IllegalStateException - if the next method has not yet been called, or the
  remove method has already been called after the last call to the next method.

Furthermore, when an iteration has no further elements, a call of the next method will raise the NoSuchElementException. We will write a RuleR monitor that pre-empts erroneous calls of the iterator methods. We will use the following AspectJ instrumentation code to dispatch an event on each and every occurrence of an iterator method call. In the current version of RuleR the dispatch method returns only the status value (Signal); other pertinent information, such as the rules used, error locations, etc., will be included in subsequent versions. We therefore currently have to include in the AspectJ instrumentation knowledge about errors, such as the error messages passed to the check method called from the before advices.

  public aspect iteratormonitor{
    RuleR ruler = new RuleR("src/examples/safeIteration", false);

    static void check(Signal s, String message){
      if (s==Signal.FALSE){
        System.err.println("Failure on " + message);
        System.exit(0);
      }
    }

    pointcut hasNext(Iterator i) :
      call(* java.util.Iterator+.hasNext()) && target(i);
    pointcut next(Iterator i) :
      call(* java.util.Iterator+.next()) && target(i);
    pointcut remove(Iterator i) :
      call(* java.util.Iterator+.remove()) && target(i);

    before(Iterator i): hasNext(i){
      ruler.dispatch("hasNext", new Object[]{i});
    }
    before(Iterator i): next(i){
      check(ruler.dispatch("next", new Object[]{i}),
            "No preceding call of hasNext");
    }
    before(Iterator i): remove(i){
      check(ruler.dispatch("remove", new Object[]{i}),
            "Must call next before remove on iterator");
    }
  }
Below we present a RuleR monitor which observes events issued immediately before calls to the iterator methods hasNext, next and remove. The events are supplied with the actual iterator object, so that multiple uses of iterators can be tracked at the same time. RuleR handles object references as weak references, so that the monitoring is transparent as far as garbage collection in the original application is concerned.

  ruler SafeIteratorCheck{
    observes hasNext(obj), next(obj), remove(obj);

    always Start{
      hasNext(i:obj) -> Next(i);
    }
    state Next(i:obj){
      next(i) -> Remove(i);
    }
    state Remove(i:obj){
      remove(i) -> Ok;
    }

    assert Start, Next, Remove;
    initials Start;
  }

Of the three rules defined in the above RuleR specification, the Start rule has always activation persistence. Whenever a hasNext event occurs for some iterator i, the state rule Next is activated to track uses of the next and remove methods on that specific iterator. If an associated next event occurs, then the state rule Remove is activated. The latter rule simply checks for an associated remove or hasNext event, taking no action (the Ok keyword) other than satisfying the rule and deactivating itself if either event occurs. So how does this system reject traces with two remove events and no intervening next event for the same iterator, or indeed two next events with no intervening hasNext event? There is one further rule, defined by the line starting with the keyword assert. It specifies that on every monitoring step one of the listed rule names must be applied successfully; if there is a monitoring step in which none of the rules applies, then the dispatch method returns a status signal of FALSE. Suppose, therefore, that monitoring has the rules Start and Remove(i) active for some iterator i, which means a remove event for iterator i is allowed. The Remove(i) rule will disappear as soon as a remove event occurs, after which only the Start rule remains active. If another remove(i) were to occur, no rule would be successfully used, breaking the constraint imposed by the assert directive. There are, of course, several different ways to specify
the desired behaviour, and this one was chosen simply to highlight the use of the assert directive.

The Java code fragment below contains a deliberate coding error, one that could easily occur in practice. myIt iterates over a list of numbers. Each list element is checked against another array of integers to see whether it is to be removed from the list (i.e., whether value is a multiple of an element of the removes array). Unfortunately, the programmer forgot to stop the inner loop after removing an undesirable value.

  Iterator myIt = myList.iterator();
  while (myIt.hasNext()){
    int value = myIt.next();
    for (int i = 0; i
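To see how the assert directive catches such errors, the SafeIteratorCheck rule system can be simulated in plain Python. The following is our own illustrative sketch, not RuleR itself; it follows the three printed rules literally and returns string verdicts instead of the Signal enum.

```python
def safe_iterator_monitor(trace):
    """Simulate SafeIteratorCheck on a trace of (event, iterator-id)
    pairs. Active rule instances are ('Next', i) and ('Remove', i);
    the Start rule is always active. If no rule applies on a step,
    the assert directive is violated and the verdict is FALSE."""
    active = set()
    for event, i in trace:
        used = False
        if event == "hasNext":                 # Start (always) rule
            active.add(("Next", i))
            used = True
        elif event == "next" and ("Next", i) in active:
            active.remove(("Next", i))
            active.add(("Remove", i))
            used = True
        elif event == "remove" and ("Remove", i) in active:
            active.remove(("Remove", i))
            used = True
        if not used:
            return "FALSE"       # no listed rule applied on this step
    return "STILL_TRUE"          # rules may still be active at the end
```

On the faulty fragment above, the second remove on the same iterator finds no active Remove rule, so no rule applies and the verdict is FALSE.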
recursively defined method, find, for search. We assume that the application is instrumented to report calls to, and returns from, the method find. We define two monitors, Trace and StatsGatherer. The first will determine the maximum depth reached in the associated structure to find an item and send the ratio of maximum depth to size of tree to a second monitor, which will collect and analyse statistics of use of the structure. The second monitor will make reports when undesirable performance occurs.

  ruler Trace(traceOut:obs){
    observes call(obj,int), return(obj,int);

    state Top{
      call(tree:obj,size:int) -> Stack(tree, 1, 1, size);
    }
    state Stack(tree:obj, level:int, max:int, size:int){
      call(x:obj, y:int) -> Stack(tree, level+1, max+1, size);
      return(x:obj, b:int)
        {: level==1
             {: b==1 -> traceOut(tree, 1.0*max/size), Top;
                default -> Top;
             :}
           default -> Stack(tree, level-1, max, size);
        :}
    }

    initials Top;
    outputs traceOut;
  }

The Trace monitor observes call and return events. The first argument of both event types is the tree object being searched. The second argument of a call event is the size of the tree structure, whereas the second argument of a return event gives the return value of the associated find method (1 means the item was found, 0 not found). Given an initial call event, the subsequent calls and returns relate to recursive invocations of the instrumented application's search method. The Trace monitor thus matches calls with returns and keeps track of the maximum depth reached. When a top-level return event occurs, a traceOut event is output if the search call was successful. The traceOut event carries two items of data: the tree reference and the ratio of maximum depth to the size of the tree.

  ruler StatsGatherer(traceIn:obs, G:double, B:double){
    locals report(obj, int, int);

    always Start{
      traceIn(t:obj, ratio:double), !Track(t,x:int,y:int)
        {: ratio < G -> Track(t,1,0);
           ratio > B -> Track(t,0,1);
           default -> Track(t,0,0);
        :}
    }
    state Track(tree:obj, Gs:int, Bs:int){
      traceIn(tree, ratio:double)
        {: ratio < G -> Track(tree, Gs+1, Bs);
           ratio > B -> Track(tree, Gs, Bs+1);
           default -> Track(tree, Gs, Bs);
        :}
      (Bs != 0) && (Gs != 0) && (2*Bs > 3*Gs) -> report(tree, Bs, Gs);
    }

    initials Start;
    outputs report;
  }

The events output by the Trace rule system schema are to be consumed by a monitor based on the rule system StatsGatherer given above. The schema has two arguments, G and B, of type double, which are used to categorize the ratio of depth of search to size of tree as good, ok or bad performance. The monitor is designed to report when twice the number of bad searches exceeds three times the number of good searches. StatsGatherer's initial rule has always activation persistence, but it only starts tracking results for a particular tree if it is not tracking that tree already. The variables t and ratio are bound by matching traceIn(t,ratio) against a (grounded) observation traceIn(...). Note that traceIn is a RuleR schema argument and will be replaced by the actual name of the event on instantiation of the schema. Thus the variable t appearing in the expression !Track(t, x:int, y:int) is already bound; the other two variables will be bound through attempted matches against an appropriate observation event. The second rule body of the state rule Track outputs a report when the desired performance criteria are not met. Again, instead of simply reporting the situation, one would want, at that stage, to instigate some change in the application structure to improve performance, e.g. re-shaping the tree, for which, undoubtedly, rather more data would need to be gathered. Finally, the monitor to be run is constructed by chaining together an instance of the Trace rule system schema with an instance of the StatsGatherer rule system schema, the latter requiring its parameters G and B to be instantiated with actual values (0.33 and 0.5).
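The effect of chaining the two schemas can be approximated in plain Python. The functions here are our own hypothetical sketch, not RuleR: in particular, the maximum depth is computed with max rather than by the rule system's counter, and trace_ratios handles only one tree at a time.

```python
def trace_ratios(events):
    """Sketch of the Trace schema for a single tree: events are
    ('call', tree, size) or ('return', tree, found). Emits
    (tree, max_depth / size) on each successful top-level return."""
    out, level, max_depth, size, tree = [], 0, 0, 0, None
    for ev in events:
        if ev[0] == "call":
            if level == 0:                       # top-level call
                tree, size, level, max_depth = ev[1], ev[2], 1, 1
            else:                                # recursive call
                level += 1
                max_depth = max(max_depth, level)
        elif level == 1:                         # top-level return
            if ev[2] == 1:                       # item found
                out.append((tree, max_depth / size))
            level = 0
        else:                                    # recursive return
            level -= 1
    return out

def stats_gatherer(ratios, G=0.33, B=0.5):
    """Sketch of StatsGatherer: count good (< G) and bad (> B) ratios
    per tree; report once both are non-zero and 2*bad > 3*good."""
    counts, reports = {}, []
    for tree, ratio in ratios:
        gs, bs = counts.get(tree, (0, 0))
        if ratio < G:
            gs += 1
        elif ratio > B:
            bs += 1
        counts[tree] = (gs, bs)
        if gs != 0 and bs != 0 and 2 * bs > 3 * gs:
            reports.append((tree, bs, gs))
    return reports

def monitor(events):
    """Chaining, analogous to T(info) >> S(info, 0.33, 0.5)."""
    return stats_gatherer(trace_ratios(events), 0.33, 0.5)
```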
  monitor{
    uses T: Trace, S: StatsGatherer;
    locals info(obj, double);
    run (T(info) >> S(info, 0.33, 0.5)) .
  }

3.5 Other Features

RuleR includes a number of other features to simplify and shorten the writing of monitoring specifications. These include: non-positional parameter naming; the notion of "success" rule sets, almost dual to the notion of "forbidden" rule sets; conditional, looping and parallel combinations of monitors; set and list types and associated operations;
and so on. The RuleR tutorial [7] documents the majority of these; however, RuleR is an experimental system and it changes at the whim of its authors. As briefly mentioned before, the expansion of these user-oriented features brings our specification language very close to a stream-functional programming language, enabling generic, compact, but sometimes obscure, specifications. Some masters-level, student-driven case studies with RuleR have been undertaken in the areas of (i) off-line monitoring of logs from an on-line assessment system, (ii) active on-line, or supervisory, monitoring, in which monitoring results trigger evolutionary action in the application system, and (iii) fault explanation. These small studies indicate that the underlying approach of RuleR holds promise for specialised applications, as has been further developed with the LogScope system that we describe in the following section.
4 The LogScope System

4.1 A Little History

The LogScope monitoring system was developed in Python, after the RuleR system, specifically to support testing of NASA's next Mars rover, the Mars Science Laboratory (MSL), to be launched in 2011. MSL is a compact-car-sized rover, developed at the Jet Propulsion Laboratory. It is programmed in more than 3 million lines of code (including auto-generated code), more than the code onboard all previous Mars missions combined. This trend in software growth is expected to continue for future Mars missions, making it a challenge to apply traditional formal methods. The system is highly multi-threaded, with approximately 160 threads. It is programmed in C by a team of 30+ programmers, and tested by a team of 10+ testers. Testing is complicated by the fact that the programming team has little time for activities not directly related to development of new software, and hence cannot be disturbed too much by a testing effort. This prevents, for example, an approach based on new test-specific code instrumentation, be it automated or not. In addition, the system is difficult to execute due to its tight connection with hardware. This makes test-input generation and multiple automated re-runs a challenging task. However, to our advantage, and independently of the effort described in this paper, the MSL flight software produces rich logs, which are stored in SQL databases. A log is a collection of time-stamped events, where an event is a mapping from fields to values (a record). Test scripts are written in Python to test these events. A script typically consists of a sequence of sub-tests, where a sub-test consists of submitting a command to the rover and then checking that, as a consequence, certain events happen or do not happen thereafter. Checking the occurrence (or non-occurrence) of events is done through various API calls.
These calls also check the values of various arguments to the events, which have to correlate in certain ways. Attempts have been made to run the Python test scripts concurrently with the MSL flight software, hence checking the occurrence of logging events on-the-fly as they happen. However, this online approach turned out to be problematic, because events are not necessarily observed in the order generated, due to delays in the system. It was therefore decided instead to perform off-line analysis of the logs stored in the SQL databases. It was furthermore perceived advantageous to construct a specification
language for writing properties about these logs. At this point a decision could have been made to apply RuleR. However, two constraints caused us to re-develop a variation of RuleR in Python. First, engineers seemed to experiment with an informal notion of temporal logic in their comments to the Python code, explaining what the code was supposed to do. We concluded that it might be useful to support a notation close to this, a notation that is not directly supported by RuleR. Second, it should be possible to integrate the specification framework with Python in a painless manner, supporting a mixture of Python and the domain-specific specification language, and minimizing the number of programming languages involved. It should be stated that it would have been perfectly possible to map the temporal logic to RuleR. This effort led to the development of LogScope. LogScope is a Python program that supports analysis of logs for testing purposes. The tool in principle takes as input a log and a specification of expectations with respect to the format of the log, and produces as output a report on violations of the specification. A log is a Python sequence containing Python dictionaries (maps from fields to values) as events. LogScope supports an automaton language that conceptually forms a subset of the RuleR language. The automaton language has also been influenced by the graphical RCAT state machine language [18] and by the textual state machine language RMOR [11] for monitoring C programs. LogScope furthermore adds a temporal logic with sequencing, which is translated into the automaton subset. Several systems exist supporting different logics, such as past time temporal logic [13], future time temporal logic [19], regular expressions [1], and state charts [10]. The MOP system [9] implements a series of such traditional logics as separate plugins, but within the same framework.
In MOP, data parameterization is handled separately from the interpretation of the logics. This is in contrast to RuleR and LogScope, where data parameterization is an integral part of the logic. Andrews and Zhang [2] offer a parameterized state machine framework similar to LogScope's parameterized state machines; their state machines are compiled into Prolog.

4.2 Overview of LogScope

The LogScope specification language consists of two sub-languages: (i) a higher-level pattern language, much resembling a temporal logic, and (ii) a lower-level, but more expressive, RuleR-like automaton language. Patterns are automatically translated to automata. The intention is that the user should mostly write patterns, but automata can become necessary in cases where the extra expressive power is needed. The pattern specification language can be characterized by the following incomplete grammar (the non-terminal event is not defined):

  pattern     → "pattern" NAME ":" event "=>" consequence
  consequence → event
              | "!" event
              | "[" consequence_1, ..., consequence_n "]"
              | "{" consequence_1, ..., consequence_n "}"
A pattern has a name that can be referred to in violation reporting. On the occurrence of an event (the left-hand side of the => symbol), a consequence is expected to follow. A consequence can either be an event that should occur eventually, or an event that should not ('!') occur eventually. Consequences can also be composed, ordered ([ ... ]) or unordered ({ ... }). An ordered sequence of consequences has to be fulfilled in the order given, in contrast to an unordered sequence. The examples below will illustrate these concepts, as well as the details of events.

The automaton language is conceptually a simple subset of the RuleR specification language. What is referred to as a rule system in RuleR is in LogScope referred to as an automaton. Each individual rule of an automaton is attached to a source state, and can only be triggered by the occurrence of a single monitored event. The result of a rule can only be a conjunction of target states (no disjunction). States cannot be negated: a state is active until left, as in traditional state machines. States can be parameterized with data, as in RuleR. However, automata cannot be parameterized, and there are no operations defined on automata (such as chaining). The main differences from traditional state machines are the data parameterization of states; the fact that a transition can enter multiple target states with conjunctive semantics (they must all lead to success); the predicates and actions on labels; and the different forms of states, as will be illustrated. LogScope's automaton language represents an interesting, practically effective subset of RuleR.
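The conjunctive-target, parameterized-state semantics just described can be sketched as a single step function in Python. This is our own illustration; the names and encodings are assumptions, and 'always' and 'hot' state handling is omitted.

```python
def step(active, event, rules):
    """One step of a LogScope-style automaton: 'active' is a set of
    parameterized states, e.g. ('S2', ('PIC_4', 2000)). 'rules' maps a
    state name to a function taking (params, event) and returning a set
    of target states (conjunctive: all are entered), or None if no rule
    of that state fires on this event."""
    nxt = set()
    for name, params in active:
        targets = rules[name](params, event)
        if targets is None:
            nxt.add((name, params))   # no rule fired: state stays active
        else:
            nxt |= targets            # enter all targets conjunctively
    return nxt
```

For example, a state whose single rule fires on "command" and enters a parameterized target would be written as a lambda returning a one-element set, and any other event leaves the state active.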
5 LogScope by Example
5.1 Running LogScope on a Log

A log is a sequence of events. LogScope more specifically analyzes a Python sequence of events, where an event is assumed to be a Python dictionary: a mapping from field names (strings) to values (in any of Python's data formats). A special field named "OBJ_TYPE" must be defined in all events, and must be mapped to a string indicating what kind of event it concerns. In MSL, five such kinds of events are considered:

– COMMAND: commands issued to the spacecraft (input to the system).
– PRODUCT: science results produced by the software/hardware (output of the system).
– EVR: internal transitions (EVent Report).
– CHANNEL: samplings of the spacecraft state.
– CHANGE: delta changes to the spacecraft state.
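Since every event must carry the OBJ_TYPE field, a log can be partitioned by event kind. The helper below is a hypothetical illustration of this log representation, not part of LogScope.

```python
def partition_log(log):
    """Group a LogScope-style log (a list of dictionaries) by the
    mandatory OBJ_TYPE field."""
    kinds = {}
    for event in log:
        kinds.setdefault(event["OBJ_TYPE"], []).append(event)
    return kinds
```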
We shall as an example consider the following simplified script file, which creates a log of 5 events and then calls LogScope to analyze it against the specification in a file "specs/msl-test":

  import logscope

  log = [
    {"OBJ_TYPE" : "COMMAND", "Type" : "FSW", "Stem" : "PIC_4",
     "Number" : 231, "Bit" : 1, "Size" : 2000},
    {"OBJ_TYPE" : "EVR", "Dispatch" : "PIC_4", "Number" : 231},
    {"OBJ_TYPE" : "CHANNEL", "DataNumber" : 5},
    {"OBJ_TYPE" : "EVR", "Success" : "PIC_4", "Number" : 231},
    {"OBJ_TYPE" : "PRODUCT", "ImageSize" : 1200}
  ]

  logscope.monitor(log, "specs/msl-test")

Note that normally the log will be produced by the running system to be monitored. The log corresponds to a command "PIC_4" (take a picture) being fired, followed by a dispatch of that command, then a channel observation, then a success of the command, and finally a data product. Commands in the log are numbered consecutively for identification, and related events are given the same number as the command causing them. In the subsequent sub-sections we shall compose the specification in the file "specs/msl-test".

5.2 Simple Properties

Assume we want to express the property:

  R1: "Whenever a flight software command is issued, then eventually an EVR
  should indicate success of that command".

Our log satisfies this specification, since event number 1 (the command) is matched by event number 4, the success. The following LogScope pattern formalizes this requirement:

  pattern P1:
    COMMAND{Type:"FSW", Stem:x, Number:y} =>
      EVR{Success:x, Number:y}

This pattern states that if a flight software ("FSW") command is observed in the log, with the Stem (name) field having some unknown value x and the Number field having some unknown value y, then later in that log an EVR should occur with a Success field having x as value and a Number field having y as value. Between the {...} brackets occur zero, one or more constraints, each consisting of a field name (without quotes) and a range specification. We saw two forms of range specifications: the string "FSW" for the field Type, and the names x and y for the other fields. A string constant represents a concrete constraint: the field in the event has to match this value exactly (by Python equality ==). One can also provide an integer as such a concrete range constraint.
A name (x and y in this case) has one of two meanings: (i) either the name has not been bound before, and it is bound to the value of the field in the current event; or (ii) it has been bound before, and it functions as a constraint: the field now has to have the value the name was bound to by the previously binding event.

A consequence can also be the negation ('!') of an event. Suppose we want to state the following property:

  R2: "Whenever a flight software command is issued, then thereafter no EVR
  indicating failure of that command should occur".

This can be expressed by the following pattern, also satisfied by our log:

  pattern P2:
    COMMAND{Type:"FSW", Stem:x, Number:y} =>
      !EVR{Failure:x, Number:y}
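The bind-or-constrain semantics of names can be sketched in Python. This matcher is our own illustration, not LogScope's implementation; we mark variables with a '?' prefix purely to distinguish them from constant constraints.

```python
def match(pattern, event, bindings):
    """Match an event pattern against one event. pattern maps field
    names to either a constant constraint or a variable (a string
    starting with '?'). Returns the updated bindings on success,
    or None if the event does not match."""
    env = dict(bindings)
    for field, spec in pattern.items():
        if field not in event:
            return None
        value = event[field]
        if isinstance(spec, str) and spec.startswith("?"):
            if spec in env:
                if env[spec] != value:   # already bound: acts as a constraint
                    return None
            else:
                env[spec] = value        # first occurrence: bind the variable
        elif value != spec:              # concrete constraint (Python ==)
            return None
    return env
```

On the example log, matching the command binds x and y, and those bindings then constrain the subsequent EVR events.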
5.3 Composite Properties

As an example, consider the following requirement:

  R3: "Whenever a flight software command is issued, there should follow a
  dispatch of that command, and then exactly one successful execution. There
  should be no dispatch failure before the dispatch, and no failure between
  dispatch and success".

This property can be stated as follows:

  pattern P3 :
    COMMAND{Type:"FSW", Stem:x, Number:y} =>
      [
        !EVR{DispatchFailure:x, Number:y},
        EVR{Dispatch:x, Number:y},
        !EVR{Failure:x, Number:y},
        EVR{Success:x, Number:y},
        !EVR{Success:x, Number:y}
      ]

The consequence consists of a sequence (in square brackets [...]) of (sub-)consequences: events and negations of events. The ordering means that the dispatch should occur before the success, and the negations state what should not happen in between the non-negated events. It is also possible to indicate an unordered arrangement of events. For example, suppose we are not concerned about the order in which events occur, except that after a success there should not follow another success. This can be formulated as follows:

  pattern P4 :
    COMMAND{Type:"FSW", Stem:x, Number:y} =>
      {
        EVR{Dispatch:x, Number:y},
        [EVR{Success:x, Number:y}, !EVR{Success:x, Number:y}],
        !EVR{DispatchFailure:x, Number:y},
        !EVR{Failure:x, Number:y}
      }

The curly brackets { ... } indicate an unordered collection of consequences. Being unordered means that the non-negated events can occur in any order, and the negations have to hold at all times after the triggering command. However, nested inside the { ... } construct we have an ordered sequence [ ... ], expressing that one success should not be followed by another success.

5.4 Event Predicates and Actions

Events can be associated with predicates and actions. Predicates perform more sophisticated checks on the values of bound variables and consequently restrict the matching. Actions are executed, with side effects on a global state, when events and their predicates match.
Event predicates as well as actions may refer to user-introduced Python code. The following example formalizes the requirement:

  R4: "After a picture command (a command whose name starts with "PIC")
  should follow, before the occurrence of the next flight software command,
  a channel reading of the DataNumber bit-vector (integer) variable in which
  bit 0 equals the value of the command's Bit field; subsequently should
  follow exactly one image product, and it should be of a size less than the
  command's Size field".

  {:
    def bit(p,n):
      return (int(n) >> int(p)) & 1
  :}

  pattern P5 :
    COMMAND{Type:"FSW", Stem:x, Bit:y, Size:z}
      where {: x.startswith("PIC") :} =>
      [
        CHANNEL{DataNumber:d} where {: bit(0,d) == y :},
        PRODUCT{ImageSize:s} do {: assert s < z :},
        !PRODUCT{}
      ] upto COMMAND{Type:"FSW"}

The specification starts with a definition, in Python, of the function bit(p,n), returning the bit in position p (counted from the right) of the number n. The Python code must be enclosed within the symbols {: ... :} (and occur at the beginning of the specification file). An atomic predicate in a pattern definition can be an arbitrary Python expression, also delimited by the symbols {: ... :}, as in: {: bit(0,d) == y :}. Predicates can be composed using the traditional Boolean operators and, or, not, and brackets ( ... ). The do statement associated with the PRODUCT event expresses that when a product is observed, that Python statement is executed; in this case it simply executes an assert statement testing the size of the product. Note that this is different from an event of the form:

  PRODUCT{Stem:x, ImageSize:s} where {: s < z :}

This latter event would cause the monitor to wait until a product matching the condition was observed, consequently ignoring any badly sized product before that.

5.5 Translation to Automata

Patterns are translated into the RuleR-like automaton language. The following automaton is the result of translating the last introduced pattern, P5, above.
  automaton P5 {
    always S1 {
      COMMAND{Type:"FSW", Stem:x, Bit:y, Size:z}
        where {: x.startswith("PIC") :} => S2(y,z)
    }
    hot state S2(y,z) {
      CHANNEL{DataNumber:d} where {: bit(0,d) == y :} => S3(z)
      COMMAND{Type:"FSW"} => error
    }
    hot state S3(z) {
      PRODUCT{ImageSize:s} do {: assert s < z :} => S4
      COMMAND{Type:"FSW"} => error
    }
    state S4 {
      PRODUCT{} => error
      COMMAND{Type:"FSW"} => done
    }
  }
An automaton is expressed in terms of states and transitions between states, triggered by events. Events are exactly as in patterns, including predicates and actions. Just as events can be parameterized with values, as we have seen above, so can states, as in RuleR, hence carrying values produced by incoming transitions. The automaton has four states: S1, S2, S3 and S4. State S1 has one exiting transition, labelled with the command event and entering the parameterized state S2(y,z). State S1 is an always state, meaning that the state remains active after the detection of a command, in order to allow for the detection of further commands. State S2 is a hot state, meaning that this state must be left before the end of the log; otherwise an error is reported. The state has a transition to state S3(z), labelled with a channel event predicated with the bit constraint on the DataNumber. The transition COMMAND{Type:"FSW"} => error comes from the scoping upto-construct; it represents the requirement that the channel observation must be observed before the next flight software command. The other COMMAND transitions, in states S3 and S4, also derive from the upto-construct. State S4 is an un-parameterized normal state (not an always state and not a hot state), monitoring that no second product occurs before the next command.
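The control skeleton of this automaton, ignoring the parameters, predicates and actions of the real P5, can be simulated in a few lines of Python. This is our own sketch, not LogScope; event names are simplified to plain strings.

```python
def run_automaton(log):
    """Simulate the P5 skeleton on a log of event names. S1 is an
    'always' state and is never left; hot states S2 and S3 must be
    left before the end of the log. Returns the list of errors."""
    active = {"S1"}
    errors = []
    for ev in log:
        nxt = {"S1"}
        if ev == "command":            # S1's transition fires on every command
            nxt.add("S2")
        for s in active - {"S1"}:
            if s == "S2":
                if ev == "channel":
                    nxt.add("S3")
                elif ev == "command":
                    errors.append("S2: command before channel")
                else:
                    nxt.add("S2")
            elif s == "S3":
                if ev == "product":
                    nxt.add("S4")
                elif ev == "command":
                    errors.append("S3: command before product")
                else:
                    nxt.add("S3")
            elif s == "S4":
                if ev == "product":
                    errors.append("S4: second product")
                elif ev == "command":
                    pass               # 'done': the state is simply dropped
                else:
                    nxt.add("S4")
        active = nxt
    # hot states still active at the end of the log are errors
    errors += ["hot state " + s + " never left"
               for s in sorted(active & {"S2", "S3"})]
    return errors
```

The well-behaved log of Section 5.1 (command, channel, product) produces no errors; a second product, or a command issued while a hot state is pending, does.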
6 Concluding Remarks

The two systems presented in this tutorial, RuleR and LogScope, are based on the same underlying principle: a specification consists of a set of rules operating on a collection of states (disjuncts), each of which represents a possible path to success, and each of which is a collection of facts (conjuncts), where a fact is a named record of field-value pairs. In LogScope, disjunction is not supported and only one state is active at any point in time. While the RuleR language focuses on defining a general and powerful rule language, the LogScope language focuses on a temporal logic and its translation into an automaton-like subset of this general rule language. RuleR has grown considerably from the simple target language for interpreting different temporal logics, but its basic core preserves the simplicity of the original propositional system.
RuleR has the flavour of a stream-functional programming language. This raises the question of the relationship between runtime verification languages and programming languages. For example, is a functional programming language better suited for runtime verification, or is it preferable to use the programming language of the application being monitored, with some additional monitoring-oriented features, in a sense creating an integrated specification and programming language? The work presented here explores the first choice (a functional language), leaving the alternative an open question. In this paper RuleR was applied to monitor Java programs online using aspects, whereas LogScope was used for monitoring logs offline. However, RuleR can just as well be used for log analysis, and supports both "infinite"-trace online monitoring and finite-trace offline monitoring. Similarly, since the monitoring algorithm in LogScope is equivalent to the one in RuleR, LogScope can be used for online monitoring, although LogScope only makes two-valued verdicts about traces. Current work in progress consists of defining a new system, RuleR V2.0, which incorporates lessons learned from both these systems. Indeed, the users of LogScope have found the temporal logic pattern language intuitively easy and natural; one goal is to integrate a variant of the temporal logic in RuleR V2.0. An important aspect of the new system will be optimization of the algorithms used, in particular finding efficient event-driven rule indexing methods.
Acknowledgements

The first two authors would like to thank the Royal Academy of Engineering for the award of a Distinguished Visiting Fellowship, which enabled Klaus Havelund to spend time at the School of Computer Science in Manchester in order to progress elements of the work presented in this paper.
References 1. Allan, C., Avgustinov, P., Christensen, A.S., Hendren, L., Kuzins, S., Lhot´ak, O., de Moor, O., Sereni, D., Sittamplan, G., Tibble, J.: Adding trace matching with free variables to AspectJ. In: OOPSLA 2005. ACM Press, New York (2005) 2. Andrews, J.H., Zhang, Y.: General test result checking with log file analysis. IEEE Transactions on Software Engineering 29(7), 634–648 (2003) 3. Banieqbal, B., Barringer, H.: Temporal logic with fixed points. In: Banieqbal, B., Pnueli, A., Barringer, H. (eds.) Temporal Logic in Specification. LNCS, vol. 398, pp. 62–74. Springer, Heidelberg (1989) 4. Barringer, H., Fisher, M., Gabbay, D., Gough, G., Owens, R.: MetateM: an introduction. Formal Aspects of Computing 7(5), 533–549 (1995) 5. Barringer, H., Goldberg, A., Havelund, K., Sen, K.: Rule-based runtime verification. In: Steffen, B., Levi, G. (eds.) VMCAI 2004. LNCS, vol. 2937, pp. 44–57. Springer, Heidelberg (2004) 6. Barringer, H., Rydeheard, D., Havelund, K.: Rule systems for run-time monitoring: from Eagle to RuleR. In: Sokolsky, O., Tas¸ıran, S. (eds.) RV 2007. LNCS, vol. 4839, pp. 111–125. Springer, Heidelberg (2007) 7. Barringer, H., Rydeheard, D., Havelund, K.: RuleR: A tutorial guide (2008), http://www.cs.man.ac.uk/˜howard/LPA.html
H. Barringer et al.
8. Barringer, H., Rydeheard, D., Havelund, K.: Rule systems for run-time monitoring: from Eagle to RuleR. Journal of Logic and Computation (2009); Advance Access published on November 21 (2008), doi:10.1093/logcom/exn076
9. Chen, F., Roşu, G.: MOP: An efficient and generic runtime verification framework. In: Object-Oriented Programming, Systems, Languages and Applications, OOPSLA 2007 (2007)
10. Drusinsky, D.: Modeling and Verification using UML Statecharts, 400 pages. Elsevier, Amsterdam (2006)
11. Havelund, K.: Runtime verification of C programs. In: Suzuki, K., Higashino, T., Ulrich, A., Hasegawa, T. (eds.) TestCom/FATES 2008. LNCS, vol. 5047, pp. 7–22. Springer, Heidelberg (2008)
12. Hodkinson, I.M., Reynolds, M.: Separation - past, present, and future. In: Artëmov, S.N., Barringer, H., d'Avila Garcez, A.S., Lamb, L.C., Woods, J. (eds.) We Will Show Them! vol. 2, pp. 117–142. College Publications (2005)
13. Kim, M., Kannan, S., Lee, I., Sokolsky, O.: Java-MaC: a run-time assurance tool for Java. In: Proc. of the 1st International Workshop on Runtime Verification (RV 2001). ENTCS, vol. 55(2). Elsevier, Amsterdam (2001)
14. Lange, M.: Alternating context-free languages and linear time mu-calculus with sequential composition. Electr. Notes Theor. Comput. Sci. 68(2) (2002)
15. Moszkowski, B.: Executing temporal logic programs. Cambridge University Press, Cambridge (1980)
16. Moszkowski, B.C., Manna, Z.: Reasoning in interval temporal logic. In: Clarke, E.M., Kozen, D. (eds.) Logic of Programs 1983. LNCS, vol. 164, pp. 371–382. Springer, Heidelberg (1984)
17. Pnueli, A.: The temporal logic of programs. In: 18th Annual Symposium on Foundations of Computer Science, pp. 46–57. IEEE Computer Society, Los Alamitos (1977)
18. Smith, M., Havelund, K.: Requirements capture with RCAT. In: 16th IEEE International Requirements Engineering Conference (RE 2008). IEEE Computer Society, Barcelona (2008)
19. Stolz, V., Bodden, E.: Temporal assertions using AspectJ. In: Proc. of the 5th International Workshop on Runtime Verification (RV 2005). ENTCS, vol. 144(4). Elsevier, Amsterdam (2005)
20. Sun Microsystems, Inc.: Java Platform, Standard Edition 6, API Specification (2009), http://java.sun.com/javase/6/docs/api/
21. Wolper, P.: Temporal logic can be more expressive. Information and Control 56 (1983)
Verification, Testing and Statistics

Sriram K. Rajamani
Microsoft Research India
[email protected]
Though formal verification is the holy grail of software validation, practical applications of verification run into two major challenges. The first challenge is in writing detailed specifications, and the second challenge is in scaling verification algorithms to large software. In this talk, we present possible approaches to address these problems:

– We propose using statistical techniques to raise the level of abstraction and automate the tedium in writing detailed specifications. We present our experience with the Merlin project [4], where we have used probabilistic inference to infer specifications for secure information flow, and discovered several vulnerabilities in web applications.
– We propose combining testing with verification to help scalability and reduce false errors. We present our experience with the Yogi project [5,2,1,3], where we have built a verifier that combines static analysis with testing to find bugs and verify properties of low-level systems code.

Acknowledgment. We thank our collaborators Anindya Banerjee, Nels Beckman, Bhargav Gulavani, Patrice Godefroid, Tom Henzinger, Yamini Kannan, Ben Livshits, Aditya Nori, Rob Simmons, Sai Tetali and Aditya Thakur.
References

1. Beckman, N.E., Nori, A.V., Rajamani, S.K., Simmons, R.J.: Proofs from tests. In: ISSTA 2008: International Symposium on Software Testing and Analysis, pp. 3–14. ACM Press, New York (2008)
2. Godefroid, P., Nori, A.V., Rajamani, S.K., Tetali, S.: Compositional May-Must Program Analysis: Unleashing the Power of Alternation. Microsoft Research Technical Report MSR-TR-2009-2, Microsoft Research (2009)
3. Gulavani, B.S., Henzinger, T.A., Kannan, Y., Nori, A.V., Rajamani, S.K.: SYNERGY: A new algorithm for property checking. In: FSE 2006: Foundations of Software Engineering, pp. 117–127. ACM Press, New York (2006)
4. Livshits, B., Nori, A.V., Rajamani, S.K., Banerjee, A.: Merlin: Specification Inference for Explicit Information Flow Problems. In: PLDI 2009: Programming Language Design and Implementation. ACM Press, New York (to appear, 2009)
5. Nori, A.V., Rajamani, S.K., Tetali, S., Thakur, A.V.: The Yogi Project: Software Property Checking via Static Analysis and Testing. In: Kowalewski, S., Philippou, A. (eds.) TACAS 2009: Tools and Algorithms for Construction and Analysis of Systems. LNCS, vol. 5505, pp. 178–181. Springer, Heidelberg (2009)
S. Bensalem and D. Peled (Eds.): RV 2009, LNCS 5779, p. 25, 2009. © Springer-Verlag Berlin Heidelberg 2009
Type-Separated Bytecode – Its Construction and Evaluation

Philipp Adler and Wolfram Amme
Institute of Computer Science, Friedrich-Schiller-University Jena, Jena, Germany
{Philipp.Adler,Wolfram.Amme}@uni-jena.de
Abstract. A lot of constrained systems still use interpreters to run mobile applications written in Java. These interpreters demand only a few resources. On the other hand, it is difficult to apply optimizations during the runtime of the application. Annotations could be used to achieve simpler and faster code analysis, which would allow optimizations even for interpreters on constrained devices. Unfortunately, there is no viable way of transporting annotations to, and verifying them at, the code consumer. In this paper we present type-separated bytecode, an intermediate representation which allows annotations to be transported safely as type-extensions. We have implemented several versions of this system and show that it is possible to obtain performance comparable to Java Bytecode, even though we use a type-separated system with annotations.
1 Introduction
Platform-independent mobile code like Java Bytecode is executed using Just-in-Time (JIT) compilation or interpreters. Since mobile code, in particular Java Bytecode, is mostly unoptimized when it arrives at the runtime system, a JIT compiler often applies several analysis and optimization techniques to run the mobile program faster and more efficiently. The optimizations performed rely on analyses which are carried out concurrently with program execution. As a consequence, such optimizations can become very expensive in terms of memory use and CPU performance. The memory used by the execution environment also increases because the infrastructure for the optimizations must be implemented. On the other hand, a lot of runtime information about the program and target architecture is known, which enables powerful optimizations, even global ones regarding the whole program. Common optimizations are escape and side-effect analysis [1,2], which can be used for further optimizations such as stack allocation [3] or load-store elimination [4]. Program annotations have been suggested to improve the code generation or verification process of transmitted mobile code. The term program annotation is used as a synonym for code information added to the mobile code during its generation. This information can be used by the consumer side of a mobile system to speed up optimizations or increase the security of a given program. The
main challenge, after transferring mobile code to the runtime environment, is the verification of the transmitted annotations. Since annotations are additional information which is derived from the program and does not belong to the underlying mobile code, their verification is complicated. Therefore, in most projects program annotations are assumed to be sound [5,6] and are not subjected to further scrutiny. However, if the code consumer relies on the annotations but cannot prove their correctness, a semantically incorrect transformation of the program code may occur and harmful behavior could be the result. A number of different approaches have been suggested for the safe transport of program annotations, e.g. grammar-based compression [7], type-extension [8] or annotations of data-flow results [9]. The type-separated approach, as presented in [10], was originally used for the construction of the mobile code format SafeTSA [11], and tries to express program annotations through an extension of the underlying type model. The advantage of such a technique is that the verification of program annotations can be accomplished by simple type checking. On computers like desktop machines, the additional use of memory and CPU power for optimizations and annotations is in general not a problem. For constrained devices, e.g. cell phones or PDAs, this issue is more problematic, since available memory and CPU power are much more restricted during program execution. As a consequence, instead of JIT and adaptive compilers, interpreters are utilized most of the time on such systems. In principle, optimizations could also be performed during interpretation. However, on constrained devices, restructuring of programs is often not even possible, since the mobile code must not be changed on the device. In our project, we are interested in the development of optimizations for the improvement of interpreter applications.
In particular, we are searching for optimizations that are optimistically performed at program compile time, and which can be undone, if necessary, during program interpretation without program restructuring. In this context, program annotations deliver information that is useful for resetting incorrectly performed optimizations. For the safe transport of such program annotations, we have developed the intermediate representation type-separated bytecode. As its name suggests, annotations are expressed as simple type-extensions in this intermediate representation and can therefore be verified by ordinary type-checking. In this paper, we introduce the basic concepts of type-separated bytecode and present experimental results that we gathered when mapping our type-separated machine model to a real virtual machine. The paper is structured as follows: it begins with a short overview of related work in Section 2. Section 3 describes the structure of type-separated bytecode and shows its application by examples. Section 4 presents different possible implementations of type-separated bytecode and gives measurements concerning its runtime behavior. Section 5 concludes the paper and discusses future project activities.
2 Related Work
There is a wide range of intermediate representations solving different aspects of mobile code. Besides the stack-based techniques used in bytecodes, there are mainly two other kinds of representation: syntax-oriented and proof-carrying. Syntax-oriented representations encode the abstract syntax tree of a program in a special way which allows an efficient transport of high-level program information. This high-level information can be used in particular by a JIT compiler for better code generation. Like nearly every intermediate representation, syntax-oriented mobile code formats can also be used as input for an interpreter. However, this kind of interpretation is often inefficient in the presence of recursive function calls [12]. Proof-carrying techniques transfer proofs together with program code. Proofs are especially used for guaranteeing safety properties and must be successfully validated before program execution [13]. Although this technique makes the transport of mobile programs quite secure, it is not often used in interpreters because of its complexity and high memory requirements. There is some research that suggests the use of program annotations in a JIT compiler. In [14] and [15], annotation frameworks are introduced in which the JIT compiler utilizes annotations generated by the Java front-end. These annotations carry information concerning optimizations. Thus, high-performance native code can be produced without performing costly analyses and transformations. One major problem of this approach is that these annotations are not verifiable and are assumed to be sound. Nearly all other available annotation techniques, like those presented in [5], [16], and [6], have this shortcoming, too. Annotations are simply transported as code attributes or, similarly, without protection against manipulation. If a code consumer relies on provided but manipulated information, semantically incorrect transformations may occur, which can result in serious security issues.
For example, while transmitting information for bounds check removal [6] or interprocedural side-effect optimizations [16], malicious code could lead to the elimination of checks which would fail in an unoptimized version. In the worst case, computer attacks and data loss may be the consequence. With other annotations, the threat is not as high as mentioned before. When transmitting helpful information for program improvements like virtual register assignment [5], manipulation results in worse runtime behavior, but apart from that the program semantics are not altered. Nevertheless, manipulations made in this way are usable for denial-of-service attacks and similar threats. To reduce the overhead for executing mobile programs, a reduction in verification time, especially for constrained devices, is desirable. In [17], the verification process is split into two parts, one performed at the code producer and the other at the code consumer. At the code producer, verification information is constructed and transmitted to the consumer as a verification certificate, which can be understood as an annotation, together with the mobile code. There, a lightweight verification consisting of a check of code and certificate is done, requiring less time and space than the normal verification algorithm. By construction, these verification certificates are tamper-proof and verifiable. However, the technique is hard to generalize and difficult to apply to other domains.
Some newer research, introduced in [18] and [19], specifies a technique which transports annotations containing escape-analysis results in a safe and verifiable manner. The idea is to extend the underlying type system by additional types representing "capturedness", the property of whether a reference escapes, possibly escapes, or does not escape. This annotation procedure is tamper-proof. If the state is changed to escapes or possibly escapes, nothing more than an optimization chance is lost. The other way round, the change of type produces an erroneous program, which is rejected. In principle, both of these techniques are based on type-separation and are therefore excellent candidates for integration into type-separated bytecode. A more general concept for the transport of safe and verifiable program annotations is described in [9]. With this technique, parts of the result of a data-flow analysis, performed on the producer side, are added to the mobile code representation of a program. On the consumer side, the entire result of the data-flow analysis is then safely restored from the annotation points by a repeated application of the data-flow algorithm. Although this technique can be used for the annotation of all program information which can be expressed as a data-flow problem, its application on constrained devices is questionable because of its high memory requirements.
3 Type-Separated Bytecode

So far, the concept of type separation has been used only in syntax-oriented intermediate representations. In this section we describe, conceptually, the integration of type separation into stack-oriented intermediate representations and demonstrate how this technique can be used for program annotation.

3.1 A Type-Separated Stack Model
Type-separation stands for a technique in which values of different types are kept separate. The concept of type-separation can be applied to nearly every intermediate representation by transforming its underlying machine model into a strongly typed architecture.

Fig. 1. Example of type-separation: a single untyped stack and register set holding mixed int and Obj values (left) is separated into one typed stack and register set per type (right)
Fig. 2. Example of stack-effects: arrayload_int pops an index from the int-stack and an array from the int-array-stack and pushes the loaded element onto the int-stack; int2float pops from the int-stack and pushes the converted value onto the float-stack; downcast_{String,Object} pops from the String-stack and pushes onto the Object-stack
While the implied machine model of ordinary bytecode is one with a single stack and a limited register plane for all types, type-separated bytecode uses a model in which there is a separate stack and register plane for every type. In addition, stack and register planes are created implicitly, taking into account the predefined types, imported types, and local types occurring in the mobile program. As an example of the functioning of type-separated bytecode we refer to Figure 1. On the left side of the figure, a snapshot of the stack machine during execution of ordinary stack-based code is presented. In this stack machine, one untyped stack and register plane exist, in which values of two different types, i.e. integer values and a reference to an object of class Object, are stored. In contrast, in the type-separated stack model, shown on the right side of Figure 1, two typed stacks and their corresponding register planes are created. Each stack and register plane in this stack machine now has a fixed type attached, which restricts assignments of values to the corresponding stack and register type. In the type-separated stack model, it is always defined for each instruction where the operands come from and where the result goes. In Figure 2, the first instruction is an array access statement. It loads an int-value from an int-array at the given index. The instruction implicitly knows that the operands come from the int-stack and the int-array-stack, and that the result must be pushed onto the int-stack. Furthermore, in type-separated bytecode, specific strongly typed instructions are introduced for type conversions. There exist both primitive type conversion instructions, such as int-to-float conversion (see Figure 2), and two special cast operations for reference types. The latter consist of a downcast-instruction, which can be validated statically, and an upcast-instruction, which must be checked dynamically. The principal structure of a downcast-instruction is also depicted in Figure 2. It takes a value from the stack assigned to the type String and places the result onto the stack assigned to the type Object. The validity of downcast-instructions must be checked only once, during the verification process; no further check is needed at runtime.
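The stack-effects above can be made concrete with a minimal Java sketch of a type-separated machine. The class and method names here are ours, not the paper's instruction set, and a real type-separated virtual machine dispatches on opcodes rather than Java methods; the point is only that each instruction statically knows which typed stacks it touches.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: one stack per type, so every instruction's
// operand and result stacks are fixed by the instruction itself.
class TypeSeparatedMachine {
    final Deque<Integer> intStack = new ArrayDeque<>();
    final Deque<Float> floatStack = new ArrayDeque<>();
    final Deque<int[]> intArrayStack = new ArrayDeque<>();
    final Deque<String> stringStack = new ArrayDeque<>();
    final Deque<Object> objectStack = new ArrayDeque<>();

    // arrayload_int: index from the int-stack, array from the
    // int-array-stack, result pushed onto the int-stack.
    void arrayloadInt() {
        int index = intStack.pop();
        int[] array = intArrayStack.pop();
        intStack.push(array[index]);
    }

    // int2float: a primitive conversion moves a value between typed stacks.
    void int2float() {
        floatStack.push((float) (int) intStack.pop());
    }

    // downcast_{String,Object}: statically valid move from the
    // String-stack to the Object-stack; no runtime check is needed.
    void downcastStringToObject() {
        objectStack.push(stringStack.pop());
    }
}
```

An upcast-instruction would look symmetric but has to test the runtime class of the popped reference before pushing it onto the more specific stack.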
Comparable to ordinary bytecode, when using type-separated bytecode, proper use of operands and structural integrity must still be guaranteed prior to program execution. This can be achieved using a verification similar to Java Bytecode's verification process. However, to simplify this verification process for type-separated bytecode, some additional constraints are defined. For instance, in type-separated bytecode no subroutines are allowed, and the construction of objects is combined with the corresponding constructor call. Further constraints specifically help to detect empty stacks in branches and pre-initialized registers. This reduces verification to a single-pass data-flow algorithm using only one active stack-and-register map. During verification, there is no need to check registers, since, in type-separated bytecode, they are pre-initialized and there are no dangling uninitialized object instances. Therefore, it is only necessary to check that the number of registers is not exceeded and that all operands used by program instructions are available on their corresponding stacks, i.e. stacks are not allowed to underflow. Consequently, verification of type-separated bytecode programs is inexpensive and can in principle be realized using simple element counters assigned to each stack. In this context, the verification process can be seen as an abstract program execution, in which the element counters of the corresponding stacks are updated for each appearance of load- and store-instructions, and a verification error, i.e. a type or reference error, occurs whenever an instruction tries to access an empty stack.
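The counter-based check can be sketched as follows. This is our own simplification: it tracks only one element counter per typed stack and rejects underflow, whereas the full verifier would additionally bound register counts and handle control-flow joins.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of verification as abstract execution: per typed stack we keep
// only the number of elements, never the values themselves.
class StackCountVerifier {
    private final Map<String, Integer> depth = new HashMap<>();

    // Apply the stack-effect of one instruction on the stack of the given
    // type: pop `pops` elements, push `pushes` elements. Returns false on
    // underflow, which corresponds to a verification (type or reference) error.
    boolean apply(String type, int pops, int pushes) {
        int d = depth.getOrDefault(type, 0);
        if (d < pops) {
            return false; // instruction would access an empty stack
        }
        depth.put(type, d - pops + pushes);
        return true;
    }
}
```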
3.2 Type-Extension as a Basis for Program Annotations
Type-separation is an excellent basis for the safe transport of program annotations. The central idea of this application is the expansion of the type-separated machine model by specific types (and, with it, the insertion of a property-retaining instruction set) that can be used for the description of transmitted program information. Expressing program annotations by specific types has the nice side-effect that verification of transported program annotations can be done by simply applying the same verification algorithm that is used for ordinary type-separated bytecode. An excellent application for program annotations is the transport of escape information. This annotation technique, which is described in more detail in [8], performs a partial static escape analysis of each class at compile-time and then annotates the intermediate representation of a class with the derived escape information, which the JIT-compiler can use for object resolution and stack allocation. A safe and verifiable transmission of escape information with this technique is guaranteed by the insertion of a so-called may-escape type A_may for each reference type A of a program. Furthermore, the definition of special field and array access instructions, which are constructed such that they do not change the escape property of an accessed object, makes this annotation technique tamper-proof by construction, i.e. the escape behavior of an object can always be derived directly from its assigned escape type without further verification.
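As an illustration only (the Java modelling and names are ours, not the paper's), the tamper-proofness of the may-escape annotation can be pictured as a one-way move between the stack for A and the stack for A_may: weakening a reference to may-escape loses nothing but an optimization chance, while the strengthening instruction simply does not exist.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the A / A_may type split for some reference type A.
class EscapeAnnotatedStacks<A> {
    final Deque<A> captured = new ArrayDeque<>();  // stack for type A
    final Deque<A> mayEscape = new ArrayDeque<>(); // stack for type A_may

    // A -> A_may is always allowed: a forged "may escape" state only
    // costs an optimization opportunity, never safety.
    void weaken() {
        mayEscape.push(captured.pop());
    }

    // Note: there is deliberately no A_may -> A instruction, so a producer
    // cannot forge "captured" status; the property holds by construction.
}
```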
A further candidate for a valuable program annotation, which could for instance be used for the elimination of load instructions, is the annotation of function calls with side-effect information. In such a technique, an interprocedural elimination of load instructions is initially prepared during code generation, and can afterwards be completed by a JIT-compiler taking the annotated side-effect information into account. In type-separated models, side-effects can easily be expressed by the insertion of a specific read-only type A_read for each reference type A of a program. In this context, the definition of an object obj with a read-only type indicates that field accesses via obj do not change the content of any field. The additional insertion of instructions that preserve the load-only property then again leads to an annotation technique which is tamper-proof by construction. Under the assumption that the types of the register planes used by a function are known at the latest after its first call, a function f is, based on this annotation technique, called side-effect free for a type A when during execution of f only the read-only register plane/stack of A, or none at all, is used. As restructuring of programs on restricted devices is often impractical or even impossible, as a result of their persistent memory character, we are mostly interested in optimistic optimizations in the type-separated bytecode project. Program annotations in this area can then be used to revert optimizations that have incorrectly been performed at compile time. An example of such an optimization is an optimistic stack allocation algorithm, which is performed during code generation on the producer side under the assumption that no analyzed objects will escape via function calls. In this case, escape information assigned to functions could guide the reallocation of stack-allocated objects to the heap.
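A hypothetical Java sketch of the A / A_read split described above (class and method names are ours): loads are defined for both stacks, while stores exist only for the unrestricted stack, so a function that touches at most the read-only plane of A is side-effect free for A by construction.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical modelling of a reference type A with a read-only companion
// type A_read: the read-only stack supports only load-like instructions.
class ReadOnlyAnnotatedStacks {
    static class A { int field; }

    final Deque<A> aStack = new ArrayDeque<>();      // type A: full access
    final Deque<A> aReadStack = new ArrayDeque<>();  // type A_read: loads only

    // getfield is defined for the read-only stack...
    int getfieldRead() {
        return aReadStack.pop().field;
    }

    // ...but putfield exists only for the unrestricted stack; there is
    // deliberately no putfield variant operating on aReadStack.
    void putfield(int v) {
        aStack.pop().field = v;
    }
}
```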
A reallocation is necessary whenever the interpreter calls a function during program execution through which an already stack-allocated object could escape. The main idea of an optimistic load elimination is that, during the elimination of load instructions on the producer side, we assume that each function call is side-effect free. As a consequence, in an optimistic load elimination, load instructions are also optimized across function calls. In preparation for a possible reset process, each function call across which load eliminations have been performed is assigned a special back-setting-block. Conceptually, a back-setting-block contains, for a call of a function f, those load instructions which have to be redone if the called function f has side-effects. During program execution, the back-setting-block of a function call must be executed whenever the interpreter detects that the performed function call was not side-effect free.
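The reset mechanism can be modelled roughly as follows. This is our own illustration, not the paper's implementation: the eliminated loads are recorded as closures that the interpreter re-executes only when the callee turns out not to be side-effect free.

```java
import java.util.List;
import java.util.function.BooleanSupplier;

// Sketch of a back-setting-block attached to one function call site.
class BackSettingBlock {
    // The load instructions that were optimistically eliminated across
    // the call, recorded at code-generation time.
    private final List<Runnable> reloads;

    BackSettingBlock(List<Runnable> reloads) {
        this.reloads = reloads;
    }

    // Perform the call; if the callee was not side-effect free, redo the
    // eliminated loads, resetting the optimistic optimization.
    void invoke(Runnable call, BooleanSupplier calleeWasSideEffectFree) {
        call.run();
        if (!calleeWasSideEffectFree.getAsBoolean()) {
            for (Runnable reload : reloads) {
                reload.run();
            }
        }
    }
}
```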
4 Implementation and Results
In the area of syntax-oriented intermediate representations, the concept of type-separation is used solely as a pure program transportation format that, in principle, allows for a safe and fast reconstruction of transmitted programs. One reason for this is its exclusive application in JIT compilers, which makes a mapping of the transported program to the target architecture indispensable.
Since we are using type-separation in the context of interpreters, there is more tolerance when building a runtime environment for the type-separated bytecode model. In particular, this is the case when mapping the stacks and register planes used in the type-separated model to an actual implementation. In this section we describe the experiences we gathered when translating type-separated bytecode to various virtual machine implementations, i.e. a multi-stack, a five-stack, and a single-stack virtual machine.

4.1 Environment and Measurements
The concept of type-separated bytecode has been realized through the construction of an additional back-end for the SOOT framework [20]. SOOT is an open source system that provides a development tool for simple and elegant optimization of Java Bytecode programs. The integration of our methodology into SOOT enables a convenient way to produce type-separated bytecode programs from Java Bytecode. In our back-end, type-separated bytecode programs are derived directly from the Jimple intermediate representation of Java Bytecode programs, which has the advantage that additional optimizations performed on the Jimple representation can also be used for type-separated bytecode programs. Similar to the construction of the producer side, the realization of our consumer side is based on an existing virtual machine. Since in our type-separated bytecode project we are interested in applications for constrained systems, we have chosen Sun's K Virtual Machine (KVM) as the basic runtime environment. Sun's KVM [21] was especially developed as a small and efficient virtual machine that can be used in systems with constrained resources. In particular, to satisfy memory restrictions, the KVM abstains from JIT compilation; instead, all programs are executed exclusively by an interpreter. For the development of a functional type-separated virtual machine, the KVM was extended by an additional classloader and a further interpreter, which perform the loading and execution of type-separated programs, respectively. In addition, to guarantee the correctness of loaded programs, a type-separated bytecode verifier, which is called implicitly from the classloader, was added to the runtime system. A large number of measurements have been performed with the goal of comparing the runtime of Java Bytecode programs with their type-separated counterparts. All runtime measurements were performed using two benchmark suites (see Figure 3).
The first one is the GrinderBench [22], which measures performance for typical applications on mobile devices like cell phones and PDAs. The second one is the Java-Grande-Forum [23], a collection of mathematically oriented benchmarks that we used for testing the processing power of our virtual machine. Since the memory requirements of some of the latter benchmark programs exceeded the available memory in the KVM, the following discussion is restricted to the programs given in Section 2 of the Java-Grande-Forum. Figure 3 contains an overview of the benchmark programs used. To detect influences that may arise from the use of different target architectures, measurements were performed on three types of machines: Intel-32
GrinderBench   Description
kXML           XML parsing and DOM tree manipulation
PNG            PNG photo image decoding
CHESS          chess playing engine
CRYPTO         cryptographic transactions

Java-Grande    Description
Crypt          IDEA encryption
FFT            Fast Fourier Transform
HeapSort       Integer sorting
LUFact         LU Factorisation
SOR            Successive over-relaxation
Sparse         Sparse Matrix multiplication

Fig. 3. Overview of used benchmarks
(AMDx86-32), PowerPC (PowerMacG4), and Nokia770 (ARM-OMAP-1710). Since the results of the measurements were comparable for all of these architectures, we will only refer to the results obtained on the Nokia770 in the following. Furthermore, each benchmark result is the mean of five program executions on the modified and unmodified KVM. Since program execution took much longer than program verification, the presented results consist of the execution time only, excluding verification time. It should be noted that the runtime measurements of the GrinderBench may only be presented as relative values.

4.2 Multi-stack Implementation
Our first idea when designing a virtual machine for type-separated bytecode was to derive its structure directly from the underlying type-separated model, i.e. a separate stack and register plane is created for each type used in the program. Under the assumption that special non-destructive stack accessing instructions exist, i.e. instructions that access the stack read-only, one advantage of such a design would be that, since operands are spread over multiple stacks, operands could be reused multiple times without taking them from the stacks. However, the runtime results that we measured for our benchmark programs using this implementation were strongly disappointing. As can be seen on the left-hand side of Figure 4, with such a multi-stack implementation the execution time of the benchmark programs increased on average by about 14% compared to their Java Bytecode counterparts. The reason for the increase in runtime is the additional overhead that arises from the administration of multiple stacks and register planes. Most problems occur when stacks and their corresponding register planes, which are used in different classes, have to be mapped to a unique type. In such cases the type in each class is described by a different local descriptor, which must be mapped to a unique one. Since it is in general not possible to alter a program for constrained devices, this mapping must be done dynamically. Another drawback
Benchmark   Multi-Stack standard   Multi-Stack optimized
kXML                -15                    -10
PNG                 -17                    -13
Chess               -18                    -12
Crypto              -14                    -10
Crypt               -11                    -10
FFT                  -8                     -8
HeapSort            -11                     -8
LUFact              -12                    -11
SOR                 -10                    -11
Sparse              -21                    -16

Fig. 4. Multi-Stack-Type-Separated Bytecode vs. Java Bytecode (in %)
occurred during method invocations. In some cases, during polymorphic method calls, the this-parameter has to be moved to another stack due to an implicit type conversion, i.e. a call of a re-implemented method must be dispatched to its defining class. This condition has to be checked every time a virtual method is invoked and incurs a further performance penalty. Since the first performance measurements were not satisfying, some optimizations were performed on the multi-stack implementation. At first, the design of the underlying virtual machine was slightly modified, i.e. an exclusive untyped parameter stack was added to speed up method invocations. Afterwards, optimizations were performed that target the implementation itself, e.g. a better internal representation of the used stacks and register planes. However, as can be seen on the right-hand side of Figure 4, altogether the performed optimizations could only improve the execution of the benchmark programs slightly. With these optimizations, the measured runtime overhead decreases on average to about 11% compared with Java Bytecode programs. Despite the results in runtime execution, performance measurements for the type-separated verification algorithm were promising. Our measurements have shown that, on average, our verification algorithm is 28% faster than the stack-map based algorithm of Java Bytecode [17], with similar memory requirements. However, some constraints which were useful for verification had a negative impact on runtime performance, e.g. initializing registers with neutral values and marking control-flow joins. Further measurements showed that verification without these constraints leads to improvements in program execution time of up to 4%, whereas verification time increased slightly.
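The averages quoted above ("about 14%" and "about 11%") can be recomputed from the per-benchmark values in Fig. 4; a quick sketch, with negative values meaning slower than Java Bytecode:

```python
# Per-benchmark differences vs. Java Bytecode, in %, from Fig. 4
# (kXML, PNG, Chess, Crypto, Crypt, FFT, HeapSort, LUFact, SOR, Sparse).
standard = [-15, -17, -18, -14, -11, -8, -11, -12, -10, -21]
optimized = [-10, -13, -12, -10, -10, -8, -8, -11, -11, -16]

mean_standard = sum(standard) / len(standard)     # -13.7, i.e. ~14% overhead
mean_optimized = sum(optimized) / len(optimized)  # -10.9, i.e. ~11% overhead
print(mean_standard, mean_optimized)
```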
4.3 Five-Stack Implementation
The main disadvantage of the multi-stack implementation is the potentially unbounded number of stacks and register planes. In order to reduce their administration overhead, we decided to use a simplified implementation approach with
P. Adler and W. Amme

Benchmark    Five-Stack    Single-Stack
kXML         -1.18          1.72
PNG          -1.17          2.98
Chess        -1.08          1.49
Crypto       -1.13          1.46
Crypt        -1.05          0.03
FFT          -1.07         -0.48
HeapSort     -1.01          1.72
LUFact       -1.12         -0.81
SOR          -1.10          0.32
Sparse       -1.07         -0.80

Fig. 5. Five- and Single-Stack-Type-Separated Bytecode vs. Java Bytecode (in %)
a fixed number of stacks. Since Java Bytecode has four primitive types plus reference types, we settled on a five-stack implementation, i.e. one stack and register plane for each primitive type and one combined stack and register plane shared by all reference types. As the order of types on the reference stack can be arbitrary, an additional constraint was introduced, which enforces that the ordering of types on the reference stack is identical to their use. This is especially important for method invocations, where the order of operands needs to match the formal parameters. Introducing this constraint makes it possible to map the multi-stack approach for reference types, as used in the type-separated model, to a single-stack representation without additional effort during interpretation. Note that a five-stack implementation partly reduces type-separated bytecode to a pure transportation format, i.e. after verification of transported programs the multi-stack property for reference types is relinquished. Performance results for the five-stack implementation are depicted in Figure 5. On average, there is a degradation in execution time of around 1% in comparison to Java Bytecode programs. The main overhead still arises from managing the different stacks and register planes. We tried some other techniques to further improve performance, but they had little to no impact. In any case, the measured results show that the differences between execution times seem negligible, and therefore a five-stack implementation is a candidate for a type-separated virtual machine that supports the transport of program annotations.
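The five-stack organization can be pictured with a small sketch; this is our own illustration of the idea (class and type names are invented), not the authors' KVM implementation:

```python
PRIMITIVE_TYPES = ("int", "long", "float", "double")

class FiveStackFrame:
    """One operand stack per primitive type, one shared reference stack."""

    def __init__(self):
        self.stacks = {t: [] for t in PRIMITIVE_TYPES}
        self.stacks["ref"] = []

    def _stack_for(self, type_name):
        # every non-primitive type is mapped onto the shared reference stack
        return self.stacks[type_name if type_name in PRIMITIVE_TYPES else "ref"]

    def push(self, type_name, value):
        self._stack_for(type_name).append(value)

    def pop(self, type_name):
        return self._stack_for(type_name).pop()

frame = FiveStackFrame()
frame.push("int", 42)
frame.push("java/lang/String", "hello")   # lands on the shared reference stack
frame.push("java/lang/Object", object())
```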
4.4 Single-Stack Implementation
In the last experiment we implemented the type-separated concept as a virtual machine with one stack and a single register plane. In this context, type-separated bytecode is used as a pure transportation format, i.e. after the verification process, the stacks as well as the register planes used in the transported program must be mapped to the virtual machine's counterparts. In principle, the transformation of type-separated bytecode into a single-stack representation could be done on the code consumer side. However, since such processing is very expensive,
further constraints were introduced for type-separated bytecode programs to reduce that overhead. These constraints effectively limit the programs representable in type-separated bytecode to programs which could be directly executed on a single-stack implementation. As before, all instructions in a type-separated bytecode program are still strongly typed, and type information is transmitted to the code consumer. The transformation to the single-stack machine is then performed implicitly, by treating the type-separated bytecode program as a program written for this kind of machine. To support the transformation process, type information and bytecode instructions are now stored separately in type-separated bytecode files, so that type information can be used for verification as before but can be ignored during program execution afterwards. To guarantee that an implicit transformation will not change the semantics of transported programs, additional checks must be performed during the verification process. In particular, for operator applications and function calls in type-separated bytecode programs, it has to be verified that all operands and parameters, respectively, will be stored on the stacks in the same order as expected by the operator and function declaration. Our measurements show that the runtime of type-separated bytecode programs on a single-stack machine is nearly identical to that of Java Bytecode programs. Actually, since type-separated bytecode is more compact than Java Bytecode, and the typed instructions lead to more efficient code in the interpreter loop, a slight improvement in average runtime of 1% (see Figure 5) could be observed. It should be mentioned that an increased number of upcast and downcast instructions in the program code sometimes results in the opposite effect; consider, e.g., the execution times measured for the Sparse and LUFact benchmark programs.
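The order check described above can be illustrated with a toy verifier that simulates a type stack and, at each call, compares the topmost operand types with the declared parameter order; instruction and method names here are invented for illustration:

```python
def check_call_order(instructions, signatures):
    """instructions: list of ('push', type) or ('call', name);
    signatures: name -> parameter type list, left-to-right."""
    type_stack = []
    for op, arg in instructions:
        if op == "push":
            type_stack.append(arg)
        elif op == "call":
            params = signatures[arg]
            # the topmost |params| stack entries must equal the declared order
            if type_stack[len(type_stack) - len(params):] != params:
                return False
            del type_stack[len(type_stack) - len(params):]
    return True

sig = {"max": ["int", "int"], "println": ["ref"]}
ok = check_call_order([("push", "int"), ("push", "int"), ("call", "max")], sig)
bad = check_call_order([("push", "int"), ("push", "ref"), ("call", "max")], sig)
```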
5 Conclusion
In this paper, we have introduced the intermediate representation type-separated bytecode and have shown how it can be used as a mobile code format for constrained devices. Measurements we have performed indicate that type-separated bytecode programs can be executed roughly as fast as programs described by means of original Java Bytecode (see Figure 6). Although the overhead of a multi-stack interpretation is not acceptable, the performance results for a five-stack implementation are tolerable, and the use of a single-stack implementation for type-separated bytecode can even be competitive. Type information which is added to type-separated programs generally increases program size. At the moment, the size of the produced files is nearly identical to the size of Java Bytecode class files with stack-map information. However, since the current type-separated bytecode compiler stores type information directly in the files, there is still room for further improvements and file size reductions. As a side-effect, verification of type-separated bytecode is quite fast. In fact, verification of type-separated bytecode is approximately as fast as Java Bytecode verification based on stack-maps.
"+$/&*0&1*&-(&$+&2
"#$%&'() "#$%&') $*% $+,#%
! !
-+./%
Fig. 6. Overview of measured execution times
For future work, in our project we consider the development of optimistic optimization techniques that could be used on constrained devices. In particular, we are interested in the construction of optimizations that optimistically estimate the side-effects of function calls, e.g. optimistic stack allocation or load elimination. Notably, strategies must be developed that revert incorrectly performed optimizations without additional overhead. Furthermore, the granularity of annotated side-effect information must be specified. Acknowledgments. This work is supported by the DFG (Deutsche Forschungsgemeinschaft) through grant AM 150/2-1.
References
1. Le, A., Lhoták, O., Hendren, L.: Using inter-procedural side-effect information in JIT optimizations. In: Bodik, R. (ed.) CC 2005. LNCS, vol. 3443, pp. 287–304. Springer, Heidelberg (2005)
2. Whaley, J., Rinard, M.: Compositional pointer and escape analysis for Java programs. In: Proceedings of the Conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA 1999), Denver, CO. ACM SIGPLAN Notices, vol. 34, pp. 187–206. ACM Press, New York (1999)
3. Gay, D., Steensgaard, B.: Fast escape analysis and stack allocation for object-based programs. In: Watt, D.A. (ed.) CC 2000. LNCS, vol. 1781, p. 82. Springer, Heidelberg (2000)
4. Amme, W., von Ronne, J., Franz, M.: SSA-based mobile code: Implementation and empirical evaluation. TACO 4(2) (2007)
5. Jones, J., Kamin, S.N.: Annotating Java class files with virtual registers for performance. Concurrency: Practice and Experience 12(6), 389–406 (2000)
6. Yessick, D.E.: Removal of bounds checks in an annotation aware JVM (May 17, 2004)
7. Franz, M., Krintz, C., Haldar, V., Stork, C.H.: Tamper-proof annotations by construction. Technical Report 02-10, Department of Information and Computer Science, University of California, Irvine (March 2002)
8. von Ronne, J., Hartmann, A., Amme, W., Franz, M.: Efficient online optimization by utilizing offline analysis and the SafeTSA representation (2002)
9. Amme, W., Möller, M.A., Adler, P.: Data flow analysis as a general concept for the transport of verifiable program annotations. Electr. Notes Theor. Comput. Sci. 176(3), 97–108 (2007)
10. Amme, W., von Ronne, J., Franz, M.: Using the SafeTSA representation to boost the performance of an existing Java Virtual Machine. In: Proceedings of the 10th International Workshop on Compilers for Parallel Computers (CPC 2003), Amsterdam, The Netherlands (January 2003)
11. Amme, W., Dalton, N., Franz, M., von Ronne, J.: SafeTSA: A type safe and referentially secure mobile-code representation based on static single assignment form. In: Proceedings of the Conference on Programming Language Design and Implementation (PLDI 2001), Snowbird, Utah, USA. ACM SIGPLAN Notices, vol. 36, pp. 137–147. ACM Press, New York (2001)
12. von Ronne, J., Wang, N., Apel, A., Franz, M.: A virtual machine for interpreting programs in static single assignment form. Technical Report 03-19, Information and Computer Science, University of California, Irvine (October 2003)
13. Necula, G.C.: Proof-carrying code. In: Proceedings of the Symposium on Principles of Programming Languages (POPL 1997). ACM SIGPLAN Notices, pp. 106–119. ACM Press, New York (1997)
14. Azevedo, A., Nicolau, A., Hummel, J.: Java annotation-aware just-in-time (AJIT) compilation system. In: Proceedings of the Conference on Java Grande (JAVA 1999), pp. 142–151 (1999)
15. Krintz, C., Calder, B.: Using annotations to reduce dynamic optimization time. In: Proceedings of the Conference on Programming Language Design and Implementation (PLDI 2001). ACM SIGPLAN Notices, vol. 36.5, pp. 156–167. ACM Press, New York (2001)
16. Le, A., Lhoták, O., Hendren, L.J.: Using inter-procedural side-effect information in JIT optimizations. In: Bodik, R. (ed.) CC 2005. LNCS, vol. 3443, pp. 287–304. Springer, Heidelberg (2005)
17. Rose, E., Rose, K.H.: Lightweight bytecode verification. In: Proceedings of the Workshop on Formal Underpinnings of the Java Paradigm (OOPSLA 1998) (October 1998)
18. Franz, M., Krintz, C., Haldar, V., Stork, C.H.: Tamper proof annotations. Technical Report 02-10, Department of Information and Computer Science, University of California, Irvine (March 2002)
19. Hartmann, A., Amme, W., von Ronne, J., Franz, M.: Code annotation for safe and efficient dynamic object resolution. In: Knoop, J., Zimmermann, W. (eds.) Proceedings of the 2nd International Workshop on Compiler Optimization Meets Compiler Verification (COCV 2003), Warsaw, Poland, April 2003, pp. 18–32 (2003)
20. Vallée-Rai, R., Gagnon, E., Hendren, L., Lam, P., Pominville, P., Sundaresan, V.: Soot: A Java Optimization Framework (1999), http://www.sable.mcgill.ca/soot/
21. Sun Microsystems, Inc.: The K Virtual Machine (KVM), http://java.sun.com/products/cldc/wp/index.html
22. Embedded Microprocessor Benchmark Consortium: GrinderBench, http://www.grinderbench.com/about.html
23. Edinburgh Parallel Computing Centre: Java Grande Forum Benchmark Suite, http://www.epcc.ed.ac.uk/research/activities/java-grande/
Runtime Verification of Safety-Progress Properties

Yliès Falcone, Jean-Claude Fernandez, and Laurent Mounier

Verimag, Université Grenoble I
[email protected]
Abstract. The underlying property, its definition and representation play a major role when monitoring a system. Having a suitable and convenient framework to express properties is thus a concern for runtime analysis. It is desirable to delineate in this framework the spaces of properties to which runtime verification approaches can be applied. This paper presents a unified view of runtime verification and enforcement of properties in the safety-progress classification. Firstly, we characterize the sets of properties which can be verified (monitorable properties) and enforced (enforceable properties) at runtime. In particular, we propose an alternative definition of "property monitoring" to the one classically used in this context. Secondly, for the delineated spaces of properties, we obtain specialized verification and enforcement monitors.
1 Introduction
Runtime verification [1,2,3,4,5] is an effective technique to ensure at execution time that a system meets a desirable behavior. It can be used in numerous application domains, and more particularly when integrating untrusted software components. In runtime verification, a run of the system under scrutiny is analyzed incrementally using a decision procedure: a monitor. This monitor may be generated from a user-provided high-level specification (e.g. a temporal property, an automaton). The primary goal of this monitor is to detect violation or validation wrt. the given specification. It is a state machine (with an output function) processing an execution sequence (step by step) of the monitored program, and producing a sequence of verdicts (truth values taken from a truth-domain) indicating specification fulfilment or violation. The major part of the research endeavor has concerned the monitoring of safety properties, as seen for example in [6,7]. However, the authors of [8] show that safety properties are not the only monitorable properties. Recently, a new definition of monitorability was given by Pnueli in [2], and it has been proven in [4] that safety and co-safety properties represent only a proper subset of the space of monitorable properties. Runtime enforcement is an extension of runtime verification aiming to circumvent property violations. It was initiated by the work of Schneider [9] on what has been called security automata. In this work the monitors watch the current execution sequence and halt the underlying program whenever it deviates

S. Bensalem and D. Peled (Eds.): RV 2009, LNCS 5779, pp. 40–59, 2009.
© Springer-Verlag Berlin Heidelberg 2009
from the desired property. Such security automata are able to enforce the class of safety properties [10], stating that something bad can never happen. Later, Viswanathan [11] noticed that the class of enforceable properties is impacted by the computational power of the enforcement monitor. As the enforcement mechanism can implement no more than computable functions, the enforceable properties are included in the decidable ones. More recently, Ligatti et al. [12] showed that it is possible to enforce at runtime more than safety properties. Using a more powerful enforcement mechanism called edit-automata, it is possible to enforce the larger class of infinite renewal properties. Within the classical safety-liveness dichotomy, the renewal class is a superset of the safety class which contains some liveness properties (but not all). Beyond simply halting an underlying program, edit-automata can also "suppress" (i.e. freeze) and "insert" (frozen) actions in the current execution sequence. Several tools have been proposed in this context, and in practice there is not always a clear distinction between runtime verification and runtime enforcement (for instance a verification monitor may execute an exception handler when detecting an error, hence modifying the initial program execution). The question we consider in this work is then the following: what are the classes of properties that can be handled at runtime, and is there a distinction between these two techniques? This question is not original in itself, but we propose here to address it within a unified framework: the safety-progress (SP) classification of properties [13,14].
The paper contributions are then the following:
– to improve some recent results related to property enforcement [15,16], giving a more accurate classification of enforceable properties;
– to integrate in the same framework some existing results related to property monitoring [2,3,4], and to propose an alternative definition of property monitoring, leveraging the semantics of finite execution sequences;
– to get a generic monitor synthesis technique, allowing to produce either a verification or an enforcement monitor from the same property description.

Paper Organization. The remainder of this article is organized as follows. Sect. 2 introduces some preliminary notations used throughout this paper. Sect. 3 overviews related work on the issues addressed in this paper. In Sect. 4, we provide minimal background on the safety-progress classification of properties in a runtime verification context. Sect. 5 is dedicated to the study of the space of monitorable properties, while Sect. 6 studies the space of enforceable properties. In Sect. 7, we present the synthesis of runtime verification and enforcement monitors. We give some concluding remarks and future work in Sect. 8. Complete proofs and more details are given in [17].
2 Preliminaries and Notations
This section introduces some background, namely the notions of program execution sequences and program properties.
2.1 Sequences and Execution Sequences
Sequences and execution sequences. Considering a finite set of elements E, we define notations for sequences of elements belonging to E. A sequence σ containing elements of E is formally defined by a total function σ : I → E where I is either the integer interval [0, n] for some n ∈ N, or N itself (the set of natural numbers). We denote by E∗ the set of finite sequences over E (partial functions from N), by E+ the set of non-empty finite sequences over E, and by Eω the set of infinite sequences over E. The set E∞ = E∗ ∪ Eω is the set of all sequences over E. The empty sequence of E is denoted by εE, or ε when clear from context. The length (number of elements) of a finite sequence σ is noted |σ|, and the (i + 1)-th element of σ is denoted by σi. For two sequences σ ∈ E∗, σ′ ∈ E∞, we denote by σ · σ′ the concatenation of σ and σ′, and by σ ≺ σ′ the fact that σ is a strict prefix of σ′ (equivalently, that σ′ is a strict continuation of σ). The sequence σ is said to be a strict prefix of σ′ ∈ E∞ when ∀i ∈ {0, . . . , |σ| − 1} · σi = σ′i and |σ| < |σ′|. When σ′ ∈ E∗, we note σ ⪯ σ′ =def σ ≺ σ′ ∨ σ = σ′. For σ ∈ E∞ and n ∈ N, σ···n is the sub-sequence containing the n + 1 first elements of σ. Also, when |σ| > n, the sub-sequence σn··· is the sequence containing all elements of σ but the n first ones. A program P is considered as a generator of execution sequences. We are interested in a restricted set of operations the program can perform. These operations influence the truth value of the properties the program is supposed to fulfill. Such execution sequences can be made of access events of a secure system to its resources, or kernel operations of an operating system. In a software context, these events may be abstractions of relevant instructions such as variable modifications or procedure calls. We abstract these operations by a finite set of events, namely a vocabulary Σ. We denote by PΣ a program with vocabulary Σ.
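These notations transcribe directly into executable form; a small sketch with sequences modeled as Python tuples:

```python
def is_strict_prefix(s, t):            # s ≺ t
    return len(s) < len(t) and all(s[i] == t[i] for i in range(len(s)))

def is_prefix(s, t):                   # s ⪯ t, i.e. s ≺ t or s = t
    return s == t or is_strict_prefix(s, t)

def upto(s, n):                        # σ···n : the n + 1 first elements
    return s[:n + 1]

def from_(s, n):                       # σn··· : all elements but the n first
    return s[n:]

sigma = ("a", "b", "a", "c")
```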
The set of execution sequences of PΣ is denoted Exec(PΣ) ⊆ Σ∞. This set is prefix-closed, that is ∀σ ∈ Exec(PΣ), ∀σ′ ∈ Σ∗ · σ′ ⪯ σ ⇒ σ′ ∈ Exec(PΣ). In the remainder of this article, we consider a vocabulary Σ.
2.2 Properties
Properties as sets of execution sequences. A finitary property (resp. an infinitary property, a property) is a subset of execution sequences of Σ∗ (resp. Σω, Σ∞). Considering a given finite (resp. infinite, finite or infinite) execution sequence σ and a property φ (resp. ϕ, θ), when σ ∈ φ, noted φ(σ) (resp. σ ∈ ϕ, noted ϕ(σ); σ ∈ θ, noted θ(σ)), we say that σ satisfies φ (resp. ϕ, θ). A consequence of this definition is that the properties we consider are restricted to single execution sequences¹, excluding specific properties defined on powersets of execution sequences (like fairness, for instance). Runtime properties. In this paper we are interested in runtime properties. As stated in the introduction, we consider finite and infinite execution sequences
¹ This is the distinction, made by Schneider [9], between properties and (general) policies. The set of properties (defined over single execution sequences) is a subset of the set of policies (defined over sets of execution sequences).
(that a program may produce). Runtime verification properties should characterize satisfaction for both kinds of sequences in a uniform way. As such, we introduce r-properties (runtime properties) as pairs (φ, ϕ) ⊆ Σ∗ × Σω. Intuitively, the finitary property φ represents the desirable property that finite execution sequences should fulfill, whereas the infinitary property ϕ is the expected property for infinite execution sequences. The negation of a r-property follows from the negation of finitary and infinitary properties: for a r-property (φ, ϕ), we define its negation as (¬φ, ¬ϕ), where ¬φ = Σ∗ \ φ and ¬ϕ = Σω \ ϕ. Boolean combinations of r-properties are defined in a natural way: for ∗ ∈ {∪, ∩}, (φ1, ϕ1) ∗ (φ2, ϕ2) = (φ1 ∗ φ2, ϕ1 ∗ ϕ2). Considering an execution sequence σ ∈ Exec(PΣ), we say that σ satisfies (φ, ϕ) when (σ ∈ Σ∗ ∧ φ(σ)) ∨ (σ ∈ Σω ∧ ϕ(σ)). For a r-property Π = (φ, ϕ), we note Π(σ) (resp. ¬Π(σ)) when σ satisfies (resp. does not satisfy) (φ, ϕ). Evaluation of r-properties. Monitorability, enforceability, and monitor synthesis are based on the evaluation of r-properties. Evaluating an execution sequence σ wrt. a r-property consists in producing a verdict regarding the current property satisfaction of σ or future satisfactions of the possible σ-continuations. The verdicts considered here are not the usual boolean values: they are truth values taken from a truth-domain. A truth-domain is a lattice, i.e. a partially ordered set with an upper bound and a lower bound. Considering a truth-domain B, a r-property Π and an execution sequence σ, the evaluation of σ ∈ Σ∗ wrt. Π in B, noted [[Π]]B(σ), is an element of B depending on Π(σ) and the satisfaction of σ-continuations (i.e. {σ′ ∈ Σ∞ | σ ≺ σ′}) wrt. Π. The sets of monitorable and enforceable properties (Sect. 5 and 6) rely upon the considered truth-domain and the chosen evaluation function.
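Restricted to finite sequences, r-properties and their boolean combinations can be sketched as follows (the infinitary component ϕ is elided here; the class and helper names are ours, not the paper's):

```python
class RProperty:
    """Finitary part φ of a r-property: a predicate on finite sequences."""

    def __init__(self, phi):
        self.phi = phi                 # phi: finite sequence -> bool

    def __call__(self, sigma):
        return self.phi(sigma)

    def negation(self):                # complement wrt. Σ*
        return RProperty(lambda s: not self.phi(s))

    def union(self, other):            # (φ1, ϕ1) ∪ (φ2, ϕ2), finitary part
        return RProperty(lambda s: self.phi(s) or other.phi(s))

    def intersection(self, other):     # (φ1, ϕ1) ∩ (φ2, ϕ2), finitary part
        return RProperty(lambda s: self.phi(s) and other.phi(s))

no_b = RProperty(lambda s: "b" not in s)   # a simple safety-like property
```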
3 Related Work
This section overviews related work on the topics addressed in this paper. First, we recall previous characterizations of the properties that can be verified at runtime (monitorable properties). Then, we recall previous characterizations of the properties amenable to runtime enforcement (enforceable properties). Next, we overview previous work on the synthesis of monitors for runtime verification and enforcement.
3.1 Runtime Verification (Monitorable) Properties
Monitorability in the sense of [2]: Pnueli et al. give a notion of monitorable properties relying on the notion of verdict determinacy for an infinite sequence. More precisely, considering a finite sequence σ ∈ Σ∗, a property θ ⊆ Σ∞ is negatively determined (resp. positively determined) by σ if σ and all its extensions do not satisfy (resp. satisfy) θ. Then, θ is σ-monitorable if σ has an extension s.t. θ is negatively or positively determined by this extension. Finally, θ is monitorable if it is σ-monitorable for every σ. In Sect. 5, we give the formal definition in the context of r-properties.
The idea is that it becomes unnecessary to continue the execution of a θ-monitor after reading σ if θ is not σ-monitorable. The intent of [2] was to characterize when it is worth monitoring a property. Monitorability in the sense of [4]: Bauer et al. took inspiration from Pnueli's definition of monitorable properties to propose a slightly different one, based on the notion of good and bad prefixes introduced in model-checking [18]. The intuitive idea is that with monitorable properties it is possible to "detect" a violation or validation of infinitary properties with finite sequences. Considering an infinitary property ϕ ⊆ Σω, a prefix σ is said to be a bad prefix, noted bad_prefix(σ, ϕ) (resp. good prefix, noted good_prefix(σ, ϕ)) of ϕ if ∀w ∈ Σω · ¬ϕ(σ · w) (resp. ∀w ∈ Σω · ϕ(σ · w)). Then, a prefix σ is said to be ugly if it has neither a good nor a bad continuation, i.e. ¬∃v ∈ Σ∗ · bad_prefix(σ · v, ϕ) ∨ good_prefix(σ · v, ϕ). Finally, a property is said to be monitorable if it has no ugly prefix, formally: ∀σ ∈ Σ∗, ∃v ∈ Σ∗ · bad_prefix(σ · v, ϕ) ∨ good_prefix(σ · v, ϕ). Previous characterization of monitorable properties: Bauer et al. have shown that, according to this definition, the set of monitorable properties is a strict superset of the union of safety and co-safety properties. These classes of properties are taken from the classical safety-liveness classification of properties [19,20]. They also gave an example of a request/acknowledge property which is not monitorable. Such a property falls in the set of response properties (see Sect. 4) wrt. the SP classification (see Ex. 1 in Sect. 5).
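For properties presented as finite automata whose states carry a definitive verdict (True = positively determined, False = negatively determined) or no verdict yet, ugly prefixes and monitorability reduce to reachability: a prefix is ugly iff no verdict state is reachable from the state it reaches. This finite-state view is our own illustration of the definitions above, not the constructions of [2] or [4]:

```python
def reachable(delta, start):
    """States reachable from `start` in a transition map q -> {event: q'}."""
    seen, todo = {start}, [start]
    while todo:
        for q2 in delta.get(todo.pop(), {}).values():
            if q2 not in seen:
                seen.add(q2)
                todo.append(q2)
    return seen

def monitorable(delta, init, verdict):
    # monitorable iff no reachable state is "ugly", i.e. from every reachable
    # state some state carrying a definitive verdict can still be reached
    return all(any(verdict[r] is not None for r in reachable(delta, q))
               for q in reachable(delta, init))

# "no b ever" (safety): state 1 is a definitive violation -> monitorable
safety = {0: {"a": 0, "b": 1}, 1: {"a": 1, "b": 1}}
# request/acknowledge: no finite prefix is ever definitive -> not monitorable
req_ack = {0: {"req": 1, "ack": 0}, 1: {"req": 1, "ack": 0}}
```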
3.2 Runtime Enforcement (Enforceable) Properties
In [10], the authors proposed a classification of enforceable properties, viewing a program as a Turing machine. Their purpose was to delineate the set of enforceable properties according to the mechanism used for enforcement. Properties are classified according to the modifications the enforcement mechanism can perform on the underlying program. The mechanisms can be characterized as static analysis, runtime execution monitoring, and program rewriting. Other works [9,11,12,21,16] focused on particular runtime enforcement monitors and proposed a characterization of the properties enforceable with those mechanisms. Property enforcement by an enforcement monitor (EM) is usually defined as the conjunction of the two following constraints:
– soundness: the output sequence should satisfy the underlying property;
– transparency: the input sequence should be modified in a minimal way, namely if it already verifies the property it should remain unchanged (up to a given equivalence relation), otherwise its longest prefix satisfying the property should be issued.
Security automata and decidable safety properties: Schneider introduced security automata (a variant of Büchi automata) as the first runtime mechanism for enforcing properties in [9]. The set of properties enforceable with this kind of security automata is the set of safety properties. Then, in [10], Schneider, Hamlen,
and Morrisett refined the set of enforceable properties and showed that these security automata were in fact restrained by some computational limits. Indeed, Viswanathan [11] noticed that the class of enforceable properties is impacted by the computational power of the enforcement monitor. As the enforcement mechanism can implement no more than computable functions, the enforceable properties are included in the decidable ones. Hence, they showed in [10] that the set of safety properties is a strict upper limit to the power of (execution) enforcement monitors defined as security automata. Edit-automata and infinite renewal properties: Ligatti et al. [12,21] introduced edit-automata as runtime monitors. Depending on the current input and its control state, an edit-automaton can either insert a new action by replacing the current input, or suppress it. The properties enforced by edit-automata are called infinite renewal properties: they form a superset of safety properties and contain some liveness properties (but not all). A property θ is said to be an infinite renewal property iff ∀σ ∈ Σ∞ · θ(σ) ⇒ ∀σ′ ∈ Σ∗ · σ′ ≺ σ ⇒ ∃σ″ ∈ Σ∗ · σ′ ⪯ σ″ ≺ σ ∧ θ(σ″). Generic runtime enforcers and response properties: In [16] we introduced a generic notion of EM encompassing previous mechanisms and gave a lower bound on the space of properties they can enforce in the SP classification (see Sect. 4).
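A truncation-based monitor in the style of security automata [9] can be sketched as follows: it forwards each event as long as the safety property still holds and halts the underlying program at the first violation, so the output is the longest prefix of the input satisfying the property (soundness and transparency for safety properties). The property encoding and names are our own illustration:

```python
def enforce_safety(events, allowed):
    """allowed: predicate on finite prefixes, assumed prefix-closed (safety)."""
    output = []
    for e in events:
        if allowed(output + [e]):
            output.append(e)   # forward the event unchanged
        else:
            break              # halt the underlying program: truncate here
    return output

def no_write_after_close(prefix):
    # hypothetical safety property: no "write" may occur after a "close"
    return not any(a == "write" and "close" in prefix[:i]
                   for i, a in enumerate(prefix))

out = enforce_safety(["open", "write", "close", "write", "read"],
                     no_write_after_close)
```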
3.3 Synthesis of Monitors
For runtime verification: Generally, runtime verification monitors are generated from LTL-based specifications, as seen recently in [4,22]. Alternatively, ω-regular expressions have been used as a basis for generating monitors, as for example in [8]. An exhaustive list of works on monitor synthesis is far beyond the scope of this paper; we refer to [1,23,5] for a more comprehensive overview. For runtime enforcement: In [24], Martinelli and Matteucci tackle the synthesis of enforcement mechanisms as defined by Ligatti. More generally, the authors consider security automata and edit-automata. The monitor is modelled by an algebraic operator expressed in CCS. The program under scrutiny is then a term Y ⊳K X, where X is the target program, Y the controller program, and ⊳K the operator modeling the monitor, K being the kind of monitor (truncation, insertion, suppression or edit). The desired property for the underlying system is formalized using μ-calculus. In [25], Matteucci extends the approach to the context of real-time systems. In [15], we defined transformations for some classes of the safety-progress classification of properties. These class-specific transformations take as input a Streett automaton recognizing a property and produce an enforcement monitor for this property.
4 The SP Classification in a Runtime Context
This section presents minimal theoretical background on the safety-progress classification of properties, introduced by Manna and Pnueli in [13,14], in a runtime
verification context. This classification introduced a hierarchy between properties defined over infinite execution sequences. We extend the classification to deal with finite-length execution sequences. Accordingly, we consider r-properties, which are suitable to express runtime properties. This hierarchy presents properties in a uniform way according to four views: a language view (seeing properties as sets of sequences), a logical view (seeing properties as LTL formulas), a topological view (seeing properties as open or closed sets), and an automata view (seeing properties as Streett automata [26]). We only present the results about the automata view, as needed for ongoing discussions in this paper. A graphical representation of the safety-progress classification of properties is depicted in Fig. 1. Further details and results can be found in [27].

Fig. 1. The SP classification

For each class of the SP classification it is possible to syntactically characterize a recognizing automaton. We define a variant of deterministic and complete Streett automata (introduced in [26]) for property recognition by adding to the original Streett automata a finite-sequence recognizing criterion, in such a way that these automata uniformly recognize r-properties.

Definition 1 (Streett m-automaton). A deterministic Streett m-automaton is a tuple A = (Q, qinit, Σ, −→, {(R1, P1), . . . , (Rm, Pm)}) defined relatively to a set of events Σ. The set Q is the set of automaton states, qinit ∈ Q is the initial state. The function −→ : Q × Σ → Q is the transition function. In the following, for q, q′ ∈ Q, e ∈ Σ, we abbreviate −→(q, e) = q′ by q −e→ q′. The set {(R1, P1), . . . , (Rm, Pm)} is the set of accepting pairs; for all i ≤ m, Ri ⊆ Q are the sets of recurrent states, and Pi ⊆ Q are the sets of persistent states.
We refer to an automaton with m accepting pairs as an m-automaton. When m = 1, a 1-automaton is also called a plain-automaton, and we refer to R1 and P1 as R and P. For σ ∈ Σ∞, the run of σ on A is the sequence of states involved by the execution of σ on A. It is formally defined as run(σ, A) = q0 · q1 · · · where q0 = qinitA and ∀i · (qi ∈ QA ∧ qi −σi→ qi+1). The trace resulting from the execution of σ on A is the unique (finite or infinite) sequence of tuples (q0, σ0, q1) · (q1, σ1, q2) · · · where run(σ, A) = q0 · q1 · · ·. The uniqueness of the trace is due to the fact that we consider only deterministic Streett automata. We also consider the notion of infinite visitation of an execution sequence σ ∈ Σω on a Streett automaton A, denoted vinf(σ, A), defined as the set of states appearing infinitely often in run(σ, A). Formally: vinf(σ, A) = {q ∈ QA | ∀n ∈ N, ∃m ∈ N · m > n ∧ q = qm} with run(σ, A) = q0 · q1 · · ·. Acceptance conditions (for finite and infinite sequences) are defined using the accepting pairs.
Definition 2 (Acceptance condition (finite sequences)). For σ ∈ Σ∗ s.t. |σ| = n, we say that the m-automaton A accepts σ if there exist q0, . . . , qn ∈ QA s.t. run(σ, A) = q0 · · · qn, q0 = qinitA, and ∀i ∈ [1, m] · qn ∈ Pi ∪ Ri.
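To make Definitions 1 and 2 concrete, here is a minimal sketch (ours, not the authors' implementation) of a deterministic plain Streett automaton (m = 1), with the run of a finite word and the finite-sequence acceptance condition; the automaton encoded at the end is the request/acknowledgement response automaton discussed later in Sect. 5.1.

```python
# Sketch of a deterministic, complete Streett plain-automaton (m = 1):
# delta maps (state, event) to the next state, R holds the recurrent
# states and P the persistent states.
from dataclasses import dataclass

@dataclass
class Streett1:
    init: object
    delta: dict       # (state, event) -> state, assumed complete
    R: frozenset      # recurrent states
    P: frozenset      # persistent states

    def run(self, word):
        """States visited by `word` (unique, since delta is deterministic)."""
        states = [self.init]
        for e in word:
            states.append(self.delta[(states[-1], e)])
        return states

    def accepts_finite(self, word):
        """Definition 2 with m = 1: the last state must lie in P ∪ R."""
        return self.run(word)[-1] in self.P | self.R

# Response automaton for "every request is acknowledged": state 1 is
# the acknowledged state (R = {1}), state 2 has a pending request.
A = Streett1(init=1,
             delta={(1, "req"): 2, (1, "ack"): 1,
                    (2, "req"): 2, (2, "ack"): 1},
             R=frozenset({1}), P=frozenset())
```

For instance, `A.accepts_finite(["req", "ack"])` holds (the run ends in the recurrent state 1), while a word ending with a pending `req` is not accepted.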
Runtime Verification of Safety-Progress Properties
Definition 3 (Acceptance condition (infinite sequences)). For σ ∈ Σω, we say that A accepts σ if ∀i ∈ [1, m] · vinf(σ, A) ∩ Ri ≠ ∅ ∨ vinf(σ, A) ⊆ Pi. Note that this notion of acceptance for finite sequences coincides exactly with the one proposed in [4] for the RV-LTL temporal logic.
The hierarchy of automata. By setting syntactic restrictions on a Streett automaton, we modify the kind of properties recognized by such an automaton. Each class is characterized by conditions on the transition function and the accepting pairs. A safety automaton is a plain automaton such that R = ∅ and there is no transition from a state q ∉ P to a state q′ ∈ P. A guarantee automaton is a plain automaton such that P = ∅ and there is no transition from a state q ∈ R to a state q′ ∉ R. An m-obligation automaton is an m-automaton such that, for each i in [1, m], there is no transition from q ∉ Pi to q′ ∈ Pi and no transition from q ∈ Ri to q′ ∉ Ri. A response automaton is a plain automaton such that P = ∅, and a persistence automaton is a plain automaton such that R = ∅. Finally, a reactivity automaton is any unrestricted automaton. It is possible to link these syntactic characterizations of automata to the semantic characterization of the properties they specify, as stated by the following definition (transposed from an initial theorem of [13,14]).
Definition 4. An r-property (φ, ϕ) is a κ-r-property iff it is specifiable by a κ-automaton, where κ ∈ {safety, guarantee, obligation, response, persistence, reactivity}.
We note Safety(Σ) (resp. Guarantee(Σ), Obligation(Σ), Response(Σ), Persistence(Σ), Reactivity(Σ)) the set of safety (resp. guarantee, obligation, response, persistence, reactivity) r-properties over Σ. Moreover, an r-property of a given class is pure when it belongs to none of the strict sub-classes of that class.
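The syntactic restrictions above translate directly into checks over an automaton's transitions. The sketch below (our helper names, plain automata only) tests membership in the basic classes:

```python
# Syntactic class tests for a plain automaton (m = 1), following the
# restrictions stated above; delta maps (state, event) -> state.

def _moves(delta):
    return [(q, q2) for (q, _e), q2 in delta.items()]

def is_safety(delta, R, P):
    # R = ∅ and no transition re-entering P from outside P
    return not R and all(q in P or q2 not in P for q, q2 in _moves(delta))

def is_guarantee(delta, R, P):
    # P = ∅ and no transition leaving R
    return not P and all(q not in R or q2 in R for q, q2 in _moves(delta))

def is_response(delta, R, P):
    return not P       # plain automaton with P = ∅

def is_persistence(delta, R, P):
    return not R       # plain automaton with R = ∅
```

For instance, the request/acknowledgement automaton (R = {1}, P = ∅) passes `is_response` but fails `is_guarantee`, since its transition from state 1 to state 2 leaves R.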
5
Monitorability wrt. the SP Classification
In this section we first revisit existing monitorability results within the safety-progress classification of properties. Second, we propose an alternative definition of monitorability. Indeed, characterizing the space of "monitorable" properties depends on several parameters: the property semantics for finite sequences, the set of monitor verdicts considered, and the exact definition of monitoring. 5.1
Classical Definition of Monitoring
The main objective of monitoring, in its classical definition, is to evaluate an (infinitary) property ϕ on a possibly infinite execution sequence from one of its finite prefixes. This definition is formalized below.
Definition 5 (Positive/negative determinacy [2]). An r-property Π ⊆ Σ∗ × Σω is said to be:
– negatively determined by σ ∈ Σ∗ if ¬Π(σ) ∧ ∀μ ∈ Σ∞ · ¬Π(σ · μ);
– positively determined by σ ∈ Σ∗ if Π(σ) ∧ ∀μ ∈ Σ∞ · Π(σ · μ).
Definition 6 (Monitorable r-properties [2]). An r-property Π is:
– σ-monitorable if there exists a (finite) μ ∈ Σ∗ s.t. Π is positively or negatively determined by σ · μ;
– monitorable if it is σ-monitorable for every σ ∈ Σ∗.
The underlying assumed truth-domain is B3 = {⊤, ?, ⊥}. The value ⊤ expresses property satisfaction when the property is positively determined, the value ⊥ expresses property violation when the property is negatively determined, and the value ? expresses that no verdict can be produced (see Def. 7). Within the B3 lattice, the boolean operators ∨ and ∧ are defined as upper and lower bounds, respectively. In this context it can be shown that the set of properties monitorable with B3 strictly contains the set of obligation properties. In the following, for a truth-domain B, we note MP(B) the space of monitorable properties according to this definition.
Theorem 1 (Obligation(Σ) ⊂ MP(B3)). The obligation properties are strictly contained in the set of monitorable properties with B3.
Proof. Obligation r-properties are obtained as boolean combinations of safety and guarantee r-properties: for k ∈ N, a k-obligation r-property is an r-property of the form (Safety1 ∪ Guarantee1) ∩ · · · ∩ (Safetyk ∪ Guaranteek), where the Safetyi and Guaranteei are safety and guarantee r-properties, and the set of all k-obligation r-properties for k ∈ N is the set of obligation r-properties. Let Π ∈ Obligation(Σ); there exists k ∈ N s.t. Π ∈ k-Obligation(Σ). The proof relies on an easy induction on k and uses the following facts:
– Safety and guarantee properties are monitorable. By examining the syntactic restrictions of an automaton recognizing a safety or a guarantee property, we have: for all σ ∈ Σ∗ there exists a continuation μ s.t. the property is positively or negatively determined by σ · μ.
– The union and the intersection of two monitorable properties are monitorable.
– Example 1 shows that the inclusion is strict.
Thus, we have extended the previous bound established by Bauer et al.
in [4], stating that Safety(Σ) ∪ Guarantee(Σ) ⊂ MP(B3)². Indeed, the set of obligation properties is a strict superset of the union of the safety and guarantee properties.
Beyond obligation properties. Following the classical definition of monitorability, it is possible to show that there exist both non-monitorable and monitorable properties in the super-classes of the Obligation class. The two properties below are pure response properties: one is not monitorable, the other one is.
Example 1 (Non-monitorable response property [4]). The (response) property "Every request should be acknowledged" is not monitorable.
² In [4], guarantee properties are named co-safety properties.
[Streett response automaton for Example 1: states 1 and 2, R = {1}; transitions 1 −ack→ 1, 1 −req→ 2, 2 −req→ 2, 2 −ack→ 1.]
This property is represented by the Streett (response) automaton above, with R = {1}. For this property, there are two limitations for monitoring with the considered truth-domain and definition of monitorability. First, it is impossible to distinguish correct finite sequences (ending in state 1) from incorrect ones (ending in state 2): both evaluate to "?". Second, for any finite sequence, it is never possible to decide ⊤ or ⊥, since every finite sequence can be extended to both correct and incorrect infinite continuations.
Example 2 (Monitorable response property). The (response) property "Every request should be acknowledged, and two successive requests (without an acknowledgement in between) are forbidden" is monitorable.
[Streett response automaton for Example 2: states 1, 2 and 3, R = {1}; transitions 1 −ack→ 1, 1 −req→ 2, 2 −ack→ 1, 2 −req→ 3, and 3 −Σ→ 3 (a sink).]
This property is represented by the Streett (response) automaton above, with R = {1}. Intuitively, given any execution sequence, this r-property can always be negatively determined by one of its extensions.
Monitorability with B2. Restricting B3 to a truth-domain of cardinality 2 allows only either positive or negative determinacy, and hence reduces the set of monitorable properties. However, there is no simple characterization of these properties in the safety-progress hierarchy. Intuitively one might think that with B⊥2 = {?, ⊥} the set of monitorable properties would be the set of safety properties. But in fact, there are numerous safety properties which can never be negatively determined. For example, the r-property true = (Σ∗, Σω) can be neither negatively determined nor falsified. Moreover, all safety properties which hold forever on execution sequences longer than a given k are not σ-monitorable with B⊥2 when |σ| > k. For those kinds of properties, a monitor would produce only sequences of "?" when evaluating an execution sequence. Similarly, there exist many guarantee properties that cannot be positively determined, and which are therefore not monitorable with B⊤2 = {?, ⊤}. However, in Sect. 7 we give a syntactic criterion on Streett automata to determine whether an r-property (recognized by a Streett automaton) is monitorable under these conditions. 5.2
An Alternative Definition of Monitoring
The interest of the previous definitions of monitorability lies in two facts: the underlying truth-domain is 2-valued or 3-valued, and the aim is the production of verdicts for infinitary properties. Although it is possible to give a semantics to all reactive properties with either a 2-valued or a 3-valued truth-domain, the question is whether those values make sense for some properties in a monitoring context. As noticed in [4,23], it seems interesting to investigate further the space of monitorable properties, and to answer more precisely questions like "what verdict should be issued if the program execution stops here?". This calls for a better distinction between the finite sequences which all evaluate to ? in a 2-valued or 3-valued truth-domain.
Hence, the authors proposed to consider a 4-valued truth-domain B4 = {⊤, ⊤p, ⊥p, ⊥}. The truth-value ⊤p (resp. ⊥p) denotes "presumably true" (resp. "presumably false") and expresses Π-satisfaction (resp. Π-violation) if the program execution stops here. The boolean operators ∨ and ∧ on B4 are defined in [4]. Using B4 leads to an alternative definition of monitoring. This new definition leverages the evaluation of finite sequences in the safety-progress classification framework.
Property evaluation in a truth-domain. We first introduce how, given an r-property, we evaluate an execution sequence in the truth-domains considered so far.
Definition 7 (Property evaluation wrt. a truth-domain). For each of the possible truth-domains B, we define the evaluation function [[·]]B(·) : (Σ∗ × Σω) × Σ∗ → B as follows:
For B⊥2: [[Π]]B⊥2(σ) = ⊥ if ¬Π(σ) ∧ ∀μ ∈ Σ∞ · ¬Π(σ · μ), and [[Π]]B⊥2(σ) = ? otherwise.
For B⊤2: [[Π]]B⊤2(σ) = ⊤ if Π(σ) ∧ ∀μ ∈ Σ∞ · Π(σ · μ), and [[Π]]B⊤2(σ) = ? otherwise.
For B3: [[Π]]B3(σ) = ⊥ if ¬Π(σ) ∧ ∀μ ∈ Σ∞ · ¬Π(σ · μ); [[Π]]B3(σ) = ⊤ if Π(σ) ∧ ∀μ ∈ Σ∞ · Π(σ · μ); and [[Π]]B3(σ) = ? otherwise.
For B4: [[Π]]B4(σ) = [[Π]]B3(σ) if [[Π]]B3(σ) = ⊥ or [[Π]]B3(σ) = ⊤; [[Π]]B4(σ) = ⊤p if [[Π]]B3(σ) = ? and Π(σ); and [[Π]]B4(σ) = ⊥p if [[Π]]B3(σ) = ? and ¬Π(σ).
An alternative definition of monitorability. Intuitively, the monitorability notion we propose relies on the ability of a given monitor to distinguish between good and bad finite execution sequences with respect to a property Π.
Definition 8 (Monitorability). An r-property Π = (φ, ϕ) is said to be monitorable with the truth-domain B, or B-monitorable, iff
∀σgood ∈ φ, ∀σbad ∈ Σ∗ \ φ : [[Π]]B(σgood) ≠ [[Π]]B(σbad).
We note MP∗(B) the set of properties monitorable with the truth-domain B according to this definition.
Theorem 2 (Multi-valued characterization of monitorability). The sets of monitorable properties according to the truth-domains considered so far are the following:
MP∗(B⊥2) = Safety(Σ)
MP∗(B⊤2) = Guarantee(Σ)
MP∗(B3) ⊂ Obligation(Σ) and Safety(Σ) ∪ Guarantee(Σ) ⊂ MP∗(B3)
MP∗(B4) = Reactivity(Σ)
Example 3 (Monitoring of an obligation property). Let us consider the LTL property Π = □p ∨ ♦q, stating that the state-predicate p should always hold or q should eventually hold. This is an obligation property. Consider the following execution sequences: σgood = {p} · {p} and σbad = ∅ · {p}. In B3, we have [[Π]]B3(σgood) = [[Π]]B3(σbad) = ?. Thus, Π is not B3-monitorable. However, Π is B4-monitorable, with [[Π]]B4(σgood) = ⊤p and [[Π]]B4(σbad) = ⊥p. We will show in Sect. 7 that, for a given finite sequence σ, [[Π]]B4(σ) is easy to compute from a Streett automaton recognizing Π.
Remark 1. It is worth noticing that the interpretation of finite sequences with "weak verdicts" (⊥p, ⊤p) extends to infinite sequences in a consistent way, depending on the class of properties under consideration:
– for a safety property Π, (∀i ∈ N, [[Π]](σ···i) = ⊤p) ⇒ Π(σ);
– for a guarantee property Π, (∀i ∈ N, [[Π]](σ···i) = ⊥p) ⇒ ¬Π(σ);
– for a response property Π, (∃ infinitely many i ∈ N, [[Π]](σ···i) = ⊤p) ⇒ Π(σ);
– for a persistence property Π, (∃ infinitely many i ∈ N, [[Π]](σ···i) = ⊥p) ⇒ ¬Π(σ).
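As an illustration, the B4 evaluation of Definition 7 can be transcribed directly for the property of Example 3. The three predicates below are hand-written for the specific formula □p ∨ ♦q (an assumption of this sketch, not a general decision procedure):

```python
# B4 evaluation specialized to Π = □p ∨ ♦q, over finite words whose
# letters are sets of atomic propositions.

def holds(sigma):
    """Π(σ) for finite σ: p held in every state so far, or q occurred."""
    return all("p" in st for st in sigma) or any("q" in st for st in sigma)

def pos_determined(sigma):
    """Once q occurs, every continuation satisfies Π (♦q holds forever)."""
    return any("q" in st for st in sigma)

def neg_determined(sigma):
    """♦q can always still be satisfied by an extension, so Π is never
    negatively determined by a finite word."""
    return False

def evaluate_B4(sigma):
    if neg_determined(sigma):
        return "⊥"
    if pos_determined(sigma):
        return "⊤"
    return "⊤p" if holds(sigma) else "⊥p"
```

On the sequences of Example 3, σgood = {p} · {p} evaluates to ⊤p and σbad = ∅ · {p} to ⊥p, while any word containing q evaluates to the definitive ⊤.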
6
Enforceability wrt. the SP Classification
In Sect. 3, we saw that the previously proposed spaces of enforceable properties were delineated according to the mechanism used to enforce the properties. Such mechanisms should obey the soundness and transparency constraints. We choose here to take an alternative approach. Indeed, we believe that the set of enforceable properties can be characterized independently of any enforcement mechanism complying with these constraints. This will give us an upper bound on the set of enforceable properties. 6.1
Enforcement Criteria
A consequence of transparency is that an r-property (φ, ϕ) will be considered enforceable only if each incorrect infinite sequence has a longest correct prefix. This means that any infinite incorrect sequence should have only a finite number of correct prefixes. This transparency requirement can be seen from both the language and automata views of r-properties. Thus we give two equivalent enforcement criteria, one for each view of r-properties.³
³ Note that those (equivalent) criteria differ from the existence of bad prefixes: bad prefixes are sequences which cannot be extended to correct (finite or infinite) ones.
Definition 9 (Enforcement criterion (language view)). An r-property (φ, ϕ) is said to be enforceable iff
∀σ ∈ Σω, ¬ϕ(σ) ⇒ (∃σ′ ∈ Σ∗, σ′ ≺ σ, ∀σ″ ∈ Σ∗ · σ′ ≺ σ″ ≺ σ ⇒ ¬φ(σ″))   (1)
An r-property Π recognized by a Streett automaton AΠ is said to be enforceable iff every maximal strongly-connected component (SCC) made only of non-recurrent states contains only P-states or only non-P-states.
Definition 10 (Enforcement criterion (automata view)). Denoting by S(AΠ) the set of SCCs of AΠ, an m-automaton recognizing Π, the property Π is said to be enforceable iff
∀i ∈ [1, m], ∀s ∈ S(AΠ), s ⊆ QAΠ \ Ri ⇒ (s ⊆ Pi ∨ s ∩ Pi = ∅)   (2)
Enforcement criteria of Def. 9 and 10 are equivalent for the basic classes of properties, as stated below.
Property 1 (Equivalence between enforcement criteria (basic classes)). Considering an r-property Π = (φ, ϕ) of a basic class, recognized by a Streett automaton (QAΠ, qinitAΠ, Σ, →AΠ, {(R, P)}), we have: (1) ⇔ ∀s ∈ S(AΠ), s ⊆ QAΠ \ R ⇒ (s ⊆ P ∨ s ∩ P = ∅).
Proof. The proof relies on the computation of the maximal strongly connected components (SCCs) of a Streett automaton [28], and proceeds by proving both implications.
(1) ⇒ (2). Consider an SCC of AΠ containing only non-R-states, and suppose that there exist two states q, q′ in this SCC s.t. q ∈ P and q′ ∉ P. As q and q′ are in the same SCC, there exist a path from q to q′ and a path from q′ to q in AΠ. Then there would exist an infinite execution sequence σ whose run on AΠ contains infinitely many occurrences of q and q′. As this SCC is made of non-R-states, σ is not accepted by AΠ (since vinf(σ, AΠ) ∩ R = ∅ and vinf(σ, AΠ) ⊄ P), i.e., ¬ϕ(σ). However, σ has an infinite number of "good" prefixes: all prefixes whose run ends in a P-state. This contradicts (1).
(2) ⇒ (1). Consider σ ∈ Σω s.t. ¬ϕ(σ). As AΠ recognizes Π, σ is not accepted by AΠ. As AΠ is a finite-state automaton, the run of σ on AΠ eventually remains, from some index k on, within the set vinf(σ, AΠ), which is strongly connected. Since σ is not accepted, vinf(σ, AΠ) ∩ R = ∅, so these are non-R-states, and by (2) we know that vinf(σ, AΠ) ⊆ P ∨ vinf(σ, AΠ) ∩ P = ∅.
• In the first case, the sequence σ is accepted by AΠ (Def. 3), since vinf(σ, AΠ) ⊆ P. This contradicts ¬ϕ(σ).
• In the second case, the states visited beyond index k are neither in R nor in P, so the finite-sequence acceptance criterion (Def. 2) gives ∀c ∈ N, c ≥ k ⇒ ¬Π(σ···c), i.e., σ has finitely many correct prefixes, which is (1).
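Criterion (2) is easy to check mechanically. The sketch below (ours, with a quadratic mutual-reachability SCC computation, adequate for small automata) tests enforceability of a plain automaton given its transition graph:

```python
# Enforcement criterion (2) for a plain automaton (m = 1): every SCC
# consisting only of non-recurrent states must lie entirely inside P
# or entirely outside P. `edges` maps a state to its successor set.

def reach(edges, src):
    seen, todo = {src}, [src]
    while todo:
        for w in edges.get(todo.pop(), ()):
            if w not in seen:
                seen.add(w)
                todo.append(w)
    return seen

def sccs(states, edges):
    r = {q: reach(edges, q) for q in states}
    comps, done = [], set()
    for q in states:
        if q not in done:
            comp = {v for v in states if q in r[v] and v in r[q]}
            comps.append(comp)
            done |= comp
    return comps

def enforceable(states, edges, R, P):
    nonrec = states - R
    return all(not s <= nonrec or (s <= P or not (s & P))
               for s in sccs(states, edges))
```

With the same two-state transition graph, the response labelling R = {1}, P = ∅ is enforceable, while the persistence labelling R = ∅, P = {1} (the automaton discussed in Sect. 6.2) is not: its unique SCC consists of non-recurrent states and mixes a P-state with a non-P-state.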
Property 2 (Comparing enforcement criteria for compound classes). Considering an r-property Π = (φ, ϕ) recognized by a Streett automaton (QAΠ, qinitAΠ, Σ, →AΠ, {(R1, P1), . . . , (Rm, Pm)}), we have: (1) ⇔ (2) for obligation properties, and (1) ⇐ (2) for reactivity properties.
Proof. We sketch the proof for those classes of properties.
For obligation properties. Similarly to the proof of Prop. 1, the proof relies on the fact that in an m-obligation automaton, for each i ∈ [1, m], there is no transition from Ri-states to non-Ri-states and no transition from non-Pi-states to Pi-states.
For reactivity properties. Consider σ ∈ Σω s.t. ¬ϕ(σ). As in the proof of Prop. 1 (⇐ direction), the run of σ on AΠ eventually remains, from some index k on, within the strongly connected set vinf(σ, AΠ), whose states are, by (2), uniformly inside or outside each Pi. We then have ∀σ′ ∈ Σ∗, σ···k ≺ σ′ ≺ σ ⇒ ¬Π(σ′). Indeed, otherwise it would mean that ∀i ∈ [1, m], ∀j ≥ k, qj ∈ Pi, which would lead to ϕ(σ) by the infinite-sequence acceptance condition of Streett automata, contradicting ¬ϕ(σ).
The set of enforceable r-properties is denoted EP. We now characterize this set wrt. the SP classification. We will prove that the class of enforceable properties is exactly the class of response properties. The enforcement criterion in the automata view remains useful, as it provides a sufficient condition on automata: given any automaton, it yields a syntactic procedure to determine whether the recognized property is enforceable. 6.2
Enforceable Properties
We first prove that response properties (defined in Sect. 4) are enforceable and give an example of a persistence property that is not enforceable. Then we show that the set of response properties is exactly the set of enforceable ones.
Theorem 3 (Response properties are enforceable). Response(Σ) ⊆ EP.
Proof. Consider a response r-property Π = (φ, ϕ) and an execution sequence σ ∈ Σω. As Π ∈ Response(Σ), it can be expressed as (Rf(ψ), R(ψ)) for some ψ ⊆ Σ∗. Suppose that ¬ϕ(σ). It means that σ ∉ R(ψ), i.e., σ has finitely many prefixes belonging to ψ. Consider the set S = {σ′ ∈ Σ∗ | σ′ ≺ σ ∧ ∀σ″ ∈ Σ∗, σ′ ≺ σ″ ≺ σ ⇒ ¬ψ(σ″)} of prefixes of σ beyond which no further prefix of σ satisfies ψ. As ¬R(ψ), this set is not empty. Let σ0 denote its smallest element with respect to ≺. We have ∀σ″ ∈ Σ∗, σ0 ≺ σ″ ≺ σ ⇒ ¬ψ(σ″). Since Rf(ψ) ⊆ ψ for every ψ ⊆ Σ∗, it follows that ∀σ″ ∈ Σ∗, σ0 ≺ σ″ ≺ σ ⇒ ¬φ(σ″), which is exactly (1).
A straightforward consequence is that safety, guarantee and obligation r-properties are enforceable. We now prove that, in fact, pure persistence properties are not enforceable. An example of a pure persistence r-property is Π = (Σ∗ · a+, Σ∗ · aω), stating that "eventually, a always occurs". One can notice that this property is neither a safety, a guarantee, nor an obligation property.
[Streett persistence automaton AΠ for Π = (Σ∗ · a+, Σ∗ · aω): states 1 and 2, P = {1}; transitions 1 −a→ 1, 1 −Σ\{a}→ 2, 2 −a→ 1, 2 −Σ\{a}→ 2.]
Π is recognized by the Streett automaton AΠ above, with acceptance criterion vinf(σ, AΠ) ⊆ P and P = {1}. One can understand the enforcement limitation intuitively with the following argument: if this property were enforceable, an EM could decide from a certain point on that the underlying program will always produce the event a. However, such a decision can never be taken by a monitor without memorizing the entire execution sequence beforehand, which is unrealistic for an infinite sequence. More formally, as stated in the previous section, an r-property (φ, ϕ) is enforceable if for every infinite execution sequence σ with ¬ϕ(σ), the longest prefix of σ satisfying φ exists. For the above automaton, the execution sequence σbad = (a · b)ω (for some b ≠ a) exhibits the problem: this infinite sequence does not satisfy the property, whereas an infinite number of its prefixes do (namely those ending with a). Applying the enforcement criteria (Def. 9 and 10) to persistence properties, it turns out that the enforceable persistence properties are in fact response properties.
Theorem 4 (Enforceable persistence properties are response properties). Persistence(Σ) ∩ EP ⊆ Response(Σ).
Proof. An r-property becomes non-enforceable as soon as its recognizing automaton has an SCC of non-recurrent states containing both a P-state and a non-P-state (see Def. 10). Indeed, on a Streett automaton this allows infinite invalid execution sequences with an infinite number of valid prefixes. When this possibility is removed from a Streett automaton, the constrained automaton can easily be translated into a response automaton. Indeed, on the constrained automaton, the states visited infinitely often are either all in P or all outside P, that is: ∀σ ∈ Σω · vinf(σ, AΠ) ∩ P ≠ ∅ ⇔ vinf(σ, AΠ) ⊆ P. On such an automaton there is thus no difference between R-states and P-states: by retagging the P-states as R-states, the automaton recognizes the same property. The retagged automaton is a response automaton.
Corollary 1. Pure persistence properties are not enforceable: (Persistence(Σ) \ Response(Σ)) ∩ EP = ∅.
Proof. This is a direct consequence of Theorem 4.
Corollary 2. Pure reactivity properties are not enforceable: Reactivity(Σ) ⊈ EP and (Reactivity(Σ) \ (Persistence(Σ) ∪ Response(Σ))) ∩ EP = ∅.
Proof. This is a direct consequence of Corollary 1. A general reactivity property can be expressed as a composition of response and persistence properties; in particular, pure persistence properties are included in the set of reactivity properties. Consequently, the persistence part of a reactivity property is not enforceable.
Corollary 3. Enforceable properties are exactly the response properties: EP = Response(Σ).
Proof. It remains to prove that the set of enforceable properties is included in the set of response properties. Suppose that there exists an enforceable property which is not a response property. Then, according to the definition of the safety-progress hierarchy, this property would be a pure persistence or a pure reactivity property. Consequently, by Corollaries 1 and 2, this property would not be enforceable.
7
Monitor Synthesis
Now we show how to obtain a monitor either for verifying or for enforcing a property. Generally speaking, a monitor is a device processing an input sequence of events or states in an incremental fashion, and yielding a property-specific decision according to its goal. In (classical) runtime verification such a decision is a truth-value taken from a truth-domain; this truth-value states an appraisal of property satisfaction or violation by the input sequence. For runtime enforcement, the monitor produces a sequence of enforcement operations. The monitor uses an internal memory and applies enforcement operations to the input event and its current memory so as to modify the input sequence and produce an output sequence. The relation between input and output sequences should follow the enforcement monitoring constraints: soundness and transparency (Sect. 3.2). In the following we consider two Streett m-automata A = (QA, qinitA, Σ, −→A, {(R1, P1), . . . , (Rm, Pm)}) and AΠ = (QAΠ, qinitAΠ, Σ, →AΠ, {(R1, P1), . . . , (Rm, Pm)}), with Π the r-property recognized by AΠ. Moreover, we evaluate properties only in B4, and consequently abbreviate [[Π]]B4(·) by [[Π]](·). 7.1
Characterizing States of Streett Automata
We will define monitors (for verification and enforcement) from Streett automata. To do so, we define a set of subsets of the Streett automaton states. The set PA = {GoodA, GoodpA, BadpA, BadA} is a set of subsets of QA, where GoodA, GoodpA, BadpA, BadA designate respectively the good (resp. presumably good, presumably bad, bad) states. Writing AccA = (R1 ∪ P1) ∩ · · · ∩ (Rm ∪ Pm) for the set of states satisfying the finite-sequence acceptance condition, and ReachA(q) for the set of states reachable from q, the set PA is defined as follows:
– GoodA = {q ∈ AccA | ReachA(q) ⊆ AccA}
– GoodpA = {q ∈ AccA | ReachA(q) ⊈ AccA}
– BadpA = {q ∈ QA \ AccA | ReachA(q) ∩ AccA ≠ ∅}
– BadA = {q ∈ QA \ AccA | ReachA(q) ∩ AccA = ∅}
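The partition above reduces to two reachability questions per state. A sketch (our helper names) for a plain automaton, where a state is accepting when it lies in R ∪ P:

```python
# Classify the states of a plain Streett automaton (m = 1) into
# Good / Goodp / Badp / Bad, based on which accepting states (R ∪ P)
# remain reachable. `edges` maps a state to its successor set.

def reachable(edges, src):
    seen, todo = {src}, [src]
    while todo:
        for w in edges.get(todo.pop(), ()):
            if w not in seen:
                seen.add(w)
                todo.append(w)
    return seen

def classify(states, edges, R, P):
    acc = R | P
    cls = {}
    for q in states:
        rq = reachable(edges, q)
        if q in acc:
            cls[q] = "Good" if rq <= acc else "Goodp"
        else:
            cls[q] = "Badp" if rq & acc else "Bad"
    return cls
```

On the request/acknowledgement automaton of Example 1 (R = {1}, P = ∅), state 1 is classified Goodp and state 2 Badp, matching the ⊤p/⊥p verdicts of Property 3; adding a sink outside R ∪ P, as in the automaton of Example 2, yields a Bad state.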
Note that QA = GoodA ∪ GoodpA ∪ BadpA ∪ BadA.
Property 3 (Correspondence between PA and B4). Given an m-automaton AΠ recognizing Π, and an execution sequence σ ∈ Σ∗ of length n s.t. run(σ, AΠ) = q0 · · · qn−1, we have:
qn−1 ∈ GoodAΠ ⇔ [[Π]](σ) = ⊤,    qn−1 ∈ BadpAΠ ⇔ [[Π]](σ) = ⊥p,
qn−1 ∈ GoodpAΠ ⇔ [[Π]](σ) = ⊤p,    qn−1 ∈ BadAΠ ⇔ [[Π]](σ) = ⊥.
Proof. The proof naturally falls into four steps. Consider an execution sequence σ ∈ Σ∗ of length n.
– Proof of qn−1 ∈ GoodAΠ ⇔ [[Π]](σ) = ⊤.
• Suppose that qn−1 ∈ GoodAΠ. Using the acceptance criterion for finite sequences, σ is accepted by AΠ and, as AΠ recognizes Π, we have Π(σ). Now, consider μ ∈ Σ+ s.t. |σ| + |μ| = n′ > n
and run(σ · μ, AΠ) = q0 · · · qn′−1. We have ∀k ∈ N, n ≤ k ≤ n′ − 1 ⇒ qk ∈ AccAΠ, and consequently Π(σ · μ). Moreover, considering μ ∈ Σω, we remark that vinf(σ · μ, AΠ) ⊆ AccAΠ. Then we obtain ∀i ∈ [1, m], vinf(σ · μ, AΠ) ∩ Ri ≠ ∅ ∨ vinf(σ · μ, AΠ) ⊆ Pi, implying Π(σ · μ). We thus have Π(σ) ∧ ∀μ ∈ Σ∞, Π(σ · μ), i.e., [[Π]](σ) = ⊤.
• Conversely, suppose that [[Π]](σ) = ⊤. By definition, it means that ∀μ ∈ Σ∞, Π(σ · μ). According to the acceptance criterion of Streett automata, we deduce that ∀k ≥ n, ∀μ ∈ Σ∗, run(σ · μ, AΠ) = q0 · · · qn−1 · · · qk ⇒ qk ∈ AccAΠ. That is, ReachAΠ(qn−1) ⊆ AccAΠ, i.e., qn−1 ∈ GoodAΠ.
– Proof of qn−1 ∈ GoodpAΠ ⇔ [[Π]](σ) = ⊤p.
• Suppose that qn−1 ∈ GoodpAΠ. Using the acceptance criterion for finite sequences, σ is accepted by AΠ and, as AΠ recognizes Π, we have Π(σ). Now, as ReachAΠ(qn−1) ⊈ AccAΠ, there exists a state q′ of AΠ reachable from qn−1 with q′ ∉ AccAΠ. As a consequence, there exists μ ∈ Σ∗ s.t. run(σ · μ, AΠ) = q0 · · · qn−1 · · · q′. With the acceptance criterion for finite sequences, we deduce ¬Π(σ · μ); hence σ does not positively determine Π, and [[Π]](σ) = ⊤p.
• Conversely, the same reasoning using the finite-sequence acceptance criterion proves the desired result.
– Proof of qn−1 ∈ BadpAΠ ⇔ [[Π]](σ) = ⊥p. Similarly, this equivalence follows directly from the finite-sequence acceptance criterion of Streett automata.
– Proof of qn−1 ∈ BadAΠ ⇔ [[Π]](σ) = ⊥. This can be proven along the same lines as the proof of qn−1 ∈ GoodAΠ ⇔ [[Π]](σ) = ⊤.
7.2
Back to the Notion of Monitorability
We saw in Sect. 5.1 that there is no exact characterization (in terms of a specific class of the SP classification) of the properties monitorable in the classical definition. It is, however, possible to determine whether a property is monitorable by a syntactic analysis of the states of its recognizing automaton.
Definition 11 (Monitorability (automata view)). The r-property Π recognized by the Streett m-automaton AΠ = (QAΠ, qinitAΠ, →AΠ, {(R1, P1), . . . , (Rm, Pm)}) is:
• MP(B⊥2)-monitorable iff ∀q ∈ QAΠ, qinitAΠ →∗AΠ q ⇒ ∃q′ ∈ BadAΠ, q →∗AΠ q′;
• MP(B⊤2)-monitorable iff ∀q ∈ QAΠ, qinitAΠ →∗AΠ q ⇒ ∃q′ ∈ GoodAΠ, q →∗AΠ q′;
• MP(B3)-monitorable iff ∀q ∈ QAΠ, qinitAΠ →∗AΠ q ⇒ ∃q′ ∈ BadAΠ ∪ GoodAΠ, q →∗AΠ q′.
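Definition 11 amounts to checking, from every reachable state, whether a verdict state of the appropriate kind remains reachable. A sketch (ours) for plain automata, parameterized by the set of definitive verdicts sought:

```python
# Monitorability checks of Definition 11 for a plain automaton (m = 1).
# A state yields the verdict "⊤" when only accepting states (R ∪ P)
# remain reachable from it, "⊥" when none do, and "?" otherwise.

def reachable(edges, src):
    seen, todo = {src}, [src]
    while todo:
        for w in edges.get(todo.pop(), ()):
            if w not in seen:
                seen.add(w)
                todo.append(w)
    return seen

def monitorable(init, edges, R, P, verdicts):
    acc = R | P
    def kind(q):
        rq = reachable(edges, q)
        if rq <= acc:
            return "⊤"        # q is a Good state
        if not (rq & acc):
            return "⊥"        # q is a Bad state
        return "?"
    return all(any(kind(q2) in verdicts for q2 in reachable(edges, q))
               for q in reachable(edges, init))
```

Pass `verdicts = {"⊥"}` for MP(B⊥2), `{"⊤"}` for MP(B⊤2), and `{"⊤", "⊥"}` for MP(B3). The automaton of Example 1 fails all three checks, while the automaton of Example 2 is MP(B⊥2)-monitorable and hence MP(B3)-monitorable.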
7.3
Verification and Enforcement Monitor Synthesis
A monitor is a procedure consuming events fed by an underlying program and producing, in its current state, an appraisal depending on the sequence read so far. The monitors we consider are deterministic finite-state machines producing an output in a relevant domain; this domain is refined for special-purpose monitors (verification and enforcement). For verification monitors, the output function gives a truth-value (a verdict) in B4 regarding the evaluation of the current sequence relative to the desired property. For enforcement monitors (EMs), the output function gives an enforcement operation inducing a modification of the input sequence so as to enforce the desired property.
Definition 12 (Monitor). A monitor A is a 5-tuple (QA, qinitA, −→A, XA, ΓA) defined relatively to a set of events Σ. The finite set QA denotes the control states and qinitA ∈ QA is the initial state. The complete function −→A : QA × Σ → QA is the transition function; in the following we abbreviate −→A(q, a) = q′ by q −a→A q′. The set of values XA depends on the purpose of the monitor (verification or enforcement). The function ΓA : QA → XA is an output function producing values in XA from states.
Starting from this general definition of a monitor, it is possible to synthesize dedicated monitors for runtime verification and enforcement. The syntheses are based on the definition of PA. For example, a verification monitor outputs ⊤p when the current sequence presumably satisfies the property, i.e., when the run of the monitor reaches a state in GoodpA of the corresponding Streett automaton A. An enforcement monitor produces a store operation when the current sequence does not satisfy the property while this execution sequence still has some "good" continuations (at least one); it switches off (off operation) when the run reaches a Good state. The synthesis procedure is explained in detail in [17].
In [17], we also formally define the notions of property verification and enforcement. For runtime verification, we show how an execution sequence is processed and verified by a verification monitor. For runtime enforcement, we describe how a given input sequence is transformed using the enforcement operations produced by the monitor. Besides, we prove that our synthesis procedures for verification and enforcement monitors are correct.
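As an illustration of the synthesis idea (a sketch of ours, not the procedure of [17]), a verification monitor can be read directly off a plain Streett automaton: its output function maps the class of the current state to a B4 verdict, following Property 3.

```python
# A verification monitor read off a plain Streett automaton (m = 1):
# it consumes events one by one and outputs a B4 verdict computed from
# the class (Good/Goodp/Badp/Bad) of the current state.

class VerificationMonitor:
    def __init__(self, init, delta, R, P):
        self.state = init
        self.delta = delta                 # (state, event) -> state
        self.acc = set(R) | set(P)         # finite-acceptance states
        self.edges = {}                    # successor graph from delta
        for (q, _e), q2 in delta.items():
            self.edges.setdefault(q, set()).add(q2)

    def _reachable(self, src):
        seen, todo = {src}, [src]
        while todo:
            for w in self.edges.get(todo.pop(), ()):
                if w not in seen:
                    seen.add(w)
                    todo.append(w)
        return seen

    def verdict(self):
        r = self._reachable(self.state)
        if self.state in self.acc:
            return "⊤" if r <= self.acc else "⊤p"
        return "⊥" if not (r & self.acc) else "⊥p"

    def read(self, event):
        self.state = self.delta[(self.state, event)]
        return self.verdict()
```

On the automaton of Example 2 (two successive requests forbidden), the monitor answers ⊥p after a first req, ⊤p again after an ack, and the definitive ⊥ as soon as two successive requests occur.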
8
Conclusion and Future Works
Conclusion. We addressed the problem of monitorability and enforceability of properties at runtime within a general framework, in which we characterized the sets of monitorable and enforceable properties in a unified way. We introduced a new definition of monitorability based on the distinguishability of good and bad execution sequences. This definition is weaker than the classical one (based on positive and negative determinacy), and we believe that it corresponds better to practical needs and tool implementations. Fig. 1 summarizes
the results of this paper, depicting the sets of monitorable and enforceable properties wrt. the SP classification. Furthermore, we have given general synthesis procedures to generate verification and enforcement monitors in this framework.
Future work. The proposed approach raises new research perspectives and open questions. First, it seems interesting to consider this approach from a testing perspective. A monitor (passively) observes the execution of the program; notably, it has no control over the produced events and their sequencing. In a testing context, the notion of controllable event is introduced. An interesting issue would be to characterize the set of testable properties in the SP framework. Note that the classical definition of monitoring could be rather appropriate in this context. An additional issue to take into consideration is dealing with reduced observability of the system under scrutiny: in practical situations, the desired property may refer to events outside the observation scope of a monitor. Similarly, it seems interesting to characterize the space of properties to which other runtime-verification-derived techniques can be applied (e.g., runtime reflection [23]). Another research perspective is to add expressiveness to EMs. Such augmented enforcers may enjoy more ability to manipulate the sequences produced by the program, and it seems interesting to study the impact on the set of enforceable properties. It also seems relevant to study and compare the complexity of the proposed monitors, notably with the monitors defined in [3] for RV-LTL; to the authors' knowledge these are the only (runtime) monitors endowed with a 4-valued truth-domain.
Acknowledgement. The authors would like to thank Susanne Graf, Yassine Lakhnech, and the referees for their helpful remarks.
References

1. Runtime Verification (2001-2009), http://www.runtime-verification.org
2. Pnueli, A., Zaks, A.: PSL Model Checking and Run-Time Verification Via Testers. In: Misra, J., Nipkow, T., Sekerinski, E. (eds.) FM 2006. LNCS, vol. 4085, pp. 573–586. Springer, Heidelberg (2006)
3. Bauer, A., Leucker, M., Schallhart, C.: Comparing LTL semantics for runtime verification. Journal of Logic and Computation (2008) (accepted for publication)
4. Bauer, A., Leucker, M., Schallhart, C.: Runtime verification for LTL and TLTL. Technical Report TUM-I0724, Institut für Informatik, Technische Universität München (2007)
5. Havelund, K., Goldberg, A.: Verify your runs. In: Meyer, B., Woodcock, J. (eds.) VSTTE 2005. LNCS, vol. 4171, pp. 374–383. Springer, Heidelberg (2008)
6. Roşu, G., Chen, F., Ball, T.: Synthesizing monitors for safety properties – this time with calls and returns. In: Leucker, M. (ed.) RV 2008. LNCS, vol. 5289, pp. 51–68. Springer, Heidelberg (2008)
7. Havelund, K., Roşu, G.: Efficient monitoring of safety properties. Software Tools and Technology Transfer (2002)
8. d'Amorim, M., Roşu, G.: Efficient monitoring of ω-languages. In: Etessami, K., Rajamani, S.K. (eds.) CAV 2005. LNCS, vol. 3576, pp. 364–378. Springer, Heidelberg (2005)
9. Schneider, F.B.: Enforceable security policies. ACM Trans. Inf. Syst. Secur. 3, 30–50 (2000)
10. Hamlen, K.W., Morrisett, G., Schneider, F.B.: Computability classes for enforcement mechanisms. ACM Trans. Program. Lang. Syst. 28, 175–205 (2006)
11. Viswanathan, M.: Foundations for the run-time analysis of software systems. PhD thesis, University of Pennsylvania, Philadelphia, PA, USA (2000)
12. Ligatti, J., Bauer, L., Walker, D.: Run-time enforcement of nonsafety policies. ACM Transactions on Information and System Security 12, 1–41 (2009)
13. Manna, Z., Pnueli, A.: A hierarchy of temporal properties (invited paper, 1989). In: PODC 1990: Proceedings of the Ninth Annual ACM Symposium on Principles of Distributed Computing, pp. 377–410. ACM, New York (1990)
14. Chang, E.Y., Manna, Z., Pnueli, A.: Characterization of temporal property classes. In: Kuich, W. (ed.) ICALP 1992. LNCS, vol. 623, pp. 474–486. Springer, Heidelberg (1992)
15. Falcone, Y., Fernandez, J.C., Mounier, L.: Synthesizing Enforcement Monitors wrt. the Safety-Progress Classification of Properties. In: Sekar, R., Pujari, A.K. (eds.) ICISS 2008. LNCS, vol. 5352, pp. 41–55. Springer, Heidelberg (2008)
16. Falcone, Y., Fernandez, J.C., Mounier, L.: Enforcement Monitoring wrt. the Safety-Progress Classification of Properties. In: SAC 2009: Proceedings of the 2009 ACM Symposium on Applied Computing, pp. 593–600. ACM, New York (2009)
17. Falcone, Y., Fernandez, J.C., Mounier, L.: Runtime Verification of Safety-Progress Properties. Technical Report TR-2009-6, Verimag Research Report (2009)
18. Kupferman, O., Vardi, M.Y.: Model checking of safety properties. Form. Methods Syst. Des. 19, 291–314 (2001)
19. Lamport, L.: Proving the correctness of multiprocess programs. IEEE Trans. Softw. Eng. 3, 125–143 (1977)
20. Alpern, B., Schneider, F.B.: Defining liveness. Technical report, Cornell University, Ithaca, NY, USA (1984)
21. Ligatti, J., Bauer, L., Walker, D.: Enforcing Non-safety Security Policies with Program Monitors. In: De Capitani di Vimercati, S., Syverson, P.F., Gollmann, D. (eds.) ESORICS 2005. LNCS, vol. 3679, pp. 355–373. Springer, Heidelberg (2005)
22. Chen, F., Roşu, G.: MOP: An Efficient and Generic Runtime Verification Framework. In: Object-Oriented Programming, Systems, Languages and Applications (OOPSLA 2007), pp. 569–588. ACM Press, New York (2007)
23. Leucker, M., Schallhart, C.: A brief account of runtime verification. Journal of Logic and Algebraic Programming 78, 293–303 (2008)
24. Martinelli, F., Matteucci, I.: Through modeling to synthesis of security automata. Electron. Notes Theor. Comput. Sci. 179, 31–46 (2007)
25. Matteucci, I.: Automated synthesis of enforcing mechanisms for security properties in a timed setting. Electron. Notes Theor. Comput. Sci. 186, 101–120 (2007)
26. Streett, R.S.: Propositional dynamic logic of looping and converse. In: STOC 1981: Proceedings of the Thirteenth Annual ACM Symposium on Theory of Computing, pp. 375–383. ACM, New York (1981)
27. Falcone, Y., Fernandez, J.C., Mounier, L.: Specifying Properties for Runtime Verification in the Safety-Progress Classification. Technical Report TR-2009-5, Verimag Research Report (2009)
28. Tarjan, R.: Depth-first search and linear graph algorithms. SIAM Journal on Computing 1, 146–160 (1972)
Monitor Circuits for LTL with Bounded and Unbounded Future

Bernd Finkbeiner and Lars Kuhtz

Universität des Saarlandes, 66123 Saarbrücken, Germany
{finkbeiner,kuhtz}@cs.uni-sb.de
Abstract. Synthesizing monitor circuits for LTL formulas is expensive, because the number of flip-flops in the circuit is exponential in the length of the formula. As a result, the IEEE standard PSL recommends restricting monitoring to the simple subset and using the full logic only for static verification. We present a novel construction for the synthesis of monitor circuits from specifications in LTL. In our construction, only subformulas with unbounded-future operators contribute to the exponential blowup. We split the specification into a bounded and an unbounded part, apply specialized constructions for each part, and then compose the results into a monitor for the original specification. Since the unbounded part of practical specifications is often very small, we argue that, with the new construction, it is no longer necessary to restrict runtime verification to the simple subset.
1 Introduction
In runtime verification, we monitor the running system and check on-the-fly whether the desired properties hold. Unlike in static verification, where the verification algorithm is executed at design time and can therefore afford to spend significant time and resources, runtime verification algorithms must run in synchrony with the monitored system and usually even share the resources of the implementation platform. For specifications in succinct temporal logics, such as LTL, this is problematic, because one can easily specify properties that are hard to monitor. For example, a simple cache property like "it is always the case that if the present input vector has previously been seen in the last 100 steps, a cache hit is reported" can be specified with an LTL formula that is linear in the size of the input vector, but the construction of a deterministic monitor automaton would yield an intractable number of states, because every possible combination of the vectors needs a separate state (cf. [1]). In the IEEE standard PSL [2], which is based on LTL, these considerations have led to the recommendation that only a restricted sublogic, the so-called
This work was partly supported by the German Research Foundation (DFG) as part of the Transregional Collaborative Research Center “Automatic Verification and Analysis of Complex Systems” (SFB/TR 14 AVACS).
S. Bensalem and D. Peled (Eds.): RV 2009, LNCS 5779, pp. 60–75, 2009. c Springer-Verlag Berlin Heidelberg 2009
simple subset, is to be used in runtime verification (cf. [3]). The simple subset restricts the use of disjunctions in the specification. While the simple subset has been shown to lead to small monitoring circuits, the restriction is often unfortunate, especially when specifications are shared between model checking and runtime verification. Rather than stating, for example, that a disjunction of temporal output patterns is safe, the simple subset requires that every output pattern be described as a deterministic consequence of a specific input pattern.

From an automata-theoretic standpoint, the temporal formulas in the simple subset correspond to universal automata, where the transitions relate states to conjunctions of successor states. There is a linear translation from temporal formulas in the simple subset to universal automata, and universal automata can be implemented with a linear number of flip-flops. For unrestricted formulas, on the other hand, a direct translation results in an alternating automaton, whose transitions have both conjunctions and disjunctions. It is the translation from alternating to universal automata that causes the exponential blow-up. However, it is well known that the membership problem for alternating automata can be solved directly, without a translation to universal automata and in linear time, as long as the relevant part of the input word is available in reverse order. Rather than evaluating in a forward manner, which corresponds to determinization, the automaton is simply evaluated backward, like a combinatorial circuit. The question arises whether, by using this alternative membership test, one can avoid the exponential blow-up in the size of the monitoring circuit. Is the restriction to the simple subset in fact unnecessary?

In this paper, we present a new monitoring approach for general temporal specifications that avoids the translation to universal automata when possible.
For example, the truth value of the cache specification at some position i is determined by the observations at positions i, i + 1, . . . , i + 99. The specification can therefore be evaluated by unrolling the alternating automaton over 100 steps, avoiding the exponential increase in the size of the circuit. To make this idea precise, we define, for each subformula of a specification, its temporal horizon, which indicates a future point in time by which the value of the subformula for the present position is guaranteed to be determined. Subformulas with finite horizons define languages that are finite themselves. The study of events characterized by finite languages goes back to Kleene’s definite events [4] and the locally testable events of McNaughton and Papert [5]. In the terminology of McNaughton and Papert, a set E of words is called a locally testable event in the strict sense if there exists a finite language L, such that all subwords of each word in E have a prefix in L. McNaughton and Papert construct an automaton that maintains an input buffer that is large enough to capture the largest words in L. In each step, a combinatorial circuit checks if the pipeline content belongs to L. In our setting, the languages recognizable by such an automaton correspond to LTL formulas of the form G φ, where φ contains only bounded future operators. In this paper, we extend this idea to allow the bounded subformulas to occur within general temporal formulas. For each subformula with finite horizon
Fig. 1. Overview of the monitor construction
t, we introduce a pipeline and a combinatorial circuit that computes, online as new elements enter the pipeline, a Boolean value that corresponds to the truth of the formula from the perspective of t steps in the past. From the delayed truth values of the subformulas we extrapolate the current truth value of the formula. This is possible because the truncated-path semantics [6] (as used, for example, in PSL) provides default values for subformulas that refer to the future beyond the current cut-off point. The truncated-path semantics distinguishes between strong and weak subformulas: for example, the strong specification "X X p" is true only if the visible trace is at least two positions long and p holds in the second position. Negation flips between the strong and weak interpretation. Given a pipeline that contains the delayed truth values of the subformulas, we can therefore construct an extrapolation circuit that applies, at each position, the truncated-path semantics instantly to the entire path suffix stored in the pipeline.

Figure 1 gives an overview of our construction. We say a subformula is bounded if its horizon is finite, and unbounded otherwise. We call the part of the monitor that deals with the pipeline storage and the evaluation of the bounded formulas the suffix transducer S: for some infinite trace σ, the suffix transducer evaluates the suffix of σ, from the delayed position in the trace onward, to derive the truth values of the bounded-future formulas. Correspondingly, the part of the monitor that deals with unbounded formulas is called the prefix transducer P: the prefix transducer evaluates the prefix of σ, up to the currently observed position i, to derive the truth value of the complete specification. The prefix transducer is based on a universal automaton U(ϕ), which checks whether a given prefix of the trace satisfies ϕ.
The extrapolation function, denoted by μP in Figure 1, evaluates the part of the trace that is currently stored in the pipeline, i.e., the difference between the delayed position considered by U(ϕ) and the present position i.
The resulting circuit has the following properties. If the specification is (1) simple, (2) bounded, or (3) a combination thereof (a formula that is simple except for subformulas that are not simple but bounded), the circuit is polynomial in the specification. If the specification is (4) neither simple nor bounded, then the circuit is exponential in the size of the specification after removing all bounded subformulas. While the possibility of an exponential blow-up is thus not excluded, it is our experience that even case (4) rarely leads to a blow-up in practice. Specifications that are neither simple nor bounded mostly occur when the correct behavior is specified in terms of a correlation of different events, such as "G((A or B) U (C or D))", where the events A, B, C, and D are specified by bounded formulas expressing certain finite input or output patterns. Once the bounded subformulas have been removed, the specification becomes very small, and the resulting monitoring circuit typically fits easily on an FPGA board.

Related Work. Monitoring LTL is a key problem in runtime verification (cf. [7,8,9,10,11]). The two most prominent tools for the synthesis of monitor circuits from the simple subset of PSL are FoCs [12], developed at IBM Haifa, and MBAC by Boulé and Zilic [13]. For unrestricted temporal logic, an automata-theoretic construction (based on determinization) is due to Armoni et al. [14]. Our prefix transducer is inspired by this construction. More generally, the problem of translating LTL and logics based on LTL to automata occurs in both runtime verification and model checking. Constructions aimed at model checking (cf. [15,16,17,18]) are, however, not immediately applicable to runtime verification. First, such constructions typically only produce nondeterministic automata, rather than deterministic monitors. Hence, a further exponential determinization step is required to obtain a monitor.
Second, these constructions typically produce automata over infinite words rather than automata or transducers over finite words. Our approach is based on the truncated-path semantics [6] used in PSL. The truncated-path semantics differs from the bad-prefix semantics used in several monitoring approaches (cf. [8,19,20]), where a finite-word automaton is constructed that recognizes the "bad prefixes" of the language of an infinite-word automaton, i.e., the set of prefixes that cannot be extended to accepted infinite words [1]. In the truncated-path semantics, strong specifications may be violated on a prefix even if a satisfying extension exists. Locally testable events were introduced in [21] and [5] and have been studied broadly in the literature (see, e.g., [22]). In [23], Kupferman, Lustig, and Vardi point out the particular relevance of locally testable events in the strict sense (as introduced in [5]), which they call locally checkable properties. They emphasize the low memory footprint of monitors for locally checkable properties, since their size depends only on the number of variables and the length of the pipeline.

The key contribution of this paper is to exploit the local testability of bounded subformulas that occur within general temporal properties by the introduction
of a pipeline into the monitoring circuit. Because bounded subformulas are evaluated directly, based on the pipeline content, rather than folded into the determinization of the prefix transducer, the resulting circuit can be exponentially smaller than the circuits constructed by previous approaches.
2 Temporal Specifications
Our approach is based on LTL with a bounded and an unbounded version of the temporal operators.¹

Definition 1 (Syntax). Given a set of atomic propositions AP, let ϕ1 and ϕ2 be temporal formulas, and let i, j ∈ N ∪ {∞}. Then the following are temporal formulas over AP:

p (for all p ∈ AP),    ¬ϕ1,    ϕ1 ∧ ϕ2,    ϕ1 U(i,j) ϕ2.
The main operator of the logic is the Until operator ϕ1 U(l,u) ϕ2, which we use in its parameterized form, where l, u ∈ N ∪ {∞} indicate a lower and an upper bound, respectively, of the interval within which ϕ2 must hold. As usual, the Until operator subsumes the Next, Eventually, and Always operators:

X ϕ ≡ true U(1,1) ϕ,    F ϕ ≡ true U(0,∞) ϕ,    G ϕ ≡ ¬ F ¬ϕ.
We call a formula simple if the operand of every negation and the right-hand operand of every Until is a Boolean expression over AP. The size |ϕ| of a formula ϕ is the number of subformulas plus, for parameterized subformulas, the sum of all constants.

We use a truncated semantics [6], defined over finite words from the alphabet 2^AP. We denote the length of a finite or infinite word σ by |σ|, where the empty word ε has length |ε| = 0, a finite word σ = σ(0), σ(1), σ(2), ..., σ(n − 1) has length |σ| = n, and an infinite word σ = σ(0), σ(1), σ(2), ... has length |σ| = ∞. For a finite or infinite word σ and i < j ≤ |σ|, σ(i,j) = σ(i), σ(i + 1), ..., σ(j) denotes the subword of length j − i + 1 starting at index i, and σ(i,...) = σ(i), σ(i + 1), ... denotes the suffix of σ starting at index i.

The truncated semantics is defined with respect to a context indicating either weak or strong strength. We use σ |=s ϕ to denote that σ satisfies formula ϕ strongly, and σ |=w ϕ to denote that σ satisfies ϕ weakly. We say σ satisfies ϕ, denoted by σ |= ϕ, iff σ satisfies ϕ strongly. Negation switches between the weak and strong contexts:
¹ Our implementation is based on the Property Specification Language PSL, defined in the IEEE standard 1850 [2]. PSL is a rich logic defined on top of the hardware description languages VHDL and Verilog, which combines temporal operators with extended regular expressions. It is straightforward to extend the approach presented in this paper with standard constructions for SEREs etc. (cf. [16]).
Definition 2 (Semantics). A finite word σ over AP satisfies a temporal formula ϕ, denoted by σ |= ϕ, iff σ |=s ϕ, where |=s and |=w are defined as follows:

σ |=s p iff |σ| > 0 and p ∈ σ(0),
σ |=s ¬ϕ iff not σ |=w ϕ,
σ |=s ϕ1 ∧ ϕ2 iff σ |=s ϕ1 and σ |=s ϕ2,
σ |=s ϕ1 U(l,u) ϕ2 iff there is an i such that l ≤ i ≤ u and i < |σ|, and σ(i,...) |=s ϕ2, and σ(j,...) |=s ϕ1 for all l ≤ j < i,

σ |=w p iff |σ| = 0 or p ∈ σ(0),
σ |=w ¬ϕ iff not σ |=s ϕ,
σ |=w ϕ1 ∧ ϕ2 iff σ |=w ϕ1 and σ |=w ϕ2,
σ |=w ϕ1 U(l,u) ϕ2 iff, for u′ = min{u, |σ|}, there is an i such that l ≤ i ≤ u′ and σ(i,...) |=w ϕ2 and σ(j,...) |=w ϕ1 for all l ≤ j < i, or σ(k,...) |=w ϕ1 for all l ≤ k ≤ u′,

where p ∈ AP and ϕ1 and ϕ2 are temporal formulas.
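The two satisfaction relations of Definition 2 can be sketched directly as a recursive evaluator. The tuple encoding of formulas below is ours, not from the paper: ('true',), ('ap', p), ('not', f), ('and', f, g), ('until', l, u, f, g); a word is a list of sets of propositions, and we read the strong bound as "l ≤ i ≤ u and i < |σ|".

```python
from math import inf

def sat(word, f, ctx='s'):
    """Decide word |=ctx f for ctx in {'s', 'w'} by structural recursion."""
    op = f[0]
    if op == 'true':
        return True
    if op == 'ap':
        if ctx == 's':
            return len(word) > 0 and f[1] in word[0]
        return len(word) == 0 or f[1] in word[0]
    if op == 'not':                        # negation flips the context
        return not sat(word, f[1], 'w' if ctx == 's' else 's')
    if op == 'and':
        return sat(word, f[1], ctx) and sat(word, f[2], ctx)
    if op == 'until':
        _, l, u, f1, f2 = f
        if ctx == 's':                     # witness must lie inside the word
            return any(sat(word[i:], f2, 's')
                       and all(sat(word[j:], f1, 's') for j in range(l, i))
                       for i in range(l, len(word)) if i <= u)
        u2 = min(u, len(word))             # truncate the bound at the cut-off
        return any(sat(word[i:], f2, 'w')
                   and all(sat(word[j:], f1, 'w') for j in range(l, i))
                   for i in range(l, u2 + 1)) \
            or all(sat(word[k:], f1, 'w') for k in range(l, u2 + 1))
    raise ValueError(op)
```

For instance, the strong "X b" (i.e., true U(1,1) b) fails on a one-letter word, while its weak reading succeeds there, since the witness may lie beyond the truncation point.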
3 Monitoring Temporal Specifications
Monitoring a specification ϕ means to decide, for each prefix of a (possibly infinite) word over 2^AP, whether the prefix satisfies ϕ.

Definition 3 (The Monitoring Problem). Given a temporal formula ϕ over a set of atomic propositions AP and a word σ over 2^AP, the monitoring problem consists of constructing a word σ′ over 2^{ϕ} such that ϕ ∈ σ′(i) iff σ(0,i) |= ϕ.

A characteristic of the monitoring problem is that, since the length of the trace σ may grow beyond any bound, the space complexity of any reasonable solution must be constant in |σ|. This entails that the problem should be solved online, i.e., by reading new observations as they become available.

We now give an overview of our monitoring approach. As shown in Figure 1, our construction is split into two parts: the suffix transducer S, which evaluates the bounded subformulas on the suffix of the trace, and the prefix transducer P, which evaluates the complete specification on the prefix that has been seen so far. To formally describe the interface between the two transducers, we need a few auxiliary definitions.

Let ϕ be a temporal formula. The set of strong subformulas Sub^s(ϕ) contains all subformulas that occur in the scope of an even number of negations (including 0). The set of weak subformulas Sub^w(ϕ) contains all subformulas that occur in the scope of an odd number of negations. The set of subformulas is the union Sub(ϕ) = Sub^s(ϕ) ∪ Sub^w(ϕ).
For each temporal formula ϕ, we define the horizon of ϕ as the number of steps into the future that the truth value of the formula may depend on, i.e.,

h(p) = 0,
h(¬ϕ) = h(ϕ),
h(ϕ1 ∧ ϕ2) = max{h(ϕ1), h(ϕ2)},
h(ϕ1 U(l,u) ϕ2) = max{u − 1 + h(ϕ1), u + h(ϕ2)},

where p ∈ AP and ϕ1 and ϕ2 are temporal formulas. A temporal formula ϕ is called unbounded if h(ϕ) = ∞; otherwise, ϕ is called bounded.

A formula ψ ∈ Sub^s(ϕ) or ψ ∈ Sub^w(ϕ) is a maximal bounded (strong or weak, respectively) subformula of ϕ if it is bounded and has a (strong or weak, respectively) occurrence that is not within another bounded subformula. We call the sets Γ^s ⊆ Sub^s(ϕ) and Γ^w ⊆ Sub^w(ϕ) of maximal bounded (strong and weak) subformulas the separation formulas of ϕ. Let Γ = Γ^s ∪ Γ^w. The maximal horizon of the formulas in Γ is called the separation horizon H.

The separation formulas form the interface between the prefix and suffix transducers. Reading an input word σ over 2^AP, the suffix transducer computes, for each separation formula γ ∈ Γ^c (where c ∈ {s, w}), each position i, and each offset j ≤ H, the value of the additional proposition ⟨γ, j, c⟩, such that ⟨γ, j, c⟩ is true iff the truncated suffix σ(i−H+j, i) satisfies γ (strongly or weakly, depending on c). Reading an input word over 2^{AP′}, where

AP′ = {⟨γ, j, s⟩ | γ ∈ Γ^s, 0 ≤ j ≤ H} ∪ {⟨γ, j, w⟩ | γ ∈ Γ^w, 0 ≤ j ≤ H},

the prefix transducer then treats the separation formulas as atomic propositions.

Example 1. Consider the temporal formula ¬(true U(0,∞) ((¬(a U(0,1) b)) ∧ (true U(0,∞) b))). The maximal bounded subformulas are ¬(a U(0,1) b) and b, where h(¬(a U(0,1) b)) = 1 and h(b) = 0. Hence, H = 1. Subformula ¬(a U(0,1) b) is weak; b occurs both as a strong and as a weak subformula, but only as a maximal weak subformula. Reading an input word over AP = {a, b}, the suffix transducer produces an output word over AP′ = {⟨¬(a U(0,1) b), 0, w⟩, ⟨¬(a U(0,1) b), 1, w⟩, ⟨b, 0, w⟩, ⟨b, 1, w⟩}.

The overall monitoring problem is solved by the functional composition of the suffix and prefix transducers. The resulting transducer is implemented in hardware through a linear translation to a circuit built from flip-flops and Boolean gates.
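The horizon equations above translate into a one-line-per-case recursion. As before, the tuple encoding of formulas is ours, and 'true' is treated as an atomic formula with horizon 0; Python's float infinity stands in for ∞.

```python
from math import inf

def horizon(f):
    """h(phi): how many steps into the future the truth value may depend on."""
    op = f[0]
    if op in ('ap', 'true'):
        return 0
    if op == 'not':
        return horizon(f[1])
    if op == 'and':
        return max(horizon(f[1]), horizon(f[2]))
    if op == 'until':
        _, l, u, f1, f2 = f
        return max(u - 1 + horizon(f1), u + horizon(f2))
    raise ValueError(op)
```

On the formulas of Example 1, this yields h(¬(a U(0,1) b)) = 1 and h(b) = 0, while any U(0,∞) makes the horizon infinite.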
In the following sections we describe the construction of the prefix and suffix transducers and the translation to the circuit in more detail.
4 Automata and Transducers

4.1 Alternating and Universal Automata
While our constructions are based on automata transformations, our target is a circuit that monitors the given specification. For this reason we define automata in a symbolic setting that facilitates the eventual translation to a circuit: rather
than referring to an explicit alphabet, our automata are defined over the set AP of atomic propositions. We use ±AP to denote the set {a, ¬a | a ∈ AP} of literals.

An alternating automaton on finite words over a set AP of atomic propositions is a tuple A = (Q, q0, F, δ), where Q is a finite set of states, q0 ∈ Q is the initial state, F ⊆ Q is a subset of final states, and δ : Q → B+(Q ∪ ±AP) is the transition condition, where B+(X) denotes the set of positive Boolean expressions over X, i.e., the formulas built from elements of X using ∨, ∧, true, and false. An alternating automaton A is called universal if δ(q) can be written as a conjunction where each conjunct is an element of B+(±AP ∪ {q′}) for some q′ ∈ Q.

The direction of evaluation in an automaton is backward. A run of A on a finite input word σ is a Q-labeled tree such that (1) all nodes at level |σ| (i.e., all nodes where the path from the root has length |σ| + 1) are childless and are labeled with states in F; (2) the root is labeled with q0; and (3) the following condition holds for every node n on some level i = 0, ..., |σ| − 1: let n be labeled with state q; then the set S, consisting of the states labeling the children of n and the elements of σ(i), satisfies δ(q), i.e., replacing every state or atomic proposition in δ(q) with true if it is an element of S and with false if it is not results in a Boolean expression equivalent to true. The set of words that are accepted by A is called the language of A, denoted by L(A).

Corresponding to an evaluation in a strong or weak context, we translate a temporal formula ϕ into one of two alternating automata A^s(ϕ) or A^w(ϕ): automaton A^s(ϕ) accepts a finite word σ iff σ satisfies ϕ strongly; analogously, A^w(ϕ) accepts σ iff σ satisfies ϕ weakly. As detailed in the following theorem, the translation is a simple linear-time induction:

Theorem 1.
For each temporal formula ϕ over AP there are two alternating automata A^s(ϕ) and A^w(ϕ) over AP such that, for every finite word σ,

σ |=s ϕ iff σ ∈ L(A^s(ϕ))    and    σ |=w ϕ iff σ ∈ L(A^w(ϕ)).

The sizes of A^s(ϕ) and A^w(ϕ) are linear in the size of ϕ. If ϕ is simple, then A^s(ϕ) and A^w(ϕ) are universal.

Since the context of a temporal formula is, by default, strong, we define the alternating automaton associated with a formula ϕ as A(ϕ) = A^s(ϕ).

Example 2. Consider the temporal formula ϕ = F a ∨ G b, which is equivalent to true U(0,∞) a ∨ ¬(true U(0,∞) ¬b). The alternating automaton A(ϕ) = ({s0, s1, s2}, s0, F = {s2}, δ), with δ : s0 → (s1 ∨ a) ∨ (s2 ∧ b), s1 → s1 ∨ a, and s2 → s2 ∧ b, has three states s0, s1, s2, where s0 corresponds to ϕ, s1 corresponds to F a, and s2 corresponds to G b.

Every alternating automaton can be translated into an equivalent universal automaton by a simple subset construction.

Theorem 2. For each alternating automaton A there exists a universal automaton U such that L(A) = L(U). The size of U is exponential in the size of A.
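The backward membership test mentioned in the introduction can be sketched on the automaton of Example 2. The Python encoding is ours: each transition condition becomes a predicate over the current letter and the values of the successor states, and the word is evaluated back to front, like a combinatorial circuit; at the last level, states take their F-membership as value.

```python
# Automaton of Example 2 for phi = F a  or  G b.
F = {'s2'}  # final states
delta = {   # transition conditions over (input letter, next-state values)
    's0': lambda a, v: (v['s1'] or 'a' in a) or (v['s2'] and 'b' in a),
    's1': lambda a, v: v['s1'] or 'a' in a,
    's2': lambda a, v: v['s2'] and 'b' in a,
}

def accepts(word, initial='s0'):
    """Linear-time backward evaluation of the alternating automaton on word
    (a list of sets of propositions)."""
    vals = {q: q in F for q in delta}        # level |word|: final states
    for letter in reversed(word):            # propagate values to the front
        vals = {q: delta[q](letter, vals) for q in delta}
    return vals[initial]
```

Note that each input position is touched once, so the evaluation is linear in |word| times the size of the automaton, with no determinization.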
4.2 Transducers
Automata evaluate words in a backward manner: the transition expression δ(q) is a Boolean expression over the input and the successor states. We now change the direction of the evaluation. In order to evaluate a word in forward direction, a state machine is equipped with a next-state function τ, which defines for each state q a Boolean expression over the input and the predecessor states.

A state machine over a set AP of atomic propositions is a tuple M = (Q, Q0, τ), where Q is a set of states, Q0 ⊆ Q is a subset of initial states, and τ : Q → B+(Q ∪ ±AP) is the next-state function. The motivation for this definition is that we wish to simulate universal automata in hardware, by representing each state as a flip-flop. The states of the state machine can thus be seen as the states of a universal automaton, and sets of states as the states of an implicit determinization.

For an input word σ = s1, s2, ..., the state machine defines a run R0, R1, ..., where each Ri is a set of states. The run starts with the set of initial states R0 = Q0, and for all i > 0, the set Ri includes all states whose next-state function (with true substituted for all states in Ri−1 and false substituted for all states not in Ri−1) is satisfied, i.e.,

q ∈ Ri iff si |= τ(q) [q′ → true for q′ ∈ Ri−1 and q′ → false for q′ ∉ Ri−1].
For a given universal automaton U = (Q, q0, F, δ), we define the state machine M = (Q, Q0, τ) that simulates U: the next-state function τ is chosen to provide precisely those successor states that are needed to satisfy the transition function δ:

– Q0 = {q0};
– τ(q′) = ⋁ { q | δ(q) = · · · ∧ q′ ∧ · · · }, i.e., the disjunction of all states q whose transition condition contains q′ as a conjunct.

Finally, we define transducers, which are state machines that are additionally equipped with an output function. Let AP = AP_I ∪ AP_O be a set of atomic propositions that is partitioned into a set AP_I of input propositions and a set AP_O of output propositions. A transducer T = (Q, Q0, τ, {ϑp}_{p∈AP_O}) over AP is a state machine over AP_I with an output function ϑp : Q → B+(AP_I ∪ Q) for each p ∈ AP_O. For an input word σ over 2^{AP_I}, the run R0, R1, ... of the transducer is the run of the state machine. The transducer additionally defines an output word σ′ over 2^{AP_O}, where, for all i ≥ 0 and all p ∈ AP_O, p ∈ σ′(i) iff σ(i) |= ⋀_{q∈Ri} ϑp(q).
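The forward run R0, R1, ... can be sketched as a small simulator. The example machine below is ours, not from the paper: it tracks the property "every a is immediately followed by b", with a 'pending' state that is active right after an a and an absorbing 'fail' state.

```python
def run_state_machine(tau, q0, word):
    """Return the run R0, R1, ... of a state machine (Q, Q0, tau) on word.
    Each next-state condition is a predicate over (letter, previous set)."""
    rs = [set(q0)]
    for letter in word:
        prev = rs[-1]
        rs.append({q for q, cond in tau.items() if cond(letter, prev)})
    return rs

# Hypothetical example machine: 'pending' is active right after an 'a';
# 'fail' becomes (and stays) active when a pending 'a' is not followed by 'b'.
tau = {
    'pending': lambda letter, prev: 'a' in letter,
    'fail': lambda letter, prev: 'fail' in prev
            or ('pending' in prev and 'b' not in letter),
}
```

Each state corresponds to one flip-flop in the intended hardware implementation; the set Ri is exactly the vector of flip-flop values after the ith clock cycle.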
5 The Suffix Transducer
We start by translating the specification into automata, using Theorems 1 and 2. Let ϕ be a temporal formula and let A(ϕ), U(ϕ), and M(ϕ) be the alternating automaton, the universal automaton, and the state machine, respectively, that are defined by ϕ. When the transducer reads position i, it produces the truth values for all positions from i − H to the cut-off position i. For this purpose, the suffix transducer contains a pipeline, which stores, for each atomic proposition p, H copies
p0, p1, ..., pH−1, where pj indicates the truth value of p at position i − H + j. Since pH is the value of p at the currently available position i, there is no need to store pH in the pipeline.

Let π ⊆ ⋃_{p∈AP} {p0, p1, ..., pH−1} denote the pipeline content, and let A^s(γ) = (Q^s, q0^s, F^s, δ^s) and A^w(γ) = (Q^w, q0^w, F^w, δ^w) be the alternating automata for formula γ in strong and weak context, respectively. We define, for each state q ∈ Q^c and each offset j ∈ {0, ..., H}, Boolean expressions λ^s(π, q, j) and λ^w(π, q, j) that indicate whether the strong and the weak automaton, respectively, starting in state q, accepts the word represented by the pipeline content starting from position j. For c ∈ {s, w}:

λ^c(π, q, H) = δ^c(q) [q′ → (true if q′ ∈ F^c, false otherwise), q′ ∈ Q^c];
λ^c(π, q, j) = δ^c(q) [q′ → λ^c(π, q′, j + 1), q′ ∈ Q^c] [p → π(pj), p ∈ AP], for j < H.

The truth value of the atomic proposition ⟨γ, j, c⟩ in AP′ is then defined by the Boolean expression μ^c(π, γ, j), where μ^c(π, γ, j) = λ^c(π, q0^c, j).

Example 3. The weak subformula ψ = ¬(a U(0,1) b) from Example 1 can be translated into the alternating automaton A^w(ψ) = ({s0, s1}, s0, F = {s0, s1}, δ : s0 → (¬a ∨ s1) ∧ ¬b; s1 → ¬b). Since H = 1, the pipeline stores the values of a and b for one step (as a0 and b0). We obtain μ^w(π, ψ, 0) = (¬π(a0) ∨ ¬b) ∧ ¬π(b0).

We construct the suffix transducer T(ϕ):

Theorem 3. For each temporal formula ϕ with separation formulas Γ^s, Γ^w, there exists a transducer T(ϕ) with input propositions AP and output propositions AP′, such that the following holds for each ⟨γ, j, c⟩ ∈ AP′, j ∈ {0, ..., H}, i ≥ H − j, and each input word σ and output word O0, O1, ...:
iff
σ(i − H + j, i) |=c γ.
The set of states is formed by the possible pipeline contents. The transition function shifts the contents of the pipeline by one position and adds the new observation. The output interprets each atomic proposition γ, j, c in AP as μc (π, γ, j).
6 The Prefix Transducer
The prefix transducer computes the truth value of the specification ϕ based on the extended trace provided by the suffix transducer. For this purpose, the separation formulas in the specification are replaced by atomic propositions. To ensure that the substitution respects the context, we introduce, in addition to the standard substitution operator ϕ[ψ → ψ′], which replaces every occurrence of ψ
70
B. Finkbeiner and L. Kuhtz
in ϕ with ψ′, a strong and a weak version: in the strong substitution ϕ[ψ → ψ′]s, all occurrences of ψ that are in the scope of an even number of negations are replaced by ψ′; in the weak substitution ϕ[ψ → ψ′]w, all occurrences of ψ that are in the scope of an odd number of negations are replaced by ψ′. We generalize the substitution operators to sets of replacement pairs in the obvious way. Let ϕ be a temporal formula. The prefix transducer is based on a simplified prefix formula ϕp, where we replace every separation formula with a proposition from Γs × {s} ∪ Γw × {w}, i.e., with a proposition indicating the separation formula together with the strong or weak context:

ϕp = ϕ[γ → ⟨γ, s⟩ | γ ∈ Γs][γ → ⟨γ, w⟩ | γ ∈ Γw].

Example 4. Consider again the specification from Example 1: ϕ = ¬(true U(0,∞) ((¬(a U(0,1) b)) ∧ (true U(0,∞) b))). Hence, ϕp = ¬(true U(0,∞) (⟨¬(a U(0,1) b), w⟩ ∧ (true U(0,∞) ⟨b, w⟩))).
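The polarity-sensitive substitution can be sketched on a small formula AST. The following Python sketch is our own illustration, not the paper's implementation: formulas are nested tuples, and the `parity` parameter selects the strong (even number of negations) or weak (odd) variant.

```python
# Sketch (not the paper's code): strong substitution replaces psi only under
# an even number of negations; weak substitution only under an odd number.
# Formulas are nested tuples such as ('and', ('not', 'p'), 'p').

def subst(phi, psi, repl, parity, neg=0):
    """parity=0 gives the strong substitution, parity=1 the weak one."""
    if phi == psi and neg % 2 == parity:
        return repl
    if isinstance(phi, tuple):
        if phi[0] == 'not':
            return ('not', subst(phi[1], psi, repl, parity, neg + 1))
        return (phi[0],) + tuple(subst(a, psi, repl, parity, neg) for a in phi[1:])
    return phi

phi = ('and', ('not', 'p'), 'p')
strong = subst(phi, 'p', 'q', parity=0)  # replaces only the un-negated occurrence
weak = subst(phi, 'p', 'q', parity=1)    # replaces only the negated occurrence
```

Applying both variants to the same formula never touches the same occurrence twice, which is why generalizing to sets of replacement pairs is unproblematic.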
The idea for the construction of the prefix transducer is to check for the existence of a run of the universal automaton U(ϕp) on the prefix up to position i. Intuitively, the prefix is split into two parts. The first part, up to position (i − H), is handled by the state machine M(ϕp), which we run with a delay of H steps. In the transition function of the state machine, we therefore replace every proposition ⟨γ, c⟩ with the proposition ⟨γ, 0, c⟩ delivered by the suffix automaton. The second part, from position (i − H) to position i, is handled by the output function of the transducer. For this purpose, we unroll the transition function of U(ϕp) for H steps, and accordingly replace, in the jth unrolling, the proposition ⟨γ, c⟩ with the proposition ⟨γ, j, c⟩ provided by the suffix automaton. Let U(ϕp) = (Q, q0, δ, F). We define inductively:

ν(q, H) = δ(q) [⟨γ, c⟩ → ⟨γ, H, c⟩ | γ ∈ Γ, c ∈ {s, w}] [q′ → true if q′ ∈ F, false otherwise | q′ ∈ Q];
ν(q, j) = δ(q) [⟨γ, c⟩ → ⟨γ, j, c⟩ | γ ∈ Γ, c ∈ {s, w}] [q′ → ν(q′, j + 1) | q′ ∈ Q]
for j < H.
Suppose the state machine has computed the state set S when reaching its delayed position (i − H). Then this partial run can be completed into an accepting run on the full prefix iff ν(q, 0) is true for all states q ∈ S. The prefix transducer P with input propositions AP′ and output propositions APO is obtained from the state machine M(ϕp) by encoding the delay of H steps. For this purpose, the transducer starts by counting H steps. In the ith step the output is {ϕ} if ν(q, H − i) is true for all initial states q of M(ϕp). Then it proceeds with the initial states of M(ϕp). The output is {ϕ} if ν(q, 0) is true for all active states q.

Theorem 4. For each temporal formula ϕ with separation formulas Γs, Γw, there exists a transducer P(ϕ) with input propositions AP′ and output propositions APO = {ϕ}, such that for all words σ over 2AP, σ′ over 2AP′, and σ′′ over
Monitor Circuits for LTL with Bounded and Unbounded Future
71
2APO, if T(ϕ) produces output σ′ reading input σ, and P(ϕ) produces output σ′′ reading input σ′, then for all i ≤ |σ|, ϕ ∈ σ′′(i) iff σ(0, i) |= ϕ.
7
The Monitor Circuit
As shown in Figure 1, the monitor circuit is built from four main components: the pipeline circuit for the suffix transducer S, the output function of the suffix transducer, the state machine of the prefix transducer P, and the output function of the prefix transducer. The circuits for the pipeline and the prefix state machine maintain their internal state via D flip-flops, interconnected via Boolean circuits that implement the next-state function. The circuits for the output functions are pure Boolean circuits without internal state.

The pipeline circuit. The states of the suffix transducer S are defined by the pipeline that buffers the truth values of the atomic propositions. For each atomic proposition p ∈ AP and each offset j, 0 ≤ j < H, the pipeline contains a D flip-flop fp,j. The input to fp,H−1 is the current input signal for p. The output of fp,j is connected to the input of fp,j−1, thus shifting the values of p in each clock cycle by one position.

The state machine of the prefix transducer. Each state q of the state machine of the prefix transducer P is implemented by a D flip-flop. The next-state function is translated into Boolean circuits that are connected to the outputs of the flip-flops representing the states and the output gates of the circuit for the output functions of the suffix transducer S.

The output functions of the transducers. The output functions of the transducers are implemented in hardware as pure Boolean circuits. The input gates of the circuit for the output function of the suffix transducer S are connected to the output of the flip-flops of the state machine of the suffix transducer S and with the signals of the atomic propositions in AP. The input gates of the output function of the prefix transducer P are connected to the outputs of the flip-flops for the state machine of P and the output gates of the output function of S. Its single output gate represents the output of the monitor for ϕ on the prefix of the current input.
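The clocked shifting of the pipeline circuit described above can be mimicked in software. The following Python sketch is our own illustration (the class name and Boolean-list encoding of the flip-flops are assumptions, not the paper's code):

```python
# Software sketch of the pipeline circuit: for each atomic proposition p,
# H registers emulate the D flip-flops f_{p,0}, ..., f_{p,H-1}.

class Pipeline:
    def __init__(self, props, H):
        self.H = H
        self.ff = {p: [False] * H for p in props}  # ff[p][j] ~ value of p at i-H+j

    def clock(self, inputs):
        # Shift every register down one position; the newest observation
        # enters at index H-1 (the input of f_{p,H-1}).
        for p, regs in self.ff.items():
            for j in range(self.H - 1):
                regs[j] = regs[j + 1]
            regs[self.H - 1] = inputs[p]

pl = Pipeline(['a', 'b'], H=2)
pl.clock({'a': True, 'b': False})
pl.clock({'a': False, 'b': True})
# The older observation of each proposition now sits at offset 0.
```

In hardware the shift is just the wiring between adjacent flip-flops; this loop makes the same data flow explicit.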
This implementation of the monitor circuit is well-suited for reprogrammable hardware such as FPGAs. The actual translation of the Boolean functions into the specific hardware can be realized by standard tools for the computer-aided design of digital circuits.

The size of the circuits. The size of the pipeline circuit is linear in H · |ϕ|. The output circuit of S consists of sub-circuits for each separation formula and each position within the delayed fragment of the input trace. Each of these sub-circuits is linear in the size of H and linear in the size of ϕ. Hence, the overall size for the output circuit is quadratic in H · |ϕ|. The size of the circuit for the state machine of P is linear in |U(ϕ)|, and hence linear in |ϕ| if ϕ is simple and exponential in |ϕ| otherwise. The size of the Boolean circuit that computes the output function of P is of the same order as the state machine of P multiplied by H.
Theorem 5. The number of gates of the monitoring circuit for a temporal specification ϕ is quadratic in H · |ϕ| if ϕ is simple except for bounded subformulas; otherwise, the number of gates is exponential in |ϕ|.
8
Experimental Results
Our implementation takes as input an LTL formula and produces synthesizable VHDL code for a circuit that monitors the input formula. The code is then passed to a synthesis tool for a specific hardware platform. In this section we report on experimental results obtained with our implementation in conjunction with the Xilinx Virtex-5 FPGA synthesis tool. Our benchmarks, shown in Figure 2, include Etessami and Holzmann's list of commonly used LTL specifications [24] (formulas 1–12, adapted to our setting by the introduction of parametric bounds), as well as a variation of the cache specification from the introduction (formulas cn). The formulas rn specify fair bounded response, a recurring pattern in many specifications.

Table 1 shows the results for the formulas from Figure 2. The number of signals and the number of flip-flops are with respect to the VHDL description of the monitor circuit. The MHz values are computed by the Xilinx Virtex-5 tool. The first two sections of the table compare, for formulas with bound 2, the performance of our construction (b = 2) with a direct approach (b = 2 direct), based on building a universal automaton without pipeline. Even very moderate bounds or a small number of nested Next modalities can render a direct universalization of the alternating automaton of the specification infeasible. As long as the bounds (or Next modalities) are properly nested within

1.  p U(0,3b) (q ∧ G(0,3b) r)
2.  p U (q ∧ X(r U(0,2b) s))
3.  p U (q ∧ X(r ∧ F(0,2b) (s ∧ X F(0,2b) (t ∧ X F(0,2b) (u ∧ X F(0,2b) v)))))
4.  F(0,3b) (p ∧ X G(0,2b) q)
5.  F(p ∧ X(q ∧ X(F(0,b) r)))
6.  F(q ∧ X(p U(0,2b) r))
7.  (F G(0,3b) q) ∨ (F G(0,3b) p)
8.  G(¬p ∨ (q U(0,b) r))
9.  F(p ∧ X F(0,b) (q ∧ X F(0,b) (r ∧ X F(0,b) s)))
10. (G F(0,2b) p) ∧ (G F(0,2b) q) ∧ (G F(0,2b) r) ∧ (G F(0,2b) s) ∧ (G F(0,2b) t)
11. (p U(0,2b) q U(0,2b) r) ∨ (q U(0,2b) r U(0,2b) p) ∨ (r U(0,2b) p U(0,2b) q)
12. G(¬p ∨ (q U(0,b) ((G(0,3b) r) ∨ (G(0,3b) s))))
cn = F(¬x ∧ ⋀_{0≤i≤n} (vi ↔ F(0,b) (vi ∧ x))) ∧ G(x → X G ¬x) ∧ F x
rn = G((⋀_{0≤i} (G F∀(0,3b) fi)) → (a → (F∀(0,b) b)))

Fig. 2. Benchmark formulas
Table 1. Experimental results

          b = 2               b = 2 direct          b = 40                b = ∞
     #signals #ffs MHz   #signals #ffs  MHz   #signals #ffs MHz   #signals #ffs MHz
 1.     165    7   358     5274   192   309     43876  159  169       81    6   468
 2.      86    9   442     1367    66   338       997  199  145      141   10   468
 3.     545   12   204        –     –     –     89339  316   75     5792   97   383
 4.     109    6   463     1705   128   336     19614  120  215       52    6   468
 5.      41    7   467      168    18   468       532  159  253       97   10   468
 6.      53    7   440      568    34   432       661  159  143       65    6   468
 7.      70    6   467        –     –     –      7299  120  203       70    6   468
 8.      39    8   467       52     6   468       344  160  191       47    5   468
 9.     114    8   357    10746   514   297     10893  198  123      418   24   468
10.     180   14   467      173    28   449      7224  242  184       95   12   468
11.     279    7   467        –     –     –     78547  159  467     1240   66   367
12.     134    9   373     3984   231   301     22888  199  132      193   11   468
c3      202   13   431     8702    88   258      3101  203  188    35273  263   256
c5      259   15   314    27232   176   336      4966  281  179        –    –     –
c7      316   17   295    63080   296   289      5821  359  173        –    –     –
c10     405   21   274        –     –     –      8042  515  169        –    –     –
r3      180   10   382        –     –     –      5555  238  241      308   11   468
r5      280   12   402        –     –     –      7607  316  228     9368   62   361
r10     475   17   367        –     –     –     12646  511  214        –    –     –
the unbounded operators, our construction circumvents the combinatorial blowup of the forward universalization and produces rather small monitors. Even for bounds up to several dozens or even hundreds of steps, our approach produces monitor circuits that fit on standard FPGAs. Bounds of this size are clearly far beyond what a forward universalization can handle with a reasonable amount of computational resources. Our results also provide evidence that introducing bounds into a given specification can be helpful in order to simplify the monitoring. The third and fourth sections of the table report on the performance for higher bounds (b = 40) and unbounded (b = ∞) formulas. For small specifications, bounds complicate the monitoring problem. For larger specifications, however, the combinatorial overhead introduced by the bounds, which is only polynomial, is outweighed by the exponential blow-up caused by the richer combinatorial structure of the specification.
9
Conclusions
We have presented a novel translation of temporal specifications to monitor circuits that saves exponential space in the size of bounded-future subformulas. The monitors are useful in hardware solutions for high-performance simulation and prototyping, in particular using reprogrammable hardware such as FPGAs. The new construction should allow synthesis tools for PSL to treat at least bounded specifications outside the simple subset. The free use of disjunctions in
74
B. Finkbeiner and L. Kuhtz
temporal specifications will make the specifications shorter and more general and will allow the user to use the same specification logic for static verification and runtime verification, without an error-prone rewrite into a restricted sublogic.
References

1. Kupferman, O., Vardi, M.: Model checking of safety properties. In: Halbwachs, N., Peled, D.A. (eds.) CAV 1999. LNCS, vol. 1633, pp. 172–183. Springer, Heidelberg (1999)
2. IEEE Std 1850-2007: Standard for Property Specification Language (PSL). IEEE, New York (2007)
3. Cohen, B., Venkataramanan, S., Kumari, A.: Using PSL/Sugar for Formal and Dynamic Verification. VhdlCohen Publishing, Los Angeles (2004)
4. Kleene, S.: Representation of events in nerve nets and finite automata. In: Automata Studies. Princeton University Press, Princeton (1956)
5. McNaughton, R., Papert, S.: Counter-Free Automata. Research Monograph, vol. 65. MIT Press, Cambridge (1971)
6. Eisner, C., Fisman, D., Havlicek, J., Lustig, Y., McIsaac, A., Campenhout, D.V.: Reasoning with temporal logic on truncated paths. In: Hunt Jr., W.A., Somenzi, F. (eds.) CAV 2003. LNCS, vol. 2725, pp. 27–39. Springer, Heidelberg (2003)
7. Finkbeiner, B., Sipma, H.B.: Checking finite traces using alternating automata. In: Havelund, K., Roşu, G. (eds.) Runtime Verification 2001. Electronic Notes in Theoretical Computer Science, vol. 55, pp. 44–60. Elsevier, Amsterdam (2001)
8. Geilen, M.: On the construction of monitors for temporal logic properties. Electr. Notes Theor. Comput. Sci. 55(2) (2001)
9. Giannakopoulou, D., Havelund, K.: Automata-based verification of temporal properties on running programs. In: ASE, pp. 412–416. IEEE Computer Society, Los Alamitos (2001)
10. Havelund, K., Rosu, G.: Monitoring programs using rewriting. In: ASE, pp. 135–143. IEEE Computer Society, Los Alamitos (2001)
11. Bauer, A., Leucker, M., Schallhart, C.: The good, the bad, and the ugly, but how ugly is ugly? In: Sokolsky, O., Taşıran, S. (eds.) RV 2007. LNCS, vol. 4839, pp. 126–138. Springer, Heidelberg (2007)
12. Dahan, A., Geist, D., Gluhovsky, L., Pidan, D., Shapir, G., Wolfsthal, Y., Benalycherif, L., Kamdem, R., Lahbib, Y.: Combining system level modeling with assertion based verification. In: ISQED 2005, pp. 310–315. IEEE Computer Society, Los Alamitos (2005)
13. Boule, M., Zilic, Z.: Automata-based assertion-checker synthesis of PSL properties. ACM Transactions on Design Automation of Electronic Systems (TODAES) 13(1) (2008)
14. Armoni, R., Korchemny, D., Tiemeyer, A., Vardi, M.Y., Zbar, Y.: Deterministic dynamic monitors for linear-time assertions. In: Havelund, K., Núñez, M., Roşu, G., Wolff, B. (eds.) FATES 2006 and RV 2006. LNCS, vol. 4262, pp. 163–177. Springer, Heidelberg (2006)
15. Ruah, S., Fisman, D., Ben-David, S.: Automata construction for on-the-fly model checking PSL safety simple subset. Technical report, IBM Research (2005)
16. Ben-David, S., Bloem, R., Fisman, D., Griesmayer, A., Pill, I., Ruah, S.: Automata construction algorithm optimized for PSL. Technical report, PROSYD (July 2005)
17. Pnueli, A., Zaks, A.: PSL model checking and run-time verification via testers. In: Misra, J., Nipkow, T., Sekerinski, E. (eds.) FM 2006. LNCS, vol. 4085, pp. 573–586. Springer, Heidelberg (2006)
18. Cimatti, A., Roveri, M., Semprini, S., Tonetta, S.: From PSL to NBA: a modular symbolic encoding. In: FMCAD 2006: Proceedings of Formal Methods in Computer Aided Design, Washington, DC, USA, pp. 125–133. IEEE Computer Society, Los Alamitos (2006)
19. Ben-David, S., Fisman, D., Ruah, S.: The safety simple subset. In: Ur, S., Bin, E., Wolfsthal, Y. (eds.) HVC 2005. LNCS, vol. 3875, pp. 14–29. Springer, Heidelberg (2006)
20. d'Amorim, M., Rosu, G.: Efficient monitoring of omega-languages. In: Etessami, K., Rajamani, S.K. (eds.) CAV 2005. LNCS, vol. 3576, pp. 364–378. Springer, Heidelberg (2005)
21. Brzozowski, J., Simon, I.: Characterizations of locally testable events. Discrete Math. 4, 243–271 (1973)
22. Wilke, T.: Locally threshold testable languages of infinite words. In: Enjalbert, P., Wagner, K.W., Finkel, A. (eds.) STACS 1993. LNCS, vol. 665, pp. 607–616. Springer, Heidelberg (1993)
23. Kupferman, O., Lustig, Y., Vardi, M.: On locally checkable properties. In: Hermann, M., Voronkov, A. (eds.) LPAR 2006. LNCS (LNAI), vol. 4246, pp. 302–316. Springer, Heidelberg (2006)
24. Etessami, K., Holzmann, G.: Optimizing Büchi automata. In: Palamidessi, C. (ed.) CONCUR 2000. LNCS, vol. 1877, pp. 153–167. Springer, Heidelberg (2000)
State Joining and Splitting for the Symbolic Execution of Binaries Trevor Hansen, Peter Schachte, and Harald Søndergaard Department of Computer Science and Software Engineering The University of Melbourne, Vic. 3010, Australia
Abstract. Symbolic execution can be used to explore the possible runtime states of a program. It makes use of a concept of “state” where a variable’s value has been replaced by an expression that gives the value as a function of program input. Additionally, a state can be equipped with a summary of control-flow history: a “path constraint” keeps track of the class of inputs that would have caused the same flow of control. But even simple programs can have trillions of paths, so a path-by-path analysis is impractical. We investigate a “state joining” approach to making symbolic execution more practical and describe the challenges of applying state joining to the analysis of unmodified Linux x86 executables. The results so far are mixed, with good results for some code. On other examples, state joining produces cumbersome constraints that are more expensive to solve than those generated by normal symbolic execution.
1
Introduction
Automated software verification aims to ensure that code behaves as intended. Research into the mechanisation of the task began more than 30 years ago, but waned as the difficulty of the task became clearer; in particular, scalability turned out to be elusive. But recent advances in constraint solving technology have rekindled optimism. At the same time the desire for reliable verification of software, in particular of binary code, is growing. We are currently developing a tool for the analysis of unmodified Linux x86 executables, for the purpose of identifying violated assertions. The solution requires dynamic disassembly of a given executable. A natural approach is to use symbolic execution on semi-random input to explore the space of possible runtime states. Symbolic execution derives formulae that precisely describe the output state of a program in terms of its inputs. Symbolic execution has been used to generate test cases [5], and to show the equivalence of programs [10]. In each case, the tremendous number of execution paths limits the size of programs that can be symbolically executed. Although relevant to other uses of symbolic execution, we are interested in state joining in the context of the verification of binary executables. Dealing with the path explosion is the key challenge in making symbolic execution useful for software verification.
S. Bensalem and D. Peled (Eds.): RV 2009, LNCS 5779, pp. 76–92, 2009.
© Springer-Verlag Berlin Heidelberg 2009
By combining the results of symbolic execution along all the program's paths, the behaviour of the program is described. Although the number of paths through a deterministic program is less than the number of its input states, real programs still have intractably many possible paths. In traditional symbolic execution, states are split at control flow instructions, and not subsequently joined, even where the program's control flow merges. State joining addresses the path explosion problem by coalescing states with the same instruction pointer.

Dynamic analysis, symbolic execution, and abstract interpretation are three approaches that abstract the concrete semantics of a language differently. Dynamic analysis uses actual values—an under-approximation; symbolic execution uses the complete semantics—no approximation; and abstract interpretation uses an over-approximation. Each establishes the relationship between the input and output. Verification checks that properties apply to this relationship, or sequence of relationships. Each approach can be used to verify just some of the states in a program, partially verifying it, or to verify the entire program. Complete dynamic analysis, and symbolic execution, check every feasible state, and no infeasible states. Abstract interpretation checks a superset of feasible states, that is, all feasible and some infeasible states. Consequently the guarantee from each approach is different. Dynamic analysis misses defects that do not occur for the particular states it checks. Symbolic execution misses defects that do not occur for the particular paths it checks. Abstract interpretation checks an over-approximation of the states, so perhaps raises false alarms. Symbolic execution and dynamic analysis do not approximate, so find all the defects and no extras—at the cost of tractability.
The partial dynamic analysis and the complete abstract interpretation of large programs are common, but the complete symbolic execution of large programs is not. There are other differences between the approaches. Abstract interpretation is usually designed to terminate, possibly through the use of widening. However, dynamic analysis and symbolic execution will not terminate if the program they are analysing does not terminate. Symbolic execution and dynamic analysis have an advantage that they are able to produce the concrete inputs that cause failures—which allows programmers to reproduce the failure. We use dynamic disassembly to incrementally build a partial Control Flow Graph (CFG). It is partial, because other blocks or transitions might have occurred if the results from system calls were different, or a different length input was used. We use the partial CFG to determine when states can be joined. In practice having a partial rather than a complete CFG causes minor complications. The rest of this paper proceeds as follows. Section 2 gives an example to clarify our approach. The example shows the advantages of state joining, disregarding the practical difficulties (these are discussed in Sect. 7). Section 3 contains a description of the overall algorithm. Without applying simplifications the symbolic formulae quickly grow unmanageable. Section 4 describes how Boolean minimisation can be used to reduce a formula’s size. Section 5 describes the specifics of our tool. In Sect. 6 we compare runs with and without state joining. Mostly state joining helps, but it does produce more cumbersome constraint expressions,
    uint64_t multiply(uint64_t x0, uint64_t y0) {
        uint64_t x = x0, y = y0, z = 0;
        while (x != 0) {
            if (!even(x))  /* (x & 1) */
                z += y;
            x = x >> 1;
            y = y << 1;
        }
        return z;
    }

Fig. 1. Using shift-and-add to multiply positive integers x0 and y0
which can slow down the analysis. Finally we compare our approach to others that tackle symbolic execution’s path explosion problem.
2
State Joining: An Example
The terminology of symbolic execution (or symbolic simulation) varies, so let us fix our use of terms. We denote the instruction pointer register (which holds the address of the next instruction to execute) by IP. A path is a sequence of instructions. For a given input, a path can be constructed by running the program and listing the successive IP values. (We assume, without loss of generality, that a program has a single entry point, IPinitial, and a single exit point, IPfinal.) When instructions are in loops, or called multiple times in functions, those instructions will occur multiple times in the path. By a trace we mean a path together with the input and output of the system calls.

A (symbolic) state is a map from locations to expressions, together with the IP, where a location may be a register or a memory address. With each state we shall also associate a "path constraint" (see below). A state is feasible if it can occur along some path. Symbolic execution may fork at some conditional branch instructions. It is natural to associate a path constraint (PC) with a state, the PC being a constraint over the program input variables. The PC describes the inputs that would take the same path through the program, that is, the inputs for which the associated state is valid. States can be seen to form a state tree, with branching occurring whenever a conditional branch instruction depends on symbolic values. In the state tree, a parent's path constraint is equivalent to the disjunction of its children's path constraints. In the presence of "state joining," the tree becomes a (directed, acyclic) state graph.

To provide the intuition of state joining we will describe the analysis of the Multiply example in Fig. 1. The program uses shift-and-add to multiply two non-negative integers x and y, leaving the result in z. We show C-style code for clarity—all analysis is performed on unmodified executables. Figure 2 shows a control-flow graph for the function and Fig.
3 shows the state graph that is produced during joining. For each distinct x value, a Multiply call takes a different path through the function body; these many paths make
symbolic execution slow. For 64-bit integers, there are 2^64 distinct paths through the function. To see this, consider the branching on the even condition (block B3), which checks the rightmost bit of the x variable. Then the bits are shifted one to the right, giving a new rightmost bit. For a given x0, the resulting symbolic expression for z is an expression valid for every y value. For example, given x0 = 2, the symbolic expression produced for z is z = (y0 << 1), with a PC that simplifies to x0 = 2. The resulting expression for z describes the output state in terms of the input state. Substituting the input values into the formula for z gives exactly the same result as if the function was run with the same inputs.

Symbolic execution addresses the state explosion problem by reasoning about all the states that take a particular path. In this case there are 2^64 paths to symbolically execute, versus 2^64 × 2^64 initial states. We now face the path explosion problem—this function still contains 2^64 paths, too many to reason about one-by-one. With state joining, states with the same IP are joined. This makes good sense for the example, as two paths that have previously differed in whether one bit was one or zero could have identical subsequent paths.

Each time a control transfer instruction is reached, a constraint solver call is made to check which branches can be taken. For example, on reaching block B2 in Fig. 2, we call the solver to check which of the branches to points 2 or 5 can be taken. In this example both branches can be taken, so the state is split, with one branch conjoining the PC with x = 0, and the other using x ≠ 0. We may consider the state of the function to have 5 elements: x, y, z, the block that will next be executed (the IP), and the PC.

    B1: x = x0; y = y0; z = 0
    B2: if x == 0 goto B6
    B3: if even(x) goto B5
    B4: z += y
    B5: x = x >> 1; y = y << 1; if x != 0 goto B3
    B6: return z

Fig. 2. CFG for multiplication code
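For a concrete x0 the loop structure is fixed, so the symbolic expression for z can be computed by running the loop with y kept symbolic. The following toy Python sketch is our own illustration, not the paper's tool; expressions are built as strings where a real implementation would use solver terms. It reproduces the z = (y0 << 1) result for x0 = 2:

```python
# Toy symbolic execution of the multiply loop for a concrete x0 and a
# symbolic y0 (expressions represented as strings).

def sym_multiply(x0):
    x, y, z = x0, "y0", None
    while x != 0:
        if x & 1:                      # the !even(x) branch: z += y
            z = y if z is None else f"({z} + {y})"
        x >>= 1
        y = f"({y} << 1)"              # y = y << 1
    return z if z is not None else "0"

assert sym_multiply(2) == "(y0 << 1)"  # matches the x0 = 2 example in the text
```

Substituting a concrete y0 into the returned expression yields the same value as running the function concretely, as the text observes.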
Figure 3 shows how the states are split and joined. The topmost state in the figure corresponds to having just entered the function, with the IP at program point 1. The second row results from handling the x == 0 condition. One state corresponds to x == 0, the other to entering the loop. There are two joins, shown with bold borders. If the states being joined have the same expression, then the value is unchanged, but if they are different, an ITE constructor
    (ip 1)          x: x0, y: y0, z: 0, pc: true
    (ip 2)          x: x0, y: y0, z: 0, pc: x0 != 0
    (ip 5)          x: x0, y: y0, z: 0, pc: x0 == 0
    (ip 3)          x: x0, y: y0, z: 0, pc: x0 != 0 and !even(x0)
    (ip 4)          x: x0, y: y0, z: 0, pc: x0 != 0 and even(x0)
    (ip 4, joined)  x: x0, y: y0, z: ite(even(x0), 0, y0), pc: x0 != 0 and (!even(x0) or even(x0))
    (ip 2)          x: x0>>1, y: y0<<1, z: ite(even(x0), 0, y0), pc: x0 != 0 and x0>>1 != 0
    (ip 5, joined)  x: ite(x0 != 0, x0>>1, x0), y: ite(x0 != 0, y0<<1, y0), z: ite(x0 != 0 and odd(x0), y0, 0), pc: x0 == 0 or (x0 != 0 and x0>>1 == 0)

Fig. 3. The state graph for the example of Fig. 2 (rendered here as a list of states; the two states marked "joined" carried bold borders in the original figure)
(for if-then-else) joins the different expressions. The result of a join holds all the information of the component states. No information is discarded. Eventually x's value at the end of B5 is (x0 >>u 64), so that x ≠ 0 is unsatisfiable. Then only the branch to point 5 will be taken. When this happens, the final states will be joined together at point 5, producing the complete expression for z in terms of its inputs. The loops are unrolled automatically—no domain knowledge needs to be entered about the number of times to unroll loops. Note that this analysis is "anytime": if the analysis is paused, information can be read out that correctly describes the program's runtime behaviour.
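The joined expression for z can be checked against the concrete function on the inputs its PC covers. The sketch below is our own encoding: `z_joined` evaluates the z-expression of the joined state at point 5 (whose PC simplifies to x0 ∈ {0, 1}) and compares it with the concrete shift-and-add loop from Fig. 1.

```python
# Check that the joined symbolic expression z = ite(x0 != 0 and odd(x0), y0, 0)
# agrees with the concrete function on the inputs its PC covers (x0 in {0, 1}).

def multiply(x, y):                 # concrete shift-and-add from Fig. 1
    z = 0
    while x != 0:
        if x & 1:
            z += y
        x >>= 1
        y <<= 1
    return z

def z_joined(x0, y0):               # evaluate the joined expression for z
    return y0 if (x0 != 0 and x0 % 2 == 1) else 0

for x0 in (0, 1):                   # the inputs satisfying the final PC
    for y0 in (0, 3, 7):
        assert z_joined(x0, y0) == multiply(x0, y0)
```

This is exactly the sense in which a join discards no information: restricted to its PC, the joined state reproduces the concrete behaviour.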
3
The Algorithm
The algorithm (Algorithm 1) takes a program to analyse, in the form of a CFG, and the initial state comprising the PC true, the IP IPinitial, and symbolic variables for all inputs. The symbolic execution will then produce results covering all possible values of these symbolic variables. The algorithm presented is somewhat simplified, in that it does not show the decompilation of the program needed because we are analysing binaries. In practice, we interleave decompilation with the symbolic execution.

Algorithm 1. Applying state joining to a program
Require: A CFG and an initial symbolic state s0
 1: Fringe ← {s0}
 2: while some s ∈ Fringe has ip(s) ≠ IPfinal do
 3:   for all {s1, s2} ⊆ Fringe such that ip(s1) = ip(s2) do
 4:     Fringe ← {s1 ⊔ s2} ∪ (Fringe \ {s1, s2})
 5:   end for
 6:   s ← choose(Fringe)
 7:   Fringe ← (Fringe \ {s}) ∪ execute(s)
 8: end while
 9: return s where {s} = Fringe

Algorithm 1 relies on three key functions not defined here: execute, choose, and ⊔ (join on states). The execute operation extends the supplied state by executing the instruction at its IP. Note that this will produce more than one state for selection (conditional or computed branch) instructions. The next sections describe the choose and ⊔ functions.

3.1
Preparing to Join
We want to propagate states and then stop them where they can be joined. Figure 4 shows two fragments of control-flow graphs. On the left, state m's instruction pointer is two blocks away from a join point, while state n's instruction pointer is one block away from the same point. We wish to execute n for one block, and m for two blocks. The two states could then be joined. If either state is run too far, past the same-IP point, that chance to join states is lost. For each state we find all of the descendants of that state in the partial CFG. Because every path finishes at the same exit block, the paths from the states will intersect. For each state we find the minimum distance of that state to its earliest descendant that is common to another state—we call these join points. We find for each state the minimum number of edges that can be traversed before a join point is reached. For each state we now have the minimum distance to its next join point. Next, we remove from consideration as the next state to run, any state that post-dominates another state. A node o post-dominates another node p if all paths from p to the exit node must pass through o. Since every path from the post-dominated node p must pass through the dominator o, we do not wish to execute the post-dominating state. The right of Fig. 4 shows an example where the post-dominance check applies. Each node is zero distance from its earliest join point. That is, if each node is not advanced, another state may be joined with it. In the figure, if state o were advanced, it would be moved away from its join with states q and p.
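Computing how far each pending state is from its nearest join point is a shortest-path query over the partial CFG. A minimal BFS sketch (the `cfg` dictionary, block names, and function name are our own illustration):

```python
# Sketch: minimum number of CFG edges from a state's block to the nearest
# join point. 'cfg' maps a block to its known successors; because the CFG
# is partial, an unreachable target simply yields None.
from collections import deque

def distance_to_join(cfg, start, join_points):
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        block, dist = queue.popleft()
        if block in join_points:
            return dist
        for succ in cfg.get(block, []):
            if succ not in seen:
                seen.add(succ)
                queue.append((succ, dist + 1))
    return None

# Shaped like the left fragment of Fig. 4: B1 -> Bx -> J and B2 -> J share J.
cfg = {'B1': ['Bx'], 'Bx': ['J'], 'B2': ['J']}
```

With this CFG, the state at B1 is two edges from the join point and the state at B2 is one edge away, matching the scheduling decision described above.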
[Figure 4 shows two CFG fragments. Left: state m (pc: ..., ip: B1) and state n (pc: ..., ip: B2); the blocks below B1 and B2 flow into a common join point, two blocks below B1 and one block below B2. Right: state q (ip: B6), state p (ip: B7), and state o (ip: B8), where block B8 is a common successor of B6 and B7.]

Fig. 4. CFG fragments. State m's IP is two blocks from a join point. State n's IP is one block away.
We then choose a state to run for as many blocks as it is from its nearest join point. This is not perfect. We build the control flow graph from all the control flow transfers that have occurred so far in the simulator, as well as the static jumps—that is, jumps to a fixed location, where that location has already been disassembled. Runtime-calculated jumps (such as returns from functions) which we have not yet seen are omitted. When control transfers to a new location, we perform a dynamic disassembly and incorporate the resulting blocks into the control flow graph. Note that, if the nearest join point is missed, this does not compromise correctness.

3.2
Joining and Splitting
States are joined, and sometimes split. A join occurs when two states will execute the same instruction next. A split occurs when the location of the next address to execute depends on a symbolic expression. We split in two contexts. First, on reaching a conditional branch instruction whose condition ϕ depends on symbolic values, the current PC is conjoined with ϕ, and separately with ¬ϕ. If both are satisfiable, the state is split and two states are created. The second context involves states that have previously been joined. When a function call may return to multiple locations because the return address was joined from states that called the function at different locations, the state needs to be split at the return statement of the function to return the respective states to their call sites. We create a new state for each distinct next instruction. For example, if the instruction pointer can be 2 or 4, then we split the state, resulting in two states, one with a PC of (PC ∧ IP = 2) and another with a PC of (PC ∧ IP = 4). To join states s1 and s2, where sk contains PCk and register and memory locations lock[i], create a new state s with PCs = PC1 ∨ PC2, and for all i,
State Joining and Splitting for the Symbolic Execution of Binaries
locs[i] = ITE(PC1, loc1[i], loc2[i]). That is, if PC1 is true, use the value from the first state; otherwise use the value from the second state. This is allowable because PCs are always disjoint.
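A toy sketch of this join operation, assuming symbolic expressions are represented as nested tuples (an assumption for illustration; the actual tool stores Valgrind expression graphs):

```python
def join_states(s1, s2):
    # s1, s2: (pc, locs) pairs; `locs` maps each register/memory
    # location to a symbolic expression, here a nested tuple.
    pc1, locs1 = s1
    pc2, locs2 = s2
    joined_pc = ('or', pc1, pc2)
    locs = {}
    for i in set(locs1) | set(locs2):
        v1, v2 = locs1.get(i), locs2.get(i)
        # Introduce an ITE only where the two states disagree.
        locs[i] = v1 if v1 == v2 else ('ite', pc1, v1, v2)
    return joined_pc, locs
```

Locations on which the two states agree are carried over unchanged, which keeps the number of ITE expressions down.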
4 Simplifications and Approximations
Symbolic execution, even of small programs, can result in large symbolic expressions. This is especially so when analysing machine code. We now discuss some of the simplifications we use. We also apply range and domain analysis of pointers to reduce the number of solver calls. The approximations we use over-approximate the encoding of the constraints, but those constraints describe precisely (without approximation) the behaviour of the program.

4.1 PC Simplification
The Multiply example has a simple branching structure, making its PC easy to simplify. However, programs with more complicated control flow, emanating, for example, from break statements, can benefit from PC simplification. Without simplification the PC becomes large. In this section we simplify the PC by abstracting its primitive constraints as propositional variables. We then apply Boolean simplifications to reduce the number of propositional variables in the PC, hopefully making the PC easier to handle for a theory solver. We use a, b, and c to refer to propositional variables that describe individual SMT(QF_AUFBV)¹ constraints. Continuing with the Multiply example, consider point 4, where the (x = 0 ∧ even(x)) state joins with the (x = 0 ∧ ¬even(x)) state. Letting a = (x = 0) and b = even(x), the joined PC becomes (a ∧ b) ∨ (a ∧ ¬b). Applying the obvious simplification reduces the joined state's PC to (x = 0). Heuristic DNF minimisation tools that apply such rules are available; we use Espresso [14]. State splitting complicates this minimisation. Consider for example a return statement from a function that returns to one of three addresses, perhaps because calls from three different sites were joined. Let the potential new IP addresses be 4, 8 and 12, let the PC before the split be PC0, and let IP be the symbolic expression for the instruction pointer when the transfer occurs. Then after the split there will be three states: (PC0 ∧ IP = 4), (PC0 ∧ IP = 8) and (PC0 ∧ IP = 12). If all three states are later joined, the PC will become PC0 ∧ (IP = 4 ∨ IP = 8 ∨ IP = 12), which should be simplified to PC0. A Boolean minimisation algorithm will not do so—because the disjunction is not obviously true. We can, however, assist the minimisation algorithm by modifying the constraints. Let a = (IP = 4 ∨ IP = 8) and b = (IP = 4). Three equivalent PCs that can easily be minimised are: (PC0 ∧ a ∧ b), (PC0 ∧ a ∧ ¬b), (PC0 ∧ ¬a).
¹ QF_AUFBV is quantifier-free formulae over the theory of uninterpreted functions, arrays, and fixed-size bit-vectors. We use the last two. Decision procedures for the theory can solve non-linear formulae with modular arithmetic.
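The core Boolean identity at work here—(a ∧ b) ∨ (a ∧ ¬b) reducing to a—can be sketched as repeated merging of DNF terms that differ in exactly one variable's polarity. This is only the basic rule; Espresso applies it heuristically along with much more powerful minimisation.

```python
def minimise_dnf(terms):
    # terms: a set of frozensets of (variable, polarity) literals,
    # each frozenset one conjunctive term of a DNF formula.
    # Repeatedly merge two terms differing in exactly one variable's
    # polarity: (a & b) | (a & ~b)  ==>  a.
    # The empty term frozenset() denotes True.
    terms = set(terms)
    changed = True
    while changed:
        changed = False
        for t1 in list(terms):
            for t2 in list(terms):
                diff = t1 ^ t2
                if len(diff) == 2 and len({v for v, _ in diff}) == 1:
                    terms -= {t1, t2}
                    terms.add(t1 - diff)
                    changed = True
                    break
            if changed:
                break
    return terms
```

On the re-encoded return-address example, the three terms {a, b}, {a, ¬b} and {¬a} collapse to True, i.e. the PC reduces to PC0 as desired.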
84
T. Hansen, P. Schachte, and H. Søndergaard
Removing tautologies from the PC helps simplification. For example, consider a PC that contains both x = 0 and ¬(x = 0); label them a and b. No state will have the PC (a ∧ ¬b) that could simplify the b term. It is safe to remove constraints whose negation is unsatisfiable: since their negation is unsatisfiable, these constraints subsume prior constraints—they add no information. State joining produces many ITE expressions, so we carefully simplify their guards. We calculate four possible guards and use the smallest one. Consider two states to be joined, with PC1 = (¬a ∧ ¬c) and PC2 = ((a ∧ ¬c) ∨ (a ∧ b ∧ c)). The PCs can be pictured on the unit cube with axes a, b and c—hollow circles for PC1 and filled circles for PC2. A reasonable ITE to use for the joined locations is: locnew[i] = ITE((¬a ∧ ¬c), loc1[i], loc2[i]). Another reasonable choice for an ITE is to take PC2 as the guard, and swap the order of the remaining arguments. However, it may be possible to generate a smaller guard by considering that the ITE expression will only be evaluated if PC1 ∨ PC2 is true. Inspection of the cube shows that the potentially simpler guard ¬a is equivalent to ¬a ∧ ¬c: ¬a covers only vertices of PC1 and those that we don't care about; it covers none of PC2's vertices. To minimise the guard, we mark the vertices of PC1 as true, those of PC2 as false, and the rest as don't care; then we minimise using Espresso. Then we swap, marking PC2's vertices true and PC1's false, and minimise again. As guard we choose the expression with the smallest number of nodes in its SMT(QF_AUFBV) representation. These are the candidate guards: PC1, PC2, the restriction of PC1 to PC1 ∨ PC2, and the restriction of PC2 to PC1 ∨ PC2. During the construction of symbolic expressions we follow standard practice and apply rewriting rules to simplify expressions, for example turning x + 0 into x. We implement dozens of such simplification rules, as some solvers such as STP do.
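The don't-care argument can be checked by enumeration: the candidate guard only has to agree with PC1 on assignments where PC1 ∨ PC2 holds, since the ITE is never evaluated elsewhere. A small sketch of that check for the cube example above (the encoding is illustrative, not the tool's code):

```python
from itertools import product

PC1 = lambda a, b, c: (not a) and (not c)
PC2 = lambda a, b, c: (a and not c) or (a and b and c)
guard = lambda a, b, c: not a        # candidate single-literal guard

# The guard of ITE(g, loc1[i], loc2[i]) only matters on assignments
# where PC1 or PC2 holds; every other vertex is a don't-care.
for a, b, c in product([False, True], repeat=3):
    if PC1(a, b, c) or PC2(a, b, c):
        assert guard(a, b, c) == PC1(a, b, c)
```

The single literal is cheaper for the solver than the two-literal conjunction, even though the two formulas differ on the don't-care vertices.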
Some SMT(QF_AUFBV) solvers, like Boolector and STP [6], provide an interface to simplify single expressions. Using those interfaces to simplify expressions would most likely speed up our tool.

4.2 Value Analysis for Pointers
In the atol() function used by a later example (Fig. 5), each character is looked up in an array to determine if it is a digit. So there is a lookup isDigit[c], where isDigit is an array of 256 values indicating whether each c is a digit or not. We analyse such pointer accesses using three techniques: first by analysing the domain of the expression; if that fails, by analysing the range of the expression; if that fails, by solving for each memory address in turn. The isDigit[c] expression will translate into a symbolic memory access such as base + (c << 1), that is, an access to the memory location at the value of the character times two plus some fixed base (the location in memory of the zero index). As this expression contains only a single 8-bit variable, it can take on
only 256 possible values. We substitute for each possible value, ignoring the PC. If the domain of an expression is large, we use a bit-vector interval analysis [13] of the expression to calculate the interval of the range. Balakrishnan [3] gives precise ranges for logical and arithmetic operations. For example, a subtract expression over two 8-bit values, each with a range of [0,1], has a resulting range of [-1,1]. We use a signed range and widen to the whole range when an overflow occurs. A signed range allows us to handle an expression such as 2 ∗ val − 1: if val is in [0,12], then the signed range of the expression is [-1,23]; an unsigned range would widen terribly to [1,255]. Because of the ubiquity of aligned memory accesses we use strides, allowing regular gaps in the interval; for example, a stride of 2 on the interval [0,6] contains {0,2,4,6}. When performing the interval analysis, we likewise ignore the PC. If the domain and range analyses produce too many values, or if some of the values are not addressable, we (inefficiently) solve iteratively for the index subject to the PC. For example, given a symbolic expression index which first solves to 2, we then solve for index such that index ≠ 2, and so on. The solvers we use do not have the option of producing all satisfying assignments to a formula—which we would require to check all possible memory addresses.
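The signed strided-interval arithmetic for the 2 ∗ val − 1 example can be sketched as follows. This is a toy model under stated assumptions: only multiplication by a non-negative constant and subtraction of a constant are handled, and the overflow widening the analysis needs is omitted.

```python
class Interval:
    """Signed interval with stride (toy model; a real bit-vector
    analysis must widen to the full range on overflow)."""
    def __init__(self, lo, hi, stride=1):
        self.lo, self.hi, self.stride = lo, hi, stride

    def __mul__(self, k):          # k: non-negative constant
        return Interval(self.lo * k, self.hi * k, self.stride * k)

    def __sub__(self, k):          # shift the interval down by a constant
        return Interval(self.lo - k, self.hi - k, self.stride)

    def values(self):
        return list(range(self.lo, self.hi + 1, self.stride))

val = Interval(0, 12)              # a variable known to lie in [0,12]
idx = val * 2 - 1                  # the expression 2*val - 1
# signed result: [-1,23] with stride 2, versus the unsigned blow-up
```

The stride also captures alignment: multiplying by 2 doubles the stride, so only every second value in [-1,23] is a feasible index.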
5 The Analysis Tool
We analyse Linux x86 executables, both compiled binaries and interpreted scripts. Given an executable and an input width, we dynamically disassemble the executable as it runs on a random input of the specified length. The result of this dynamic disassembly is a trace through the program. The tool then symbolically executes the program on the same input, splitting states where necessary. When symbolic execution reaches an address that has not already been disassembled, a new dynamic disassembly is triggered. The resulting trace is merged with previously disassembled traces. We follow Minato [10] and check the satisfiability of each guard. A loop is unrolled until its guard evaluates to false. We use the Valgrind framework [12] to disassemble. Valgrind is a widely used dynamic instrumentation framework for analysing x86 and PowerPC executables on the Linux and AIX operating systems. We use Valgrind to identify instructions, and also because it translates x86 instructions into a more manageable RISC language. There are three advantages of dynamic disassembly over static disassembly. First, during disassembly the results from system calls are collected; these are later played back to the simulated program. Second, we disassemble the dynamically linked libraries no differently from the program's instructions. Third, interpreted scripts, and their interpreters, can be analysed. Our dynamic disassembly is time consuming; others [11] have used an initial static disassembly to save time. The PC and symbolic expressions are stored inside our tool as Valgrind expression graphs, and converted to the SMT(QF_AUFBV) language for solving. We do not utilise the solver's custom interface. Instead we generate standard
int main(void) {
    char s[9];
    long number;

    printf("Enter a Number: ");
    fgets(s, sizeof(s), stdin);
    number = atol(s);
    if (number == 12345678) {
        fail();
    }
    return 0;
}

Fig. 5. Number: Error on input 12345678
SMT-LIB format and communicate via temporary files. Using the standard interface requires the solver to redo work, but allows easy experimentation with different solvers. SMT(QF_AUFBV) corresponds closely to the low-level machine integer semantics; it handles precisely the non-linear operations we require. We have built a simulator of Valgrind's instructions, which executes the instructions symbolically and concretely. It simulates many, but not all, of Valgrind's instructions. Currently, our tool does not handle multi-threaded programs, asynchronous signals, or floating-point instructions. The program's state is modified by Linux system calls. We have symbolic versions of some system calls only; the remainder we replay from the traces captured during dynamic disassembly.
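Generating a standard-format query and handing it to an external solver through a temporary file might look like the following sketch. SMT-LIB 2 syntax and the solver invocation are illustrative assumptions, not the tool's exact dialect or command line.

```python
import subprocess
import tempfile

def smtlib_script(decls, assertions):
    # Assemble a standard-format SMT query from declaration and
    # assertion strings (SMT-LIB 2 syntax, for illustration).
    return "\n".join(decls
                     + ["(assert %s)" % a for a in assertions]
                     + ["(check-sat)", "(exit)"])

def solve(decls, assertions, solver="stp"):
    # Write the query to a temporary file and run an external solver
    # on it; the solver binary name here is an assumption.
    with tempfile.NamedTemporaryFile("w", suffix=".smt2",
                                     delete=False) as f:
        f.write(smtlib_script(decls, assertions))
        path = f.name
    out = subprocess.run([solver, path], capture_output=True, text=True)
    return out.stdout.strip()
```

As the paper notes, this file-based interface forces the solver to re-parse and redo work on every call, which the solver-native incremental interfaces would avoid.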
6 Results
To explore the usefulness of the state joining approach we analyse four programs. One is a simple C program that will fail if a particular integer is input (Fig. 5). The function describing the result of atol() is surprisingly complicated: given 9 characters of input, there are about 10^20 different inputs that will evaluate to 1, corresponding to white space followed by an optional plus sign, followed by 1, followed by non-numeric characters. The second is the gzip file compression program. The third is a program that counts the number of 1-bits in a number (Fig. 6). The fourth is the Multiply example of Fig. 1. In the Number example, an input that causes the program to fail is derived. For the remaining three programs the input-output function is derived. For the Multiply example, this could be used to show the commutativity of x and y. For Wegner it could be used to determine that the return value is always less than 65. For gzip it could be used as the first part of establishing that gunzip composed with gzip is the identity function for some size inputs. Table 2 shows the results of running these four programs. Some of the rows are: Dynamic Disassemblies: the number of calls to the dynamic disassembler (the value varies depending on the initial random input chosen); Joins: the number of
uint64_t popCount(uint64_t y) {
    uint64_t c;
    for (c = 0; y; c++)
        y &= y - 1;
    return c;
}

Fig. 6. Wegner: Counting 1-bits

Table 1. Normalised times, the fastest approach is given a time of 1

Example   Bytes  Symbolic Execution  State Joining  Execution
Number        9                   3              1     > 10^5
Wegner        8                   1              1     > 10^5
Gzip          1                 572         > 4300          1
Multiply     16              > 10^5              1     > 10^5
pairs of states that were joined; Maximum Width: the greatest number of active states at any time (the maximum width of the state graph); Maximum Height: the longest path through the state graph. We made three runs of each analysis; the times shown in Table 2 are arithmetic averages. Times were measured on a single core of a Pentium D 3 GHz, running Ubuntu 8.04. We use revision 60 of STP, which we found to be the fastest available solver. A program that generates all one-byte files, then gzips them, then builds an input-output function takes 7 seconds to run. To produce the equivalent formula by symbolic execution takes 4006 seconds, and to produce the same formula by state joining timed out after 30,000 seconds. Gzip, when symbolically executed, produces singleton states—each input follows a different path. So symbolic execution has no advantage over dynamic analysis. The PC that state joining produces is more difficult to solve, overwhelming the savings from merging. The Wegner example has a number of paths equal to the bitwidth plus one; in our example, 65 paths. State joining on this example joins just before the exit—the same as symbolic execution. The Wegner example benefits neither from the Boolean simplifications nor from the pointer value analysis. The Boolean simplifications occur at the join point at the function's return, when the PC is no longer used. The pointer value analysis, as for the Multiply example, does not help because the analysis produces no symbolic pointers. The Multiply example has a simple structure, with control transfer instructions that join back on each other—producing constraints that easily cancel out. With Boolean simplifications disabled, the example takes more than 50 times longer to run. It is well suited to state joining. Note that we pass the function
Table 2. Results of applying state joining. Gzip with state joining timed out after 30,000 seconds.

                              Number      Gzip  Wegner  Multiply
Bytes input                        9         1       8        16
Dynamic Disassemblies           9-16         1       1         1
With joining
  Total time                    126s  >30000s     33s       45s
  Solver time (STP v60)          90s  >26973s     28s       11s
  Solver calls                   110     > 684      67       131
  Boolean simplification        4.5s    > 103s      3s      5.5s
  Joins                           60     > 510      64       127
  Maximum height                  94     > 500      67       131
  Maximum width                    5         2       2         3
Without joining
  Total time                    732s     4006s     35s         –
  Solver time                   555s     3312s     25s         –
  Solver calls                  6929     34944      66         –
  Boolean simplification         82s       52s      3s         –
  Paths for Symbolic Exec.      1662       256      65      2^64
symbolic variables, not characters turned into numbers. Parsing the numbers would require effort comparable to that in the Number example.
7 Complications
There are some practical complications with state joining for executables. Linux has hundreds of system calls that can modify the program's state. Our tool has symbolic versions of the semantics for only a few system calls; for the remainder we replay the system call's results, which Valgrind captured during the dynamic disassembly. Before replaying the result of a system call we check that the parameters are the same. Our assumption is that system calls that are called in the same order with the same input will produce the same results. This further limits the strength of the guarantee we extract. For example, if a program would behave differently on different dates, we would not discover this, as the result of the system call that returns the date is not made symbolic. Because we replay traces of the system calls, if inputs cause the program to make different sequences of system calls, the analysis will not have the appropriate system call to replay. One solution may be to split the state whenever the sequence of system calls changes, but we do not do this yet. If two states have different system call traces then it might not be appropriate to join them. At present we cannot analyse the program shown in Fig. 7, as the memory assigned on each branch is different. If we allowed the memory mapping system calls to vary on branches then one state would have memory allocated
if (a > 0)
    p = malloc(100000);
else
    p = malloc(0);
*p = 0;

Fig. 7. A complication for state joining
that the other did not. States cannot be joined just when their next instructions are the same; there are other requirements, such as the allocated memory and the open files being the same. Using dynamic disassembly we cannot visit locations that are not reached at runtime, so we cannot analyse error handling code unless the error occurs at runtime. For example, if we wish to insert error return values from system calls, such as a file read failing, then we need to disassemble the error handling code. Currently we cannot introduce failures when performing a disassembly, so we cannot explore the error handling code. Over-zealous joining is detrimental. For small functions we do not want to join, as the cost of joining/splitting overwhelms the saving. If a function is called from different sites and contains few instructions, the joined states will run for a few instructions before splitting when the function returns to the different call sites. We need heuristics to identify the savings from joining states. Another limitation is that symbolic execution generally operates on a fixed input width, say 20 bits. Depending on the program structure, greater input lengths may be required to cause a particular failure. Symbolic execution builds a circuit that describes the program for some fixed-length input. For some inputs this may be equivalent to checking each smaller-length input, but usually not. When choosing what to join, we do not consider the call site. Consider a function which is called at the start and at the end of a program. Our analysis does not take the call site into consideration, so it believes that calling the function could return to either the start or the end of the program—when it does not.
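The replay-with-parameter-check scheme described above can be sketched as a cursor over the recorded trace that raises on any divergence. This is an illustrative toy, not the tool's code; the trace format is an assumption.

```python
class SyscallReplay:
    """Replay recorded system-call results, insisting that calls are
    made in the recorded order with the recorded parameters."""
    def __init__(self, trace):
        self.trace = list(trace)      # [(name, args, result), ...]
        self.pos = 0

    def replay(self, name, args):
        if self.pos >= len(self.trace):
            raise RuntimeError("no recorded system call left to replay")
        rec_name, rec_args, result = self.trace[self.pos]
        if (name, args) != (rec_name, rec_args):
            raise RuntimeError("divergence from recorded trace at " + name)
        self.pos += 1
        return result
```

Raising on divergence corresponds to the situation the text describes: a path whose system-call sequence differs from the recorded one cannot be analysed without re-disassembling.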
8 Related Research
The path explosion problem of symbolic execution has been addressed by others. Kölbl and Pixley [9] investigate state joining of programs written in a subset of C++, and describe it well. The principal difference with our work is that we focus on analysing arbitrary binaries, which can use dynamic memory and pointer arithmetic. Godefroid [7] performs a compositional analysis—a more general form of memoization. When a function is called with symbolic expressions, the PC when it returns and the effects of the function are stored. Each time the function is called, the PC and the results are saved. When calls to the same function are made later, before analysing the function again, a check is made whether the prior PCs cover the current PC. This approach has the advantage that function
summaries can be reused; we inline functions, analysing them each time they are called. So a compositional approach would make at most 2^64 passes through the Multiply function (Fig. 1); we do not reuse the summary, so we analyse Multiply in its entirety each time it is called. State joining and compositional approaches are orthogonal, and could be combined. Performing a compositional analysis of machine code is more difficult than on source code because the inputs to functions are not as obvious. Babić et al. [2] implement a similar memoization approach for their Calysto tool. Boonstoppel et al. [4] discard states that differ from other states only in locations that will not later be read. Consider a conditional output statement if (guard) {printf("value");}. If both branches are taken and outputs are ignored, the states do not differ; one can be discarded, as the remainder of their paths will be the same. This approach requires the calculation of which locations will be read and written in the remainder of the path. Deciding this statically for machine code is more difficult owing to the more complicated control flow transfers. Our approach is more general—allowing the joining of states that differ in variables, while not requiring the calculation of which variables may be read from or written to later. The calculation does allow discarding of symbolic expressions that will not be used later—a good way to conserve memory. Incidental to deriving the program's input-output function, we extract an accurate (partial) CFG from the binary code. The approach generates a safe under-approximation of the CFG using a flow-sensitive analysis. A corresponding upper bound of the CFG can be produced by abstract interpretation [8]. In calculating the input-output function, our tool converts control dependency into data dependency. To automatically vectorise loops during compilation, Allen et al. [1] use Boolean simplification and if-conversion to similarly convert if-then-else statements. A statement such as if (g) {y=x;} else {y=z;} is converted into y = (x & guard) | (z & ~guard), where guard is all ones or all zeroes, extended to the bit-width of y. We suspect that compilers, like gcc, which implement if-conversion will produce binaries that our tool can analyse more efficiently.
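The if-conversion identity can be demonstrated directly with a bit mask. A 32-bit width is assumed here purely for illustration:

```python
MASK = (1 << 32) - 1        # illustrative 32-bit width

def if_convert(g, x, z):
    # y = x if g else z, computed branch-free: the guard is extended
    # to all ones or all zeroes at the width of y.
    guard = -int(bool(g)) & MASK
    return (x & guard) | (z & ~guard & MASK)
```

Because the result is a single data-flow expression with no branch, a symbolic executor sees one state instead of two, which is exactly why if-converted binaries should suit the tool.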
9 Discussion
State joining, as we have implemented it, has varying performance. The performance of the approach depends on the difficulty of solving the generated constraints. On the gzip example, the constraints became so expensive to solve that state joining was slower than both executing the program and symbolic execution. In particular, gzip produced many symbolic memory indexes, which slowed down the solvers. State joining is useful if the following conditions apply: the paths call a similar sequence of system calls, the number of paths through the program is large, and memory is rarely written to at symbolic locations. Three improvements to our implementation are apparent. First, it is common around loops for later constraints to imply earlier ones. It is not apparent to a propositional simplifier that (from the Multiply example) x0 >>u 2 ≠ 0 implies x0 >>u 1 ≠ 0. Removing earlier constraints when they are implied by later constraints is very desirable. Second, and related, with our current simplification scheme based on propositional variables, the performance of the analysis depends on whether the constraints simplify during joining. If the joined PC can be simplified, as in the Multiply example, performance is good. However, slight syntactic changes to conditionals reduce the simplifications, dramatically increasing running time. Using the solver's native interface to maintain the state's PC would reduce the amount of work the solver needs to perform. Third, we plan to use the generalised memoization (compositional) approach to reduce the amount of re-work performed. Normal symbolic execution of binaries allows arbitrary properties about the input-output function of programs to be verified, but the technique works poorly on programs that have many paths through them. We have investigated how state joining may help. So far we have a number of promising results for analysing unmodified executables, as well as examples that do not benefit. Future work will chip away at the latter.
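For the shift-constraint family arising around the Multiply loop, dropping implied constraints amounts to keeping only the largest shift per variable, since (x >>u k) ≠ 0 implies (x >>u j) ≠ 0 for every j < k. A toy sketch under that assumption (it handles only this one constraint shape, unlike a general implication check):

```python
def remove_implied(constraints):
    # constraints: list of (var, k) pairs encoding  (var >>u k) != 0.
    # Since a larger shift being nonzero implies every smaller shift
    # is nonzero, only the largest shift per variable is kept.
    largest = {}
    for var, k in constraints:
        largest[var] = max(largest.get(var, 0), k)
    return sorted(largest.items())
```

A general solution would instead ask the theory solver whether each earlier constraint is entailed by the later ones, at the cost of extra solver calls.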
References

1. Allen, J.R., Kennedy, K., Porterfield, C., Warren, J.: Conversion of control dependence to data dependence. In: Proceedings of the Tenth ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages, pp. 177–189. ACM Press, New York (1983)
2. Babić, D., Hu, A.: Calysto: Scalable and precise extended static checking. In: Proceedings of the Thirtieth International Conference on Software Engineering, pp. 211–220. ACM Press, New York (2008)
3. Balakrishnan, G.: WYSINWYX: What You See Is Not What You Execute. PhD thesis, University of Wisconsin at Madison, Madison, WI, USA (2007)
4. Boonstoppel, P., Cadar, C., Engler, D.R.: RWset: Attacking path explosion in constraint-based test generation. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 351–366. Springer, Heidelberg (2008)
5. Cadar, C., Ganesh, V., Pawlowski, P.M., Dill, D.L., Engler, D.R.: EXE: Automatically generating inputs of death. In: Proceedings of the Thirteenth ACM Conference on Computer and Communications Security, pp. 322–335. ACM Press, New York (2006)
6. Ganesh, V., Dill, D.L.: A decision procedure for bit-vectors and arrays. In: Damm, W., Hermanns, H. (eds.) CAV 2007. LNCS, vol. 4590, pp. 519–531. Springer, Heidelberg (2007)
7. Godefroid, P.: Compositional dynamic test generation. In: Proceedings of the Thirty-fourth ACM Symposium on Principles of Programming Languages, pp. 47–54. ACM Press, New York (2007)
8. Kinder, J., Zuleger, F., Veith, H.: An abstract interpretation-based framework for control flow reconstruction from binaries. In: Jones, N.D., Müller-Olm, M. (eds.) VMCAI 2009. LNCS, vol. 5403, pp. 214–228. Springer, Heidelberg (2009)
9. Kölbl, A., Pixley, C.: Constructing efficient formal models from high-level descriptions using symbolic simulation. International Journal on Parallel Programming 33(6), 645–666 (2005)
10. Minato, S.-I.: Generation of BDDs from hardware algorithm descriptions. In: Proceedings of the 1996 IEEE/ACM International Conference on Computer-Aided Design, pp. 644–649. IEEE Comp. Soc., Los Alamitos (1996)
11. Nanda, S., Li, W., Lam, L.-C., Chiueh, T.-C.: BIRD: Binary interpretation using runtime disassembly. In: Proceedings of the International Symposium on Code Generation and Optimization, pp. 358–370. IEEE Comp. Soc., Los Alamitos (2006)
12. Nethercote, N., Seward, J.: Valgrind: A framework for heavyweight dynamic binary instrumentation. In: Proceedings of the ACM SIGPLAN 2007 Conference on Programming Language Design and Implementation, pp. 89–100. ACM Press, New York (2007)
13. Patterson, J.: Accurate static branch prediction by value range propagation. In: Proceedings of the ACM SIGPLAN 1995 Conference on Programming Language Design and Implementation, pp. 67–78. ACM Press, New York (1995)
14. Rudell, R.L.: Multiple-valued logic minimization for PLA synthesis. Technical Report UCB/ERL M86/65, EECS Department, Berkeley (1986)
The LIME Interface Specification Language and Runtime Monitoring Tool

Kari Kähkönen, Jani Lampinen, Keijo Heljanko, and Ilkka Niemelä

Helsinki University of Technology TKK, Department of Information and Computer Science, P.O. Box 5400, FI-02015 TKK, Finland
[email protected], [email protected], {Keijo.Heljanko,Ilkka.Niemela}@tkk.fi
Abstract. This paper describes an interface specification language designed in the LIME project (LIME ISL) and the supporting runtime monitoring tool. The interface specification language is tailored for the Java programming language and supports two kinds of specifications: (i) call specifications that specify requirements for the allowed call sequences to a Java object instance and (ii) return specifications that specify the allowed behaviors of the Java object instance. Both the call and return specifications can be expressed with Java annotations in several different ways: as past time LTL formulas, as (safety) future LTL formulas, as regular expressions, and as nondeterministic finite automata. We also describe the supporting LIME interface monitoring tool which is an open source implementation of runtime monitoring for the interface specifications implemented using AspectJ.
1 Introduction
The interface specification language (LIME ISL) developed in the LIME project (http://lime.abo.fi/) is a lightweight formal method for defining behavioral interfaces of Java objects. The approach is supported by an open source implementation of a runtime monitoring tool that automatically generates AspectJ [1] aspects to monitor that given interface specifications are not violated. The aim of the LIME ISL is to enable a convenient way of specifying behavioral aspects of interfaces in a manner that can be efficiently supported by tools. The aim is to extend the design by contract [2] approach to software development, supported by approaches such as the Java Modeling Language (JML) [3], to behavioral aspects of interfaces. The idea is to divide the component interface into two parts in an assume/guarantee fashion: (i) call specifications (component environment assumptions) that specify requirements for
Work financially supported by Tekes - Finnish Funding Agency for Technology and Innovation, Conformiq Software, Elektrobit, Nokia, Space Systems Finland, Academy of Finland (projects 112016,126860,128050), and Technology Industries of Finland Centennial Foundation.
S. Bensalem and D. Peled (Eds.): RV 2009, LNCS 5779, pp. 93–100, 2009. c Springer-Verlag Berlin Heidelberg 2009
K. Kähkönen et al.
the allowed call sequences to a Java object instance and (ii) return specifications (component behavior guarantees) that specify the allowed behaviors of the Java object instance. Both the call and return specifications can be expressed as Java annotations in several different ways: as past time LTL formulas, as (safety) future LTL formulas, as regular expressions, and as nondeterministic finite automata. Our work also draws motivation from runtime monitoring tools such as MOP [4] and Java PathExplorer [5], as well as the tool of Stolz and Bodden [6], in the tool implementation techniques. However, unlike general event-based monitoring approaches, LIME interface specifications are more structured, and the approach can be seen as a more disciplined approach to specifying runtime monitors for the software system of interest. For example, in our approach each behavioral interface is divided into call specifications and return specifications (assumptions/guarantees). Now a violation of a call specification is always a violation by the caller of the interface, while a violation of a return specification is the fault of the called Java class instance. This approach also allows the closing of open systems in testing by automatically generating test stub code directly from the interface specifications. We fully agree with Jackson and Fekete [7], who state: "Formal descriptions must be lightweight; this means that software developers should not have to express everything about the system being developed, but can instead target formal reasoning at those aspects of the system that are especially risky." The LIME ISL tries to achieve this goal by allowing partial specification of behavioral interfaces, unlike model-based design approaches that usually require modelling a large part of the design in order to be genuinely useful. This is achieved by allowing partial and incremental descriptions of the interfaces that can be made richer as needed.
Another source of inspiration for the design of the LIME ISL has been the rise of standardized specification languages in the hardware design community such as IEEE 1850 - Property Specification Language (PSL) [8]. One of the key features of PSL is the inclusion of both temporal logic LTL as well as regular expressions in the specification language provided for the user. This combination of several specification methods provides a choice of a convenient notation for specifying the different properties at hand. This is one of the reasons why LIME ISL supports past time LTL formulas, (safety) future LTL formulas, regular expressions, as well as nondeterministic finite automata. The inclusion of future time LTL was also motivated by the need to directly reuse specifications from model checking in the runtime monitoring context.
2 Interface Specifications
The core idea of the LIME interface specification language is to provide a declarative mechanism for defining how different software components can interact through interfaces in a manner that can be monitored at runtime. These interactions can be specified in two ways: by call specifications (CS) which define
Fig. 1. The interaction model
how components should be used and by return specifications (RS) which define how the components should respond. If a call specification is violated, the calling component can be determined to be incorrect; respectively, if the called component does not satisfy its return specifications, it is functioning incorrectly. This interaction model between components is illustrated in Fig. 1. To get an overview of the specification language, let us consider the following example, where LIME interface specifications are written for a simple log file interface.

 1: @CallSpecifications(
 2:   regexp = { "FileUsage ::= (open(); (read() | write())*; close())*" },
 3:   valuePropositions = { "validString ::= (#entry != null)" },
 4:   pltl = { "ProperData ::= G (write() -> validString)" }
 5: )
 6: @ReturnSpecifications(
 7:   valuePropositions = {
 8:     "okLength ::= #this.length() == #pre(#this.length() + #entry.length())"
 9:   },
10:   pltl = { "ProperWrites ::= G (write() -> okLength)" }
11: )
12: public interface LogFile {
13:   public void open();
14:   public void close();
15:   public String read();
16:   public void write(String entry);
17:   public long length();
18: }
In this example, the call specifications describe the allowed call orders of the interface methods and the valid input values to the write method. The return specifications describe how the implementation of LogFile should behave when the write method is called. Call specifications are similar in spirit to JML preconditions, while return specifications are similar in spirit to JML postconditions.
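To illustrate the kind of observer such a call specification induces, here is a hand-coded sketch of a deterministic automaton for the FileUsage regular expression above. This is illustrative only; the LIME tool generates such observers automatically from the annotations, and the class and method names here are our own.

```java
// Hand-coded observer for FileUsage ::= (open(); (read() | write())*; close())*
// Sketch only: the LIME tool generates equivalent automata from the annotations.
public class FileUsageObserver {
    private enum State { CLOSED, OPEN, VIOLATED }
    private State state = State.CLOSED;

    // Feed one call event; returns false once the call specification is violated.
    public boolean event(String method) {
        switch (state) {
            case CLOSED:
                state = method.equals("open") ? State.OPEN : State.VIOLATED;
                break;
            case OPEN:
                if (method.equals("close")) state = State.CLOSED;
                else if (!method.equals("read") && !method.equals("write")) state = State.VIOLATED;
                break;
            default:
                break; // once violated, stay violated
        }
        return state != State.VIOLATED;
    }
}
```

A monitored run open(); read(); close() keeps the observer in an accepting state, while a read() on a closed file drives it to the violation state, which the tool would report as a fault of the caller.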
The main difference is that LIME ISL also allows specifying temporal aspects of an interface (behavior over several method calls), while JML concentrates on the behavior of a single call. In the LIME interface specification language, the specifications are written as annotations to Java interfaces or classes. The two main annotations are @CallSpecifications and @ReturnSpecifications. The annotations for call and return specifications consist of a set of atomic propositions and the actual specifications. Atomic propositions are used to make claims about the program execution and the state of the program. They are subdivided into three classes: value propositions, call propositions, and exception propositions. Value propositions are claims about the state of the program and the values of arguments given to the observed methods. A value proposition can be seen as a native language expression that should be free of side effects; the proposition is true if and only if the expression evaluates to true. Several reserved words give special semantics to value propositions. The keyword #this allows referencing the instance of the annotated interface (see line 8 of the example), while the keyword #result allows referencing the return value of a method. The keyword #pre[primitive type](Java expression) makes it possible to reference an entry value in return specifications after the actual method has been executed. This allows specifications that describe how some value must change during the execution of the observed method. The primitive type int is the default and therefore need not be written explicitly, as shown in line 8 of the example specification. By writing #<argument>, it is possible to reference the arguments given to an observed method (see line 3 of the example). Call propositions are claims about method execution.
A call proposition is true if and only if the method named in the proposition is currently executing (e.g., the body of open() is executing at the top of the call stack). Overloaded methods are not yet distinguished in the current version; therefore, a call proposition refers to all methods with the same name, regardless of their argument types. In line 2 of the example, the methods named in the FileUsage specification are call propositions. Exception propositions are claims about thrown exceptions. Specifically, they are propositions available in return specifications that are true if and only if the observed method threw a specific exception (e.g., a RuntimeException has been thrown by a method). The call and return specifications use these atomic propositions to describe the expected properties of the interface components, and they can be written in three complementary ways: using regular expressions, nondeterministic finite automata (NFA), or a large supported subset of Linear Temporal Logic with Past (PLTL). In LIME ISL the user does not have to explicitly define when the specifications are observed; instead, the observers see an execution trace of the program in which the observation points are implicitly defined by the call propositions used in the observed specification. We will refer to these observation points as events.
Fig. 2. An execution trace of the LogFile example
There are two types of events that differ slightly from each other: call events and return events. A call event occurs right before a call to a method that has been used as a call proposition in the corresponding call specification; the respective call specification is observed at this point of the execution trace. Return events are similarly determined by the call propositions in the corresponding return specification. There is, however, a difference in how these events are observed. In order to allow return specifications to contain value propositions that use #pre to describe values at the entry point of the called method, the implementation uses a history-variables technique to store the required values at the entry point of a call. It then uses these history variables to monitor the return specification at the return of the called method, where all value propositions are evaluated and the monitored return event happens. Figure 2 illustrates the concept of events in one possible execution trace of a system that uses the LogFile interface. The filled circles in the picture are call events and the empty circles are return events. Each observer is thus fed its own linear event sequence, and based on that sequence it can detect failing specifications during system runtime.
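The history-variables technique can be pictured with a small hand-written sketch of what the woven monitoring code does for the ProperWrites specification. The class name, field, and the inlined check are our own illustration; the actual code is generated by the tool and woven with AspectJ.

```java
// Sketch of monitoring write() with a history variable for #pre(#this.length()).
// Illustrative only: the tool weaves equivalent generated code into the program.
public class MonitoredLogFile {
    private final StringBuilder contents = new StringBuilder();

    public long length() { return contents.length(); }

    public void write(String entry) {
        long preLength = length();      // history variable stored at the call event
        contents.append(entry);         // the actual method body
        // return event: evaluate okLength using the stored history variable
        boolean okLength = length() == preLength + entry.length();
        if (!okLength) throw new AssertionError("ProperWrites violated");
    }
}
```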
3 The Runtime Monitoring Tool
The LIME Interface Monitoring Tool is our first software tool for the introduced specification language. It allows the specifications to be monitored at runtime to determine whether some component violates them. Multi-threaded programs are not supported in the current version. An architectural overview of the tool is given in Fig. 3. The monitoring tool works by reading the specification annotations from the Java source files. The specifications are then translated into deterministic finite state automata that function as observers. These automata are translated into runnable Java code, and AspectJ (http://www.eclipse.org/aspectj/) is used to weave the code into the original program being tested. This results in an instrumented runtime environment where the observers are executed at the time points discussed in the previous section. Spoon [9], the dk.brics.automaton package (http://www.brics.dk/automaton/), and SCheck [10] are adopted as third-party software. Spoon is used
Fig. 3. Architecture of the LIME interface monitoring tools
for analyzing the program, and the dk.brics.automaton package is used for the internal representation and manipulation of regular expression checkers. SCheck is used for converting future-time LTL subformulas into finite state automata. The approach of [11], using synthesized code with history variables, is applied to past-time subformulas, while for the future part the tool SCheck is used to encode informative bad prefixes [10] of future LTL formulas into minimal DFAs. Our implementation currently allows the use of past-time LTL subformulas in future-time LTL formulas, but not vice versa. More implementation details for an early version of the tool can be found in [12].
3.1 Closing Partially Implemented Systems
The call specifications can be used to automatically generate stub code that closes an open system from above. In other words, it is possible to generate a stub code implementation of the application part shown in Fig. 1 so that it uses the component that we want to test. The stub code generates test sequences for the component, and the call specifications are used to filter out violating method call sequences. The LIME runtime monitoring tool supports this idea by providing a generator that creates such stub code implementations. The generated code selects the methods to be called nondeterministically and generates random argument values. The number of method calls is limited to a test depth that can be selected by the user. To avoid reporting call specification violations caused by the stub code itself, such violations are identified as inconclusive test runs. A purely random environment is likely to generate a large number of inconclusive test runs. For this reason, the described approach is intended to be used with a testing tool based on dynamic symbolic execution, similar to jCUTE [13] and Pex [14]. This prevents the generation of multiple instances of the same test case and also allows us to generate test cases that are difficult to obtain using random testing alone. The implementation of the test case generator is work in progress.
As an example of the stub code generation, let us consider the LogFile interface again. The class TestDriver shown below has been generated by the monitoring tool. It consists of a simple loop (line 8) in which one of the methods in the LogFile interface is called. The ExceptionOverride class (line 7) is used to mark call specification violations as inconclusive test runs. The random values generated by the stub code can be replaced by input values received from the test case generator when the test generator tool is used.

 1: public class TestDriver {
 2:   public static void main( String[] args ) {
 3:     Random r = new Random();
 4:     int testDepth = 0;
 5:     FileImpl obj = new FileImpl();
 6:     java.lang.String javalangString1;
 7:     ExceptionOverride.setCallException(obj, InconclusiveException.class);
 8:     while (testDepth < 5) {
 9:       testDepth++;
10:       int i = r.nextInt(5);
11:       switch (i) {
12:         case 0: obj.length(); break;
13:         case 1: javalangString1 = RandomString.getString(r);
14:                 obj.write(javalangString1); break;
15:         case 2: obj.read(); break;
16:         case 3: obj.close(); break;
17:         case 4: obj.open(); break;
18:       }
19:     }
20:   }
21: }

4 Conclusion
We have described the LIME interface specification language and interface monitoring tool, available from: http://www.tcs.hut.fi/~ktkahkon/LIMT/. There are interesting topics for further work. The SCheck tool could be extended to allow free mixing of future and past LTL subformulas. The implementation of a test case generator that can be used with the automatically generated stub code is currently work in progress. We are also investigating how the test case generation process can be guided to achieve good interface specification coverage with a small number of test cases. Adding support for multi-threaded programs is one important topic for future work. We are also working on porting the specification language to the C programming language. Another more far reaching research direction would be to investigate interface compatibility of different interfaces along the lines of [15].
Acknowledgements. We thank our LIME research partners at Åbo Akademi University and colleagues at TKK for feedback on earlier versions of the LIME interface specification language, and the anonymous referees of RV 2009 for valuable suggestions for improving the paper.
References

1. Kiczales, G., Hilsdale, E., Hugunin, J., Kersten, M., Palm, J., Griswold, W.G.: An overview of AspectJ. In: Knudsen, J.L. (ed.) ECOOP 2001. LNCS, vol. 2072, pp. 327–353. Springer, Heidelberg (2001)
2. Meyer, B.: Applying "design by contract". IEEE Computer 25(10), 40–51 (1992)
3. Burdy, L., Cheon, Y., Cok, D., Ernst, M.D., Kiniry, J., Leavens, G.T., Leino, K.R.M., Poll, E.: An overview of JML tools and applications. Software Tools for Technology Transfer 7(3), 212–232 (2005)
4. Chen, F., Roşu, G.: MOP: An efficient and generic runtime verification framework. In: Gabriel, R.P., Bacon, D.F., Lopes, C.V., Steele Jr., G.L. (eds.) OOPSLA, pp. 569–588. ACM, New York (2007)
5. Havelund, K., Roşu, G.: An overview of the runtime verification tool Java PathExplorer. Formal Methods in System Design 24(2), 189–215 (2004)
6. Stolz, V., Bodden, E.: Temporal assertions using AspectJ. Electr. Notes Theor. Comput. Sci. 144(4), 109–124 (2006)
7. Jackson, D., Fekete, A.: Lightweight analysis of object interactions. In: Kobayashi, N., Pierce, B.C. (eds.) TACS 2001. LNCS, vol. 2215, pp. 492–513. Springer, Heidelberg (2001)
8. IEEE: IEEE Standard 1850 – Property Specification Language (PSL) (2005)
9. Pawlak, R., Noguera, C., Petitprez, N.: Spoon: Program Analysis and Transformation in Java. Research Report RR-5901, INRIA (2006)
10. Latvala, T.: Efficient model checking of safety properties. In: Ball, T., Rajamani, S.K. (eds.) SPIN 2003. LNCS, vol. 2648, pp. 74–88. Springer, Heidelberg (2003)
11. Havelund, K., Roşu, G.: Efficient monitoring of safety properties. Software Tools for Technology Transfer (STTT) 6(2), 158–173 (2004)
12. Lampinen, J.: Interface specification methods for software components. Research Report TKK-ICS-R4, Helsinki University of Technology, Department of Information and Computer Science, Espoo, Finland (June 2008)
13. Sen, K., Agha, G.: CUTE and jCUTE: Concolic unit testing and explicit path model-checking tools. In: Ball, T., Jones, R.B. (eds.) CAV 2006. LNCS, vol. 4144, pp. 419–423. Springer, Heidelberg (2006)
14. Tillmann, N., de Halleux, J.: Pex – white box test generation for .NET. In: Beckert, B., Hähnle, R. (eds.) TAP 2008. LNCS, vol. 4966, pp. 134–153. Springer, Heidelberg (2008)
15. de Alfaro, L., Henzinger, T.A.: Interface theories for component-based design. In: Henzinger, T.A., Kirsch, C.M. (eds.) EMSOFT 2001. LNCS, vol. 2211, pp. 148–165. Springer, Heidelberg (2001)
A Concurrency Testing Tool and Its Plug-Ins for Dynamic Analysis and Runtime Healing

Bohuslav Křena¹, Zdeněk Letko¹, Yarden Nir-Buchbinder², Rachel Tzoref-Brill², Shmuel Ur², and Tomáš Vojnar¹

¹ FIT, Brno University of Technology, Božetěchova 2, 61266, Brno, Czech Republic
{krena,iletko,vojnar}@fit.vutbr.cz
² IBM, Haifa Research Lab, Haifa University Campus, Haifa, 31905, Israel
{yarden,rachelt,ur}@il.ibm.com
Abstract. This paper presents a tool for concurrency testing (abbreviated as ConTest) and some of its extensions. The extensions (called plug-ins in this paper) are implemented through the listener architecture of ConTest. Two plug-ins for runtime detection of common concurrent bugs are presented—the first (Eraser+) is able to detect data races while the second (AtomRace) is able to detect not only data races but also more general bugs caused by violation of atomicity presumptions. A third plug-in presented in this paper is designed to hide bugs that made it into the field so that when problems are detected they can be circumvented. Several experiments demonstrate the capabilities of these plug-ins.
1 Introduction
Concurrent programming is very popular nowadays despite its complexity and the fact that it is error-prone. The crucial problem encountered when testing and debugging concurrent programs is the huge number of possible execution interleavings and the fact that they are selected nondeterministically at runtime. The interleaving depends—among other factors—on the underlying hardware, with the result that concurrent bugs hide until they manifest in a specific user configuration. Applications of model checking in this area are limited by the state space explosion problem, which is quite severe when considering large applications and their huge interleaving space, while static analysis tools suffer from many false alarms, and are also complicated by the need to analyse non-sequential code. In this paper, we consider testing and runtime analysis (capable of catching bugs even when they do not appear directly in the witnessed run) supported by techniques that make concurrent bugs appear with a higher probability. In particular, in Section 2, we present ConcurrentTesting (abbreviated as ConTest), a tool for testing, debugging, and measuring test coverage for concurrent Java programs, on top of which specialised plug-ins with various functionalities can be built. In Section 3, we describe two ConTest plug-ins for runtime detection of common concurrent bugs (data races and atomicity violations). In Section 4, we describe a plug-in for healing bugs that escaped to the field. To evaluate these

S. Bensalem and D. Peled (Eds.): RV 2009, LNCS 5779, pp. 101–114, 2009. © Springer-Verlag Berlin Heidelberg 2009
plug-ins, we performed several experiments. Due to space limitations, we only provide a short description of the experiments in Sections 3 and 4.
2 The ConTest Tool and Infrastructure
ConTest [4] is an advanced tool for testing, debugging, and measuring test coverage for concurrent Java programs. Its main goal is to expose concurrency-related bugs in parallel and distributed programs using random noise injection. ConTest instruments the bytecode—either off-line or at runtime during class load—and injects calls to ConTest runtime functions at selected places. These functions sometimes try to cause a thread switch or a delay (generally referred to as noise). The selected places are those whose relative order among the threads can impact the result, such as entrances to and exits from synchronised blocks, accesses to shared variables, and calls to various synchronisation primitives. Context switches and delays are attempted by calling methods such as yield(), sleep(), or wait(). The decisions are random, so that different interleavings are attempted at each run, which increases the probability that a concurrency bug will manifest. Heuristics are used to try to reveal typical bugs. No false alarms are reported, because all interleavings that occur with ConTest are legal as far as the JVM rules are concerned. ConTest itself does not know that an error occurred; this is left to the user or the test framework to discern, exactly as they do without ConTest.

ConTest is implemented as a listener architecture [12], which makes it easy to write other tools using the ConTest infrastructure. These tools are referred to as ConTest plug-ins. Among the tools that can be written as ConTest plug-ins are those for concurrency testing, analysis, verification, and healing. The ConTest listener architecture provides an API for performing actions when certain types of events happen in the program under test. The events that can be listened to include all events that ConTest instruments, as described above. Each plug-in that extends ConTest defines which event types it listens to. ConTest can run any number of different plug-ins in a single execution.
A plug-in registers to ConTest through an XML mechanism. ConTest takes care of the instrumentation and of the invocation of the plug-in code for a specific event when this event occurs at runtime. ConTest also provides various utilities that can be useful when writing plug-ins. ConTest supports partial instrumentation, i.e., it can be instructed to include (or exclude) specific program locations in the instrumentation. This can be useful, for example, when concentrating on specific bug patterns. The Noise class provides noise injection utilities, such as the makeNoise() method that performs noise according to ConTest preferences, and more specific methods that perform noise of certain types and strengths. The noise injection utility is useful not only because it spares the developer the need to implement noise injection, but also because it takes care of risks that may arise when using more complicated types of noise. For example, if the noise is implemented by sleep() or wait(), these calls can take interrupts invoked by the target program, and this may interfere with the semantics of the program.
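The effect of a noise-making call can be approximated in a few lines of plain Java. This is a sketch under assumed behaviour, not ConTest's actual implementation; ConTest's own Noise utility is configurable and, as noted above, handles the interrupt-related pitfalls of sleep()-based noise.

```java
import java.util.Random;

// Simplified random noise injection at an instrumented program location.
// Sketch only: ConTest's Noise utility is richer and interrupt-safe.
public class SimpleNoise {
    private static final Random RANDOM = new Random();

    public static void makeNoise() {
        int roll = RANDOM.nextInt(10);
        if (roll < 3) {
            Thread.yield();                      // try to cause a context switch
        } else if (roll < 5) {
            try {
                Thread.sleep(RANDOM.nextInt(5)); // short random delay
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the interrupt status
            }
        }
        // otherwise: inject no noise at this location
    }
}
```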
The ConTest API takes care of this scenario. ConTest's own noise injection can be disabled if it is not required or interferes with the plug-in function. Additional utilities include methods for retrieving lock information, efficient random number generation, utilities for safely retrieving the names and values of target program threads and objects in plug-in code, and a hash map suitable for storing target program objects.

Eclipse is a popular IDE for Java. Eclipse itself has an open architecture: tools can be implemented as plug-ins, and plug-ins can extend other plug-ins through extension points. ConTest is available both as a stand-alone tool and as an Eclipse plug-in. As an Eclipse plug-in, it defines an extension point, which allows ConTest's own plug-ins to be easily made into Eclipse plug-ins themselves.
3 Plug-Ins for Detecting Synchronisation Errors
This section describes two plug-ins that we implemented as ConTest extensions for detecting common concurrency problems at runtime, namely data races and atomicity violations.
3.1 The Eraser+ Plug-in for Data Race Detection
Our first plug-in uses a version of the well-known Eraser algorithm [13] for data race detection, slightly enhanced for the Java context (denoted Eraser+). The plug-in registers to events beforeAccess(v, loc) and afterAccess(v, loc) generated by accesses to class fields v at program locations loc, as well as to events monitorEnter(l) and monitorExit(l) generated by acquire/release operations on a lock l. The detection of races is based on the consideration that every shared variable (detected by ConTest at runtime) should be protected by a lock. Since Eraser has no way of knowing which lock protects which variable, it deduces the protection relation during execution. For each shared variable v, Eraser maintains the set C(v) of candidate locks, i.e., those locks that have protected v during the computation so far. Initially, C(v) contains all locks. At each beforeAccess(v, loc) caused by a thread t, C(v) is refined by intersecting it with the set of locks held by t; the set of locks currently held by a thread is maintained using the monitorEnter(l) and monitorExit(l) events. If C(v) becomes empty, a race condition over v is reported. To reduce false alarms and optimise the algorithm for Java, Eraser+ incorporates several improvements to the original algorithm, as described in [7]. However, despite the implemented support for join synchronisation and for variable initialisation by another thread, Eraser+ can still produce false alarms, especially when synchronisation mechanisms other than basic Java locks or join synchronisation are used.
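The core lockset refinement can be sketched as follows. This is a simplification: Eraser+ additionally tracks per-variable states to handle initialisation and join synchronisation, and the names here are our own.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified lockset refinement at the heart of the Eraser algorithm.
// Sketch only: Eraser+ adds state tracking for initialisation and joins.
public class LockSets {
    // C(v): candidate locks per shared variable; no entry yet means "all locks".
    private final Map<String, Set<String>> candidates = new HashMap<>();

    // Refine C(v) with the locks held by the accessing thread.
    // Returns false when C(v) becomes empty, i.e., a potential data race.
    public boolean beforeAccess(String v, Set<String> locksHeld) {
        Set<String> c = candidates.get(v);
        if (c == null) {
            candidates.put(v, new HashSet<>(locksHeld)); // first access: C(v) := locks_held(t)
            return !locksHeld.isEmpty();
        }
        c.retainAll(locksHeld); // C(v) := C(v) intersected with locks_held(t)
        return !c.isEmpty();
    }
}
```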
3.2 AtomRace: Detecting Data Races and Atomicity Violations
Our second plug-in uses the AtomRace algorithm [8], designed with the needs of self-healing programs (low overhead, no false alarms) in mind. AtomRace
can detect not only data races but also atomicity violations; in fact, data races are viewed by AtomRace as a special case of atomicity violations. AtomRace does not track the use of any concrete synchronisation mechanisms; instead, it concentrates solely on the consequences of their absence or incorrect use. Thus, AtomRace can deal with programs that use any kind of synchronisation (even non-standard). AtomRace may miss data races or atomicity violations; on the other hand, it does not produce any false alarms. The plug-in registers to events generated by accesses to class fields (i.e., beforeAccess(v, loc) and afterAccess(v, loc)) and by encountering method exit points (denoted methodExit(loc)). AtomRace detects data races by making each access to a shared variable v at a location loc a primitive atomic section delimited by beforeAccess(v, loc) and afterAccess(v, loc). If the execution of such a primitive atomic section is interleaved with the execution of any other atomic section over v, and at least one of the accesses is a write, a data race is reported. Of course, such primitive atomic sections are very short, and the probability of spotting a race on them is very low. Therefore, we make the execution of these atomic sections longer by inserting some noise. In addition, AtomRace can deal with more general atomic sections when appropriate. For a shared variable v, it views an atomic section as a code fragment delimited by a single entry point and possibly several end points in the control flow graph. When a thread t starts executing the atomic section at some beforeAccess(v, loc), no other thread should access v in a disallowed mode (read or write) before t reaches an end point of the atomic section at some afterAccess(v, loc') or methodExit(loc'). This way, AtomRace is able to detect atomicity violations. AtomRace can also detect non-serialisable accesses in the sense of [9] if atomic sections are defined over two subsequent accesses to the same variable.
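A single-variable sketch of the primitive-atomic-section overlap check is given below. It is illustrative only: the actual plug-in works per thread, supports the longer atomic sections and method exit points described above, and is driven by instrumented events rather than explicit calls.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified AtomRace-style overlap check for primitive atomic sections.
// Sketch only: the real plug-in handles per-thread sections and method exits.
public class OverlapDetector {
    private static final class Access {
        final boolean write;
        Access(boolean write) { this.write = write; }
    }
    private final Map<String, Access> active = new ConcurrentHashMap<>();

    // Called at beforeAccess(v, loc); false signals a conflicting overlap (race).
    public boolean enter(String variable, boolean write) {
        Access other = active.putIfAbsent(variable, new Access(write));
        return other == null || (!other.write && !write); // read/read overlap is fine
    }

    // Called at afterAccess(v, loc): the primitive atomic section ends.
    public void exit(String variable) {
        active.remove(variable);
    }
}
```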
When AtomRace deals with general atomic sections, it must be provided with their definition in advance, whether defined manually by the user or obtained automatically via static and/or dynamic analyses. We implemented a pattern-based static analysis that looks for typical programming constructions that programmers usually expect to be executed atomically. Occurrences of such patterns are detected in two steps. First, the PMD tool [3] is used to identify, from the abstract syntax tree of the Java code under test, the lines of code where critical patterns using certain variables appear. Then, FindBugs [1] analyses the ConTest-instrumented bytecode, and the occurrences of critical patterns detected by PMD are mapped to the variable and program location identifiers used by ConTest. Moreover, a dataflow framework implemented in FindBugs finds all possible execution paths in the control flow graph, starting from a concrete location denoting the start of an atomic section, and hence finds all possible exits of the section (including those related to exceptions). Further, another static analysis was implemented to support the detection of non-serialisable accesses introduced in [9]. FindBugs obtains the initial set of access interleaving (AI) invariants; AtomRace then removes non-relevant AI invariants from the set during testing (we assume that invariants broken when a test passes successfully are not relevant).
public static void Service(int id, int sum) {
    accounts[id].Balance += sum;    // thread safe
    BankTotal += sum;               // data race
}
Fig. 1. A problematic method in a program simulating bank accounts
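Since the += on the shared BankTotal in Fig. 1 is not executed atomically, one possible repair replaces it with an atomic read-modify-write. The class below is purely illustrative (an assumed shape, not the case study's actual code):

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative repair of the load-and-store race on the global balance.
// Assumed class shape; not the bank case study's actual code.
public class SafeBank {
    private final AtomicLong[] balances;
    private final AtomicLong bankTotal = new AtomicLong();

    public SafeBank(int accounts) {
        balances = new AtomicLong[accounts];
        for (int i = 0; i < accounts; i++) balances[i] = new AtomicLong();
    }

    public void service(int id, int sum) {
        balances[id].addAndGet(sum); // per-account balance, now safe in any case
        bankTotal.addAndGet(sum);    // atomic read-modify-write: no lost updates
    }

    public long total() { return bankTotal.get(); }
}
```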
As noted above, the atomic sections monitored by AtomRace may be too short for a conflict to be observed on them. However, we can profit from the ConTest noise injection mechanism to prolong their execution and hence increase the probability of spotting a bug, to the extent that the detection becomes particularly useful according to our experiments. We implemented three injection schemes. First, the noise may be injected into the atomic sections randomly, when no a priori knowledge on what to concentrate on is available. Second, if we have already identified suspicious code sections or suspicious variables via a previous analysis (e.g., using Eraser+ or static analysis), we may inject noise into the appropriate code sections or into sections related to the suspicious variables only. This way we significantly reduce the overhead and may confirm that a suspicion raised by an algorithm such as Eraser+ is not a false alarm.
3.3 Experiments
We evaluated the Eraser+ and AtomRace plug-ins (available at http://www.fit.vutbr.cz/research/groups/verifit/tools/racedetect/) on the four case studies listed in Table 1, with the results listed in Table 2. Below, we first describe the case studies and explain the races that we identified in them. Then, we provide experimental evidence of how our algorithms found the problems. Finally, we discuss the influence of noise injection in more detail.

Case Studies. The first case study is a program that simulates a simple bank system in which the total bank balance is accessed by several threads without proper synchronisation. The bug is related to the global balance variable BankTotal, and the problematic method is depicted in Figure 1. The Balance variable is unique for each thread simulating an account; hence, no race is possible if a correct thread id is used as a parameter of the method. The BankTotal variable is shared among all threads, and a bug following the load-and-store bug pattern [7] occurs on it. To see this, note that the += operation is broken into a sequence of three operations on the bytecode level. Thus, two threads may read the same value from BankTotal, modify it locally, and store the resulting value back to BankTotal, overwriting each other's result. Because of the data race, the final balance may be wrong. The problematic method is called many times during execution of the test case.

Our second case study is the web crawler, which is part of an older version of a major IBM software product. The crawler creates a set of threads waiting for a connection. If a connection simulated by a testing environment is established, a worker thread serves it. The method that causes problems in this case is
public void finish() {
    if (connection != null)
        connection.setStopFlag();    // data race
    if (workerThread != null)
        workerThread.interrupt();
}

Fig. 2. A problematic method in the IBM web crawler program
shown in Figure 2. This method is called when the crawler is being shut down. If some worker thread is just serving a connection (connection != null), it is only notified not to serve any further connections. This notification is done within the finish() method by the thread performing the shutdown process. A problem occurs if the connection variable is set to null by a worker thread (after a connection was served) between the check for null and the invocation of the setStopFlag() method. This is an occurrence of the test-and-use bug pattern [7], and such a situation causes an unhandled NullPointerException. Contrary to the previous race example, this race shows up only very rarely.

The third case study is a development version of an open-source FTP server produced by Apache and mentioned in [6]. It contains several types of data races. The server works as follows. When a new client connects to the server, a new thread for serving the connection is constructed; it enters the serving loop in the run method, which is depicted in Figure 3. The close method, also depicted in Figure 3, can be run by another thread concurrently with the run method. When the close method is executed during processing of the do-while loop in the run method, the m_request, m_writer, m_reader, and m_controlSocket variables are set to null but still remain accessible from the run loop. This situation leads to an unhandled NullPointerException within the loop. The problem corresponds to the repeated test-and-use bug pattern mentioned in [7], but, in this case, more than one variable is involved. Several further occurrences of the load-and-store bug pattern are also present in the program; however, none of them were considered harmful because they only influence the values of internal statistics.

Our fourth and final case study is TIDorbJ, developed by Telefónica I+D.
It is a CORBA 2.6-compliant ORB (Object Request Broker), available as open source software running on the MORFEO Community Middleware Platform [14]. In particular, we used the basic echo concurrent test shipped with TIDOrbJ. The test starts a server process for handling incoming requests and a client process that constructs several client threads, each sending several requests to the server. The server constructs several threads that serve the requests. If there are not enough server threads available, the client threads produce a timeout exception and retry later. Using this test, we identified some harmless data races in TIDorbJ as well as some races that led to a code modification upon our reporting them to Telefónica. The most harmful and interesting races are described in the following paragraphs.

The first data race that we identified in TIDOrbJ is depicted in Figure 4. In this case, the problematic variable forwardReference can be set to null by one thread in the catch branch while another thread is about to invoke a method
A Concurrency Testing Tool and Its Plug-Ins for Dynamic Analysis
public void run() {
    ...
    // initialise m_request, m_writer, m_reader, and m_controlSocket
    ...
    do {
        String commandLine = m_reader.readLine();
        if (commandLine == null)
            break;
        ...
        m_request.parse(commandLine);
        if (!hasPermission()) {
            m_writer.send(530, "permission", null);
            continue;
        }
        service(m_request, m_writer);
    } while (!m_isConnectionClosed);
}

public void close() {
    synchronized (this) {
        if (m_isConnectionClosed)
            return;
        m_isConnectionClosed = true;
    }
    ...
    m_request = null;          // still accessible from
    m_controlSocket = null;    // the run() method above
    m_reader = null;
    m_writer = null;
}

Fig. 3. Problematic methods in the Apache FTP server
on forwardReference. An unhandled NullPointerException is caused if such a situation occurs. This race manifests only very rarely because it is triggered by an exception produced within the try–catch block.

Another data race in TIDOrbJ was identified in the IIOPProfile class on the variable m_listen_point. The variable is first tested for being null and then set to a new value within a method defined as synchronized. However, since the test for null is outside the synchronized method (and not repeated inside it), an instance of the test-and-use bug pattern from [7] appears here.

Finally, several other data races that we identified in TIDOrbJ were classified as not harmful since they do not lead to an exception. An example of these data races is an instance of the load-and-store bug pattern from the IIOPCommLayer class. The data race is on the recover_count variable, which is decreased by one each time a catch block in the sendRequest method executes due to an exception produced while sending a data element. When the recover_count variable reaches zero, the algorithm does not try to recover by resending the data. The data race can cause the recovery process to be executed more times than
class IIOPCommunicationDelegate extends CommunicationDelegate {
    ...
    public void invoke(RequestImpl request) {
        try {
            if (this.forwardReference == null) {
                ...
            } else {
                this.forwardReference.invoke(request);
            }
        } catch (org.omg.CORBA.COMM_FAILURE cf) {
            this.forwardReference = null;
            throw cf;
        }
    }
}

Fig. 4. A data race in TIDOrbJ
required by the value of the recover_count variable (hence the system does not fail, but its performance may be lowered unnecessarily).

Table 1 gives some more numerical data about the case studies under consideration. In particular, the numbers of classes and lines of code that the case studies consist of are given in columns two and three of the table. The table also gives the number of monitored atomic sections for each case study. Finally, the last two columns give the average number of threads that arose during the tests as well as the average number of monitored instances of shared variables for each case study.

Results of Experiments. Table 2 summarises the results of the tests that we performed with both Eraser+ and AtomRace. In particular, we performed five testing runs with approximately ten different noise settings for each case study. Of course, more runs with different input values and different places where noise is injected could discover more bugs, but we chose to stay with constant input values and a relatively small number of executions. The results were obtained under Sun's Java version 1.5 on a machine with 2 AMD Opteron 2220 processors at 2.8 GHz.

We see that Eraser+ produces false alarms while AtomRace does not. On the other hand, Eraser+ was able to detect data races even when the problem did not occur in a given execution. Interestingly, in repeated test runs, AtomRace managed to identify all bugs found by Eraser+, and more. The table also gives the time needed for one test run of the applications without ConTest, with ConTest but without the plug-ins, and with the Eraser+ and AtomRace plug-ins. The column marked Data Races presents the number of data races we found in the test cases during all the tests presented in this paper (some of them already known, some not).

Our further experimental results, given in Table 3, illustrate the impact of noise injection, which enforces different and still legal thread interleavings, on
Table 1. Case studies on which the Eraser+ and AtomRace plug-ins have been tested

Example       Classes   kLOC   atom. sect.   threads   inst. of shared vars.
Bank                3    0.1             2         9                      28
Web crawler        19    1.2             8        33                     320
FTP server        120     12            14       304                  23 123
TIDOrbJ          1120     84           310        49                 438 014
Table 2. Data races detected by the Eraser+ and AtomRace plug-ins (Note: Two of the Eraser+ warnings on TIDOrbJ have not yet been classified as true or false errors.)

              Time        Time             |        Eraser+              |        AtomRace
Example       appl. only  ConTest   Data   | Time    Warn.  True   False | Time    Warn.  True   False
              (sec.)      (sec.)    Races  | (sec.)         Races  Alarms| (sec.)         Races  Alarms
Bank           1.005       1.007      1    |  1.008    1      1      0   |  1.009    1      1      0
Web crawler    3.01        3.02       1    |  3.03     1      0      1   |  3.04     1      1      0
FTP server    11.04       11.42      15    | 12.67    12     12      0   | 13.58    15     15      0
TIDOrbJ        3.51        5.28       5    |  9.29    15      5      8   | 10.80     5      5      0
both of the considered algorithms. The table shows the average number of true data races reported during one execution of a test case (out of 100 executions) on a machine with 2 AMD Opteron 2220 processors at 2.8 GHz. The columns CT 100 and CT 200 show results for test cases where the ConTest noise injection was activated and noise was inserted into particular locations of the bytecode with probability 0.1 and 0.2, respectively. The columns ARV 100 and ARV 200 show results for the AtomRace variable-based noise injection, where noise was inserted into particular (primitive) atomic sections with probability 0.1 and 0.2, respectively.

As can be seen, the efficiency of Eraser+—which even detects data races not manifested during the execution—increases only a little. In the case of AtomRace, the efficiency increases to the values obtained by Eraser+ and beyond—still without any false alarms. Again, one can increase the amount of injected noise, but then the number of detected races increases only a little or even decreases. A higher amount of injected noise can also change the code coverage: e.g., when a server application is not able to serve all incoming requests, the code responsible for handling such a situation is executed and examined by the detection algorithm.

A Brief Summary. To sum up the abilities of the data race and atomicity violation detection plug-ins, the Eraser+ algorithm is able to detect data races even in many executions where the problem does not occur because, in fact, it does not detect data races but violations of a locking policy. On the other hand, it produces false alarms, and it is problematic to suppress these warnings without introducing false negatives. AtomRace, in contrast, does not suffer from false alarms but only very rarely reports data races that do not occur during the execution (e.g., when the thread is interleaved between beforeAccess(v) and the immediate access to v).
Hence, to give useful results, AtomRace needs to see more different interleavings than Eraser+. However, as our experiments indicate, this can be achieved using suitable noise injection heuristics. In the end, judging from our experiments, AtomRace seems in practice to be able to detect all bugs detected by Eraser+, and sometimes even more. Moreover, AtomRace is able to detect atomicity violations that could not be detected by Eraser+.

Table 3. The influence of noise injection on Eraser+ and AtomRace

              Data   |       Eraser+          |             AtomRace
Example       races  | no noise CT 100 CT 200 | no noise CT 100 CT 200 ARV 100 ARV 200
Bank            1    |   1       1      1     |   0.39    0.99   1       1       1
Web crawler     1    |   0       0      0     |   0       0.01   0.03    0.04    0.04
FTP server     15    |   5.70    5.88   6.05  |   3.40    5.79   5.27    5.94    6.00
TIDOrbJ         5    |   1.80    1.96   2.23  |   0.37    2.38   2.39    3.63    3.82
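The bytecode-level noise injection behind the CT columns can be pictured with a small sketch. The helper name and structure below are illustrative only (ConTest instruments the bytecode itself and its API differs in detail): a noise point inserted at a selected program location yields the processor with a configurable probability, perturbing the scheduling so that rare interleavings are exercised.

```java
import java.util.Random;

// Sketch of probabilistic noise injection (hypothetical helper, not ConTest's API).
// A call to maybeYield() would be instrumented into selected bytecode locations;
// with the given probability the running thread gives up its time slice.
public class NoiseSketch {
    private static final Random RND = new Random();

    static int yields = 0;  // counts injected noise, for illustration only

    static void maybeYield(double probability) {
        if (RND.nextDouble() < probability) {
            yields++;
            Thread.yield();  // could also be sleep(0) or a wait() with a tiny timeout
        }
    }

    public static void main(String[] args) {
        // Simulate 10 000 passes through one instrumented location at CT 100 (p = 0.1).
        for (int i = 0; i < 10_000; i++) {
            maybeYield(0.1);
        }
        System.out.println("injected noise " + yields + " times");
    }
}
```

The same mechanism serves AtomRace's variable-based noise (the ARV columns), except that the noise point sits at the start of a primitive atomic section over a monitored variable.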
4 Plug-In for Bug Healing at Runtime
When data races or atomicity violations are detected during software development, they can be corrected manually by a programmer. Some bugs, however, may (and often really do) remain in an application even when it is deployed. This motivates another ConTest plug-in that we developed, which is able to heal such bugs automatically at runtime [7]. This healing plug-in cannot remove bugs from the code, but it can prevent the code from failing on them.

Note that the healing techniques discussed in this section focus on healing bugs that may be classified as occurrences of the test-and-use and load-and-store bug patterns described as typical patterns of errors in atomicity in [7]. Bugs other than instances of these patterns cannot be healed automatically by our plug-in. An example of such a bug is the problem detected in the FTP server test case depicted in Figure 3. Fixing (or healing) such problems is not trivial, and to make it acceptable, a significant amount of information regarding the designer's intent may be needed, as discussed, e.g., in [6].

Our first method of self-healing is based on affecting the scheduler. The scheduler is affected during execution of the beforeAccess(v, loc) listener for a problematic variable v, where loc is the beginning of an atomic section defined over v. The scheduling may, for example, be affected by injecting a yield() call (alternatively, wait() or sleep() with a minimum or zero waiting/sleeping time) that causes the running thread to lose its time slice; the next time it runs, it has an entire time slice to run through the critical code. Another similar approach is to increase the priority of the critical thread. Yet another approach is to inject yield() or wait() into a thread that is trying to enter a critical section in which there is already another thread.
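The scheduler-affecting variant can be sketched as follows. The listener name and signature are illustrative (ConTest's real listener architecture differs in detail); the point is that the injected call changes only the timing, never the semantics, of the application.

```java
// Sketch of scheduler-based healing inside a beforeAccess listener
// (hypothetical listener signature, not ConTest's actual API).
public class YieldHealer {

    // Called just before an access to a problematic variable; 'startsSection'
    // is true when 'loc' is the beginning of an atomic section over it.
    static boolean beforeAccess(String variable, int loc, boolean startsSection) {
        if (startsSection) {
            // Give up the rest of the current time slice; when rescheduled, the
            // thread has a fresh slice and is likely to run through the whole
            // atomic section unpreempted. Semantics are preserved, so this
            // cannot introduce new bugs -- but the healing is only probabilistic.
            Thread.yield();
            // Alternatives mentioned in the text: sleep with a zero timeout, or
            //   Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
            return true;  // noise was injected
        }
        return false;
    }

    public static void main(String[] args) {
        // 'balance' and location 42 are made-up illustrations.
        System.out.println(beforeAccess("balance", 42, true));
    }
}
```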
Such healing approaches, of course, do not guarantee that a bug is always healed, but they at least significantly decrease the probability of a manifestation of the bug. On the other hand, such healing is safe (i.e., it cannot introduce a new bug) as it does not change the semantics of the application.
Our second self-healing method injects additional healing locks into the application. A healing lock for a variable v is acquired at beforeAccess(v, loc) whenever loc is a starting point of any atomic section related to v, and then released at afterAccess(v, loc) or methodExit(v, loc) at the loc corresponding to the end point of the entered atomic section. This approach guarantees that the detected problem cannot manifest anymore. However, introducing a new lock can lead to a deadlock, which can be even more dangerous for the application than the original problem. Moreover, frequent locking can cause a significant performance drop in some cases. One can, however, consider using some light-weight static analysis showing that adding the locks is safe (which is often the case, as there is frequently no danger of nested locking) and/or combining the healing with a deadlock avoidance method, as suggested in [11].
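The healing-lock bookkeeping can be sketched in a few lines. The helper class and names below are hypothetical, not the plug-in's actual code: one extra lock per healed variable, taken at the start of each atomic section over that variable and released at its end.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the healing-lock idea (hypothetical helper, not the plug-in's code).
public class HealingLocks {
    private static final ConcurrentHashMap<String, ReentrantLock> LOCKS =
            new ConcurrentHashMap<>();

    private static ReentrantLock lockFor(String variable) {
        return LOCKS.computeIfAbsent(variable, v -> new ReentrantLock());
    }

    // Corresponds to beforeAccess(v, loc) where loc starts an atomic section over v.
    static void enterAtomicSection(String variable) {
        lockFor(variable).lock();
    }

    // Corresponds to afterAccess(v, loc) / methodExit(v, loc) ending the section.
    static void exitAtomicSection(String variable) {
        lockFor(variable).unlock();
    }

    // Introspection helper for illustration only.
    static boolean holds(String variable) {
        ReentrantLock l = LOCKS.get(variable);
        return l != null && l.isHeldByCurrentThread();
    }

    public static void main(String[] args) {
        enterAtomicSection("recover_count");
        // ... the protected test-and-use sequence would run here ...
        exitAtomicSection("recover_count");
        System.out.println("section healed");
    }
}
```

The deadlock risk mentioned above arises exactly when such injected locks nest with each other or with the application's own locks.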
4.1 Experiments
We evaluated the healing plug-in on the same case studies as the detection plug-ins. The healing efficiency was tested using assertions (oracles) introduced into the original code of the test cases. These assertions allow one to detect whether the known bug manifested, e.g., whether a NullPointerException was thrown within the problematic block of code. Manifestation of the bugs depends on timing and on the hardware architecture used. Therefore, all tests have been done on several architectures that vary in the number of available processor cores. We used a computer with (1) one core based on an Intel Pentium 4 at 2.8 GHz (with hyperthreading), (2) two cores based on an Intel Core 2 Duo E8400, (3) four cores based on two AMD Opteron 2220s, and (4) eight cores based on two Intel Xeon 5355s.

Our experience shows that the data races described in the previous section can be divided, from the healing point of view, into two groups of problems.

Frequently Manifesting Bugs. The first group includes data races and atomicity violations that occur often during an execution. An example of such a data race is the bank account test case shown in Figure 1. We found a similar situation also in the FTP server (in a module responsible for gathering server statistics) and in TIDOrbJ (in the exception handling block shown in Figure 4). In the Bank test case, the problematic piece of code is called during each operation on accounts. The healing efficiency for this test case on computers with different numbers of cores is shown in Table 4. The results were obtained for eight account threads doing ten account operations each, without any computation in between these operations. The first column of the table shows the number of cores. The other columns describe the ratio of runs in which the problem manifested (i.e., Bank Total ends up with a wrong value) out of 6000 executions of the test for a particular setting.
The second column of the table shows how often the bug manifests without any healing. The Yield column refers to calling Thread.yield() when entering a problematic atomic section, the Prio column refers to increasing the priority of the thread entering such a section, and the YiPrio column is a combination of both of these techniques. The OTYield (OTWait) columns refer to calling Thread.yield()
Table 4. Efficiency of healing techniques in the Bank test case

Proc   Orig    Yield   Prio    YiPrio  OTYield  OTWait  NewMut
1      0.908   0.734   0.821   0.735   0.711    0.598   0
2      0.297   0.094   0.705   0.444   0.068    0.041   0
4      0.545   0.673   0.648   0.658   0.415    0.242   0
8      0.710   0.681   0.783   0.755   0.651    0.573   0
Table 5. Efficiency of healing the web crawler

Proc   Orig     Yield    Prio     YiPrio   OTYield  OTWait  NewMut
1      0        0        0        0        0        0       0
2      0        0        0        0        0        0       0
4      0.0195   0.0018   0.0018   0.0023   0.0003   0       0
8      0.0194   0.0022   0.0035   0.0035   0.0002   0       0
(or Thread.wait(0, 10), respectively) if a thread is inside a critical section when another thread wants to enter it. The NewMut column shows the results of adding healing locks. As can be seen from the table, the probability of a race manifestation depends heavily on the configuration used. In most cases, healing by affecting the scheduler cannot be used effectively to suppress such races. However, the results given in the second row show that some methods (e.g., Yield, OTYield, and OTWait) can help on some configurations. Other methods (e.g., Prio) can, on the other hand, make the problem even worse. In general, it can be said that methods based on influencing the scheduler are not suitable for healing frequently occurring bugs. On the other hand, such bugs should be easy to find during development, e.g., by testing. The last column of Table 4 shows that only additional synchronisation can heal the problem in a satisfactory way.

Rarely Manifesting Bugs. The second group of data races and atomicity violations are those that occur only very rarely. Such bugs depend on very special timing among the involved threads. An example of this kind of scenario is the web crawler test case shown in Figure 2. Table 5 shows the results of applying our healing plug-in in this test case when only 30 worker threads were used, so there were only 30 possible manifestations of the bug in one execution. In this case, healing techniques that influence the scheduler can be successfully used to avoid the bug, as shown in Table 5. The table shows the ratio of runs in which the problem manifested (meaning that a NullPointerException occurred in the problematic piece of code) out of 6000 executions of the test for a particular setting.
It can be seen that the techniques that influence threads that are about to access a variable while some other thread is inside an atomic section defined for this variable (OTYield and OTWait) provide a better healing efficiency than techniques influencing the threads that enter the problematic section first. Of course, additional synchronisation again suppresses the bug completely.
5 Related Work
For monitoring and influencing a concurrent Java program, one could consider using AspectJ as an alternative to ConTest. However, as discussed in [12], ConTest differs from aspect-oriented programming in making a clear separation between a target program and the tools, so that different tools can all target the same program without interfering with one another. Moreover, ConTest works on the bytecode level and comes with various ready-to-use methods for noise injection, test coverage measurement, etc.

Another alternative one could think of is the JVM Tool Interface (JVMTI), a popular C programming interface provided by JVM implementations for writing monitoring and development tools that inspect and control the execution of applications running in the JVM. Despite providing a rich interface, JVMTI lacks some of the interface methods that are important for developing concurrency bug detection tools. For example, it does not provide a method for the event of a thread taking a lock (it supports only the event of a contended lock, i.e., when a thread has to wait because another thread is holding the lock).

On top of ConTest, various dynamic detection algorithms for concurrency-related bugs other than Eraser+ or AtomRace could be implemented, such as [2,15,5]. Such dynamic analysis algorithms differ in their detection power (the ability to warn about unseen bugs), the number of false alarms, and their overhead—a deeper discussion of these algorithms is beyond the scope of this tool paper. We have concentrated on Eraser+ and AtomRace because of their simplicity and relatively low overhead (and, in the case of AtomRace, the absence of false alarms). Experiments with other dynamic analyses implemented in the same framework are, however, an interesting issue for future work.

Most existing works concentrate on detecting concurrency-related bugs. There are far fewer works on their self-healing.
The approach closest to our healing plug-in is probably that of ToleRace [10], which heals asymmetric races (i.e., read-write races, not write-write races) by using local copies of shared variables (which cannot be variables referring to external resources, such as files, for which a local copy cannot be created).
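The local-copy idea can be illustrated with a minimal sketch (our rendering of the general pattern, not ToleRace's implementation): a reader works on a snapshot of the shared variable, so a writer that concurrently sets it to null between the test and the use cannot cause a NullPointerException.

```java
// Sketch of tolerating a test-and-use race via a local copy
// (our illustration of the idea behind ToleRace, not its actual code).
public class LocalCopyReader {
    static volatile String shared = "payload";

    static int useShared() {
        String local = shared;   // single read: take a local copy
        if (local == null) {
            return 0;            // tolerate the race instead of crashing
        }
        return local.length();   // safe: 'local' cannot change underneath us
    }

    public static void main(String[] args) {
        System.out.println(useShared());
        shared = null;           // simulate the racing writer
        System.out.println(useShared());
    }
}
```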
6 Conclusions
We presented ConTest, a tool and infrastructure for testing concurrent programs, together with its plug-ins for detecting and healing data races and atomicity violations. In the future, we plan to improve the efficiency of the methods (to decrease the overhead and increase the ratio of bug finding), to improve the static analyses we use (e.g., for detecting occurrences of bug patterns), and to consider additional types of concurrency-related bugs.

Acknowledgement. This work is partially supported by the European Community under the Information Society Technologies (IST) programme of the 6th FP for RTD: project SHADOWS, contract IST-035157. The authors are solely
responsible for the content of this paper. It does not represent the opinion of the European Community, and the European Community is not responsible for any use that might be made of data appearing therein. This work is partially supported by the Czech Ministry of Education, Youth, and Sport under the project Security-Oriented Research in Information Technology, contract CEZ MSM 0021630528.
References

1. Ayewah, N., Pugh, W., Morgenthaler, D., Penix, J., Zhou, Y.: Using FindBugs on Production Software. In: Proc. of OOPSLA 2007. ACM, New York (2007)
2. O'Callahan, R., Choi, J.-D.: Hybrid Dynamic Data Race Detection. In: Proc. of PPoPP 2003. ACM, New York (2003)
3. Copeland, T.: PMD Applied. Centennial Books (2005)
4. Edelstein, O., Farchi, E., Nir, Y., Ratsaby, G., Ur, S.: Multithreaded Java Program Test Generation. IBM Systems Journal 41(1), 111–125 (2002)
5. Elmas, T., Qadeer, S., Tasiran, S.: Goldilocks: A Race and Transaction-aware Java Runtime. In: Proc. of PLDI 2007. ACM, New York (2007)
6. Keremoglu, M.E., Tasiran, S., Elmas, T.: A Classification of Concurrency Bugs in Java Benchmarks by Developer Intent. In: Proc. of PADTAD 2006. ACM, New York (2006)
7. Křena, B., Letko, Z., Tzoref, R., Ur, S., Vojnar, T.: Healing Data Races On-The-Fly. In: Proc. of PADTAD 2007. ACM, New York (2007)
8. Křena, B., Letko, Z., Vojnar, T.: AtomRace: Data Race and Atomicity Violation Detector and Healer. In: Proc. of PADTAD 2008. ACM, New York (2008)
9. Lu, S., Tucek, J., Qin, F., Zhou, Y.: AVIO: Detecting Atomicity Violations via Access Interleaving Invariants. In: Proc. of ASPLOS-XII. ACM, New York (2006)
10. Nagpal, R., Pattabiraman, K., Kirovski, D., Zorn, B.: ToleRace: Tolerating and Detecting Races. In: Proc. of STMCS 2007 (2007)
11. Nir-Buchbinder, Y., Tzoref, R., Ur, S.: Deadlocks: From Exhibiting to Healing. In: Leucker, M. (ed.) RV 2008. LNCS, vol. 5289, pp. 104–118. Springer, Heidelberg (2008)
12. Nir-Buchbinder, Y., Ur, S.: ConTest Listeners: A Concurrency-Oriented Infrastructure for Java Test and Heal Tools. In: Proc. of SOQUA 2007. ACM, New York (2007)
13. Savage, S., Burrows, M., Nelson, G., Sobalvarro, P., Anderson, T.: Eraser: A Dynamic Data Race Detector for Multi-threaded Programs. In: Proc. of SOSP 1997. ACM, New York (1997)
14. Soriano, J., Jimenez, M., Cantera, J., Hierro, J.: Delivering Mobile Enterprise Services on Morfeo's MC Open Source Platform. In: Proc. of MDM 2006. IEEE, Los Alamitos (2006)
15. Yu, Y., Rodeheffer, T., Chen, W.: RaceTrack: Efficient Detection of Data Race Conditions via Adaptive Tracking. SIGOPS Oper. Syst. Rev. 39(5), 221–234 (2005)
Bridging the Gap between Algebraic Specification and Object-Oriented Generic Programming

Isabel Nunes, Antónia Lopes, and Vasco T. Vasconcelos

Faculty of Sciences, University of Lisbon, Campo Grande, 1749–016 Lisboa, Portugal
{in,mal,vv}@di.fc.ul.pt
Abstract. Although generics became quite popular in mainstream object-oriented languages, and several specification languages exist that support the description of generic components, the conformance relations between object-oriented programs and formal specifications that have been established so far do not address genericity. In this paper we propose a notion of refinement mapping that allows one to define correspondences between parameterized specifications and generic Java classes. Based on such mappings, we put forward a conformance notion useful for the extension of ConGu, a tool-based approach we have been developing to support runtime conformance checking of Java programs against algebraic specifications, so that it becomes applicable to a more comprehensive range of situations, namely those that appear in the context of a typical Algorithms and Data Structures course.
1 Introduction

Many approaches have been developed for runtime checking the conformance of object-oriented programs with formal specifications. A limitation of existing approaches, including our own work [20], is the lack of support for checking generic components. Despite the popularity of generics in mainstream object-oriented languages, available approaches, such as [14,17,2,18,5,22,8], are not applicable to programs that include generic classes.

At the specification side, the description of programs that include generic elements is not a problem. In particular, in the case of algebraic specification, languages such as CASL [6] support the description of parameterized specifications as well as the definition of specifications through instantiation of parameterized ones. What is lacking is a bridge over the gap between parameterized specifications and generic classes. The conformance relations between specifications and object-oriented programs that have been established so far (e.g. [2,12,1,11,4,9,22]) only consider simple, non-parameterized specifications.

In our own previous work on runtime conformance checking between algebraic specifications and Java programs — the ConGu approach [20] — we have also considered simple specifications only. However, since generics are available in Java, the fact that generic specifications are not supported by ConGu has become a severe drawback. The ConGu tool [10] is intensively used by our undergraduate students in the context of a course on algorithms and data structures. They use the tool to analyze their implementations of ADTs with respect to specifications. Given that generics became extremely

S. Bensalem and D. Peled (Eds.): RV 2009, LNCS 5779, pp. 115–131, 2009.
© Springer-Verlag Berlin Heidelberg 2009
useful and popular in the implementation of ADTs in Java, in particular those traditionally covered in this course, it is crucial to extend ConGu to support parameterized specifications and generic classes.

The extension of the ConGu approach requires (i) finding an appropriate mechanism for expressing correspondences between parameterized specifications and generic classes, (ii) defining a more comprehensive notion of conformance between specifications and Java programs, and (iii) extending the ConGu tool, which is based on the generation of annotated classes with monitorable contracts written in JML [13], to cope with this broader notion of conformance. This paper mainly focuses on the first two aspects, considering not only generic specifications but also specifications that make use of subsorting. In the developed approach, subsorting revealed itself to be a very useful construct, fundamental to coping with the range of situations that appear in the algorithms and data structures course.

To our knowledge, our work is the first to tackle the problem of bridging the gap between algebraic specifications and generic object-oriented programming (a problem that does not arise in the context of other approaches to verification that were extended to handle generics in Java, namely [21], which is focused on proving functional properties of programs).

Our contributions are twofold. On the one hand, we propose a way of describing the modular structure of reusable libraries that use generics, such as the Java Collections Framework (generics are especially common in this context). We believe our proposal can be easily understood by Java programmers in general and our students in particular. This is because, as we will show, there exists a straightforward correspondence between the key concepts at the specification and programming levels.
On the other hand, we put forward a notion that allows one to express correspondences between specifications and Java programs, and a conformance notion that paves the way for extending the application of runtime checking to a more comprehensive range of situations, namely to APIs that use generics and to code that uses these APIs.

The remainder of the paper is organised as follows. Section 2 presents the structure of specification modules and the adopted specification language, which includes subsorting and parameterization. Then, in Section 3, we propose an interpretation of those modules in terms of Java programs and, in Section 4, we put forward a notion of conformance. The solution envisaged for extending the ConGu tool is addressed in Section 5, and Section 6 concludes the paper. We illustrate our approach to runtime conformance checking of generic Java programs against parameterized data types with two typical examples: sorted sets and (closed) intervals.
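The monitorable JML contracts mentioned in (iii) are annotations placed in special Java comments. As a flavour of the idea, here is a hand-written sketch for a monomorphic sorted set of integers (illustration only, not code generated by the ConGu tool): a domain condition of the specification becomes a precondition of the corresponding method.

```java
import java.util.TreeSet;

// Hand-written sketch of JML-style contracts (not ConGu-generated code).
// JML annotations live in comments beginning with @; a JML-aware compiler
// checks them at runtime, while plain Java simply ignores them.
public class SortedIntSet {
    private final TreeSet<Integer> elems = new TreeSet<>();

    public boolean isEmpty() { return elems.isEmpty(); }
    public boolean isIn(int e) { return elems.contains(e); }
    public void insert(int e)  { elems.add(e); }

    //@ requires !isEmpty();    // the domain condition of 'largest'
    //@ ensures isIn(\result);  // the result is an element of the set
    public int largest() {
        return elems.last();
    }
}
```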
2 Specifications and Modules

As specifications we take essentially a subset of the specifications that can be defined in CASL [6], considered a standard for algebraic specification. However, for the sake of simplicity, we adopt a different concrete syntax. This section introduces our specifications in three steps: simple specifications, specifications with subsorting, and parameterized specifications. These are the building blocks of modules, introduced at the end of the section.
specification TOTAL ORDER
sorts
  Orderable
observers
  geq: Orderable Orderable;
axioms
  E, F, G: Orderable;
  E = F if geq(E, F) and geq(F, E);
  geq(E, E);
  geq(E, F) if not geq(F, E);
  geq(E, G) if geq(E, F) and geq(F, G);
end specification
Fig. 1. A specification of a total order
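Looking ahead to the Java side, a generic type playing the role of TOTAL ORDER simply provides the geq operation. The following is a hypothetical counterpart (our naming; the actual correspondence is fixed by the refinement mappings proposed in Section 3):

```java
// Hypothetical Java counterpart of the TOTAL ORDER specification (our naming).
interface Orderable<E> {
    boolean geq(E other);  // expected to satisfy the four TOTAL ORDER axioms
}

// A sample implementation: integers with the usual ordering.
class Int implements Orderable<Int> {
    final int value;
    Int(int value) { this.value = value; }
    public boolean geq(Int other) { return value >= other.value; }
}

public class TotalOrderDemo {
    public static void main(String[] args) {
        Int two = new Int(2), five = new Int(5);
        // Spot-check some axioms: reflexivity and totality.
        System.out.println(five.geq(two));                   // geq(5, 2)
        System.out.println(two.geq(two));                    // geq(E, E)
        System.out.println(five.geq(two) || two.geq(five));  // totality
    }
}
```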
2.1 Specifications

Simple specifications are those currently supported by ConGu, described in detail in [20]. A specification defines exactly one sort and introduces a set of operations and predicates. The first argument of every operation and predicate is required to be of that sort. Operations declared with −→? may be partial, i.e., they can be interpreted by partial functions. Operations can be classified as constructors, corresponding to the usual (loose) datatype constructors; in this case, they may have no arguments. Furthermore, the language imposes some restrictions on the form of axioms, namely the separation, under the keyword domains, of the domain conditions of operations, that is, the conditions under which operations have to be defined.¹

Figure 1 presents an example of a simple specification — TOTAL ORDER. This specification is self-contained, i.e., it does not include external symbols. However, simple specifications may refer to sorts, operations and predicates defined elsewhere. For instance, to specify a total order that, additionally, has a correspondence with a set of natural numbers, we would have to refer to a sort Nat, say, defined in a different specification.

Specifications can be more complex, namely making use of subsorting. More precisely, a specification may introduce a new sort that is declared to be a subsort of one or more sorts introduced elsewhere. Following [6], this means that the values of the subsort have to be understood as a special case of those of the supersort. Figure 2 presents an example of a specification of a total order with a successor operation, achieved by defining the elements of this data type (values of sort Successorable) as special cases of elements of a total order. Similarly to CASL, an operation or predicate defined for a supersort s can receive a term of a subsort s′ < s as argument wherever an element of sort s is expected.
For instance, in our example, this justifies why the formula geq(suc(E),E) with E:Successorable is well-formed.

The third kind of specifications are parameterized specifications. They have one or more specifications as parameters, and introduce one compound sort of the form name[sort1, . . . , sortk], where sorti is the name of the sort introduced in the ith parameter of the specification. Figure 3 presents a specification of the data type Sorted Set. It
¹ These restrictions are related to the way ConGu supports runtime conformance checking, which involves the automated generation of monitorable contracts from the specified properties.
specification TOTAL ORDER WITH SUC
sorts
  Successorable < Orderable
constructors
  suc: Successorable −→ Successorable;
axioms
  E, F: Successorable;
  geq(suc(E), E);
  geq(E, suc(F)) if geq(E, F) and not (E = F);
  E = F if suc(E) = E and suc(F) = F;
end specification
Fig. 2. A specification of a total order with a successor operation

specification SORTED SET [TOTAL ORDER]
sorts
  SortedSet[Orderable]
constructors
  empty: −→ SortedSet[Orderable];
  insert: SortedSet[Orderable] Orderable −→ SortedSet[Orderable];
observers
  isEmpty: SortedSet[Orderable];
  isIn: SortedSet[Orderable] Orderable;
  largest: SortedSet[Orderable] −→? Orderable;
domains
  S: SortedSet[Orderable];
  largest(S) if not isEmpty(S);
axioms
  E, F: Orderable; S: SortedSet[Orderable];
  isEmpty(empty());
  not isEmpty(insert(S, E));
  not isIn(empty(), E);
  isIn(insert(S, E), F) iff E = F or isIn(S, F);
  largest(insert(S, E)) = E if isEmpty(S);
  largest(insert(S, E)) = E if not isEmpty(S) and geq(E, largest(S));
  largest(insert(S, E)) = largest(S) if not isEmpty(S) and not geq(E, largest(S));
  insert(insert(S, E), F) = insert(S, E) if E = F;
  insert(insert(S, E), F) = insert(insert(S, F), E);
end specification
Fig. 3. A specification of a sorted set
is an example of a parameterized specification with a single parameter — the specification TOTAL ORDER — that introduces the compound sort SortedSet[Orderable]. Parameterized specifications are usually used as a means to support reuse at the specification level, through the instantiation of their parameters with different specifications. In this work, however, we are mainly interested in parameterized specifications as a means of specifying generic data types.

Specifications defined through instantiation of parameterized specifications are not first-order, in the sense that they cannot be used as elements of specification modules (introduced below). Still, these specifications are useful as they introduce sorts and operations that can be used in other specifications. The parameterized specification INTERVAL presented in Figure 4 illustrates this situation. Intervals are defined as pairs of elements of a total order with a successor operation. The operation elements, which computes the set of elements of an interval, returns elements of sort SortedSet[Successorable]. This sort is defined by the specification SORTED SET[[TOTAL ORDER WITH SUC]] — the specification that results from the instantiation of the parameter of SORTED SET with TOTAL ORDER WITH SUC. Moreover, the specification INTERVAL uses the operation and the predicate
Bridging the Gap between Algebraic Specification
specification INTERVAL [TOTAL ORDER WITH SUC]
sorts
  Interval[Successorable]
constructors
  interval: Successorable Successorable −→? Interval[Successorable];
observers
  max: Interval[Successorable] −→ Successorable;
  min: Interval[Successorable] −→ Successorable;
  before: Interval[Successorable] Interval[Successorable];
  elements: Interval[Successorable] −→ SortedSet[Successorable];
domains
  E, F: Successorable;
  interval(E, F) if geq(F, E);
axioms
  E, F: Successorable; I, J: Interval[Successorable];
  max(interval(E, F)) = F;
  min(interval(E, F)) = E;
  before(I, J) iff geq(min(J), max(I));
  elements(I) = insert(empty(), min(I)) if max(I) = min(I);
  elements(I) = insert(elements(interval(suc(min(I)), max(I))), min(I)) if not (max(I) = min(I));
end specification
Fig. 4. A specification of an interval

insert: SortedSet[Successorable] Successorable −→ SortedSet[Successorable]
empty: −→ SortedSet[Successorable]
also defined in this specification.

Usually, specification instantiation is defined if the argument specification provides symbols corresponding to those required by the corresponding parameter, and the properties required by the parameter hold. For the sake of simplicity, we decided not to support the explicit definition of fitting morphisms, and so instantiation is restricted to situations in which this correspondence can be left implicit, being established by an obvious injection. For instance, in the example of the specification SORTED SET[[TOTAL ORDER WITH SUC]], the injection is indeed obvious as sort Successorable was defined to be a subsort of Orderable.

2.2 Specification Modules

In the previous subsection, in order to intuitively provide the meaning of the different specification constructs, we have always assumed there was a well-known context for dereferencing external symbols — the set of specifications previously presented. Specification modules provide a means for establishing this context. The meaning of external symbols is only defined when the specification is integrated in a specification module, together with other specifications that define those external symbols.

As shown before, the specification of a generic data type may involve more than two specifications. For instance, in the case of Interval, we used four: TOTAL ORDER WITH SUC, TOTAL ORDER, SORTED SET and INTERVAL. Clearly, the role of the first two specifications is different from that of the last two. Specifications SORTED SET and INTERVAL define data types that have to be implemented (they are the core of the module), while the role of the other two is simply to impose constraints over their admissible instantiations. Specification modules, defined as pairs of sets of specifications, explicitly identify the nature of each specification.
More concretely, a specification module is a pair ⟨core, param⟩ of sets of specifications s.t.:

– the intersection of core and param is empty and all sorts are different;
– the param set contains every specification used as parameter;
– both the core and param sets are closed under the supersort relation;
– all external symbols — parameters, sorts, operations and predicates — are resolved.
With the specifications introduced in the previous subsection, we can build different modules. Three examples are presented below, ranging from a very simple module TO, with a single core specification, to the module ITV, which specifies two generic data types.

TO = ⟨{TOTAL ORDER}, {}⟩
SS = ⟨{SORTED SET}, {TOTAL ORDER}⟩
ITV = ⟨{SORTED SET, INTERVAL}, {TOTAL ORDER, TOTAL ORDER WITH SUC}⟩
Specification modules play a role that, to some extent, is similar to that of architectural specifications [7] in CASL. In both cases, the intended structure of implementations is prescribed, that is to say, the implementation units that have to be developed. The difference is that architectural specifications additionally describe how units, once developed, are put together to produce the overall result. However, since the operators used for combining units in architectural specifications are not available in object-oriented languages such as Java, this aspect of architectural specifications is not useful in this context.
3 Interpreting Modules in Terms of Java Programs

The standard interpretation of algebraic specifications in terms of algebras has proved very important, namely for understanding ADTs. However, as pointed out in [3], this interpretation only provides us with an indirect connection with the abstractly described programs. For a broader use of specifications, namely in the context of runtime conformance checking, a direct connection between specifications and programs is needed.

In this paper we propose an interpretation of specification modules in terms of sets of Java classes and interfaces. This is presented in two steps. In this section, we characterise the Java programs appropriate for interpreting a specification module, taking into account the structural constraints, while in the next section we define the class of Java programs that correctly realize the specified requirements. These programs are said to conform to the specification module.

3.1 Constraints over the Structure of Programs

Let us take the perspective of a Java developer who has to implement a collection of ADTs, described in terms of a specification module. Each core specification of the module abstractly describes a Java class. This class has to be generic if the specification is parameterized. Moreover, in the presence of specifications that make use of subsorting, the induced type hierarchy must be enforced by the implementations.
More specifically, a Java program, regarded as a set C of classes and interfaces, is appropriate for interpreting a specification module ⟨core, param⟩ only if there exists a correspondence from every core specification to a Java type in C. Additionally, the following conditions also need to be fulfilled:

– the sort introduced by each simple specification in core corresponds to a non-generic type T in C;
– the generic sort introduced by each parameterized specification in core corresponds to a Java generic type in C with the same arity (i.e., the same number of parameters);
– the sort s < s′ introduced by each subsorting specification in core corresponds to a type T in C, and T is a subtype of the type that corresponds to s′.

As an example consider again the module ITV. According to the constraints just described, an implementation of this module in Java has to include two generic classes, one implementing SORTED SET and the other implementing INTERVAL.

3.2 Constraints over the Structure of Classes and Interfaces

Method signatures. The signature of a specification S introducing a sort s imposes constraints over the methods that need to be available in the corresponding Java type T. Every operation and predicate declared in specification S must correspond to a public method of T with a "matching" signature in terms of arity and return and parameter types. More precisely:

– Arity: (i) every (n+1)-ary operation or predicate corresponds to an n-ary method — this is due to the fact that the object that corresponds to the first parameter of every operation is the current object (this); (ii) every zero-ary operation corresponds to a constructor of the corresponding class.
– Return type: (i) every predicate corresponds to a boolean method; (ii) every operation with result sort s′ ≠ s corresponds to a method with return type T′, if s′ corresponds to Java type T′; (iii) every operation with result sort s corresponds to a method with any return type, void included.
– Parameter type: given a method m corresponding to operation/predicate op, the i-th parameter of m has the type corresponding to the (i+1)-th parameter sort of op.

This is similar to what we defined in [20]. However, the underlying correspondence of types, in addition to what was defined before, has to satisfy the following condition:

– the occurrence of a sort of the form s′[s1, ..., sn], defined by an instantiation of a parameterized specification, must correspond to (1) an instantiation T<T1, ..., Tn>, if the specifications defining every si belong to core; (2) a generic type T<E1, ..., En>, if the specifications defining every si belong to param.

As an example let us consider the module SS introduced before and a Java program containing the class TreeSet and the interface IOrderable (see Figure 5). If we consider that the sort SortedSet[Orderable] corresponds to the type TreeSet<E>, then it is easy to see that every operation and predicate in specification SORTED SET has
interface IOrderable<E> {
  boolean greaterEq(E e);
}

public class TreeSet<E extends IOrderable<E>> {
  public TreeSet() {...}
  public void insert(E e) {...}
  public boolean isEmpty() {...}
  public boolean isIn(E e) {...}
  public E largest() {...}
  ...
}
Fig. 5. An excerpt of a Java implementation of a sorted set
a corresponding method in the class TreeSet with a matching signature. For example, the operation insert: SortedSet[Orderable] Orderable −→ SortedSet[Orderable] corresponds to void insert(E e), the predicate isIn: SortedSet[Orderable] Orderable corresponds to boolean isIn(E e), and the operation largest: SortedSet[Orderable] −→? Orderable corresponds to E largest().

Parameters of Java generic classes. Another type of constraint imposed over the structure of classes concerns the way parameters of generic classes are bound. For every parameterized specification, the instantiation of the parameter type of the corresponding generic class has to be limited to types that have a method for every operation and predicate in the corresponding parameter specification, with a signature that matches that of the operation/predicate. More concretely, if a Java class K can be used to instantiate a given generic Java type T<E> that corresponds to a generic specification S[S′], then every operation and predicate of S′ must correspond to a method of K with a matching signature, considering that the sort s′ corresponds to type K.

Going back to the example discussed before, we see that the instantiation of the type parameter in TreeSet is limited to classes K that implement IOrderable<K> and hence, it is ensured that they have the method boolean greaterEq(K e). The signature of this method clearly matches the declaration geq: Orderable Orderable in specification SORTED SET, considering that sort Orderable corresponds to type K.

3.3 Refinement Mappings

The correspondence between specifications and Java types, as well as between operations/predicates and methods, can be described in terms of what we have called a refinement mapping. In order to support the analysis of a Java program with respect to a specification module, it is crucial that a correspondence between them be explicitly defined.
A refinement mapping from a specification module M to a set C of Java types consists of a set V of type variables equipped with a pre-order, and an injective refinement function R that maps:

– each core simple specification to a non-generic type defined by a Java class;
– each core parameterized specification to a generic type, with the same arity, defined by a Java class;
– each core specification that defines a sort s < s′, to a subtype of R(S′), where S′ is the specification defining s′;
refinement <E>
SORTED SET [TOTAL ORDER] is TreeSet<E> {
  empty: −→ SortedSet[Orderable] is TreeSet<E>();
  insert: SortedSet[Orderable] e:Orderable −→ SortedSet[Orderable] is void insert(E e);
  isEmpty: SortedSet[Orderable] is boolean isEmpty();
  isIn: SortedSet[Orderable] e:Orderable is boolean isIn(E e);
  largest: SortedSet[Orderable] −→? Orderable is E largest();
}
TOTAL ORDER is E {
  geq: Orderable e:Orderable is boolean greaterEq(E e);
}
end refinement
Fig. 6. A refinement for a sorted set
– each parameter specification to a type variable in V;
– each operation/predicate of a core specification to a method of the corresponding Java type with a matching signature;
– each operation/predicate of a parameter specification S′ to the signature of a method.

Additionally:

– if a parameter specification S′ defines a subsort of the sort defined in another parameter specification S, then it must be the case that R(S′) < R(S) holds (recall that the set V of type variables is equipped with a pre-order);
– if S is a parameterized specification with parameter S′, it must be possible to ensure that any type K that can be used to instantiate the corresponding parameter of the generic type R(S) possesses all methods op defined by R for the type variable R(S′), after appropriate renaming — the replacement of all instances of the type variable R(S′) by K (among these methods are the methods defined by R for any type variable V s.t. R(S′) < V).

In Figure 6 we present an example of a refinement mapping between the module SS and the Java types {TreeSet<E>, IOrderable<E>}, using a concrete syntax that extends the one currently supported by ConGu. In order to check that the described function indeed defines a refinement mapping, we have to confirm that the last condition above holds (the first one is vacuously true as TOTAL ORDER does not define a subsort). This can be ensured by inspecting, in the class TreeSet, whether any bounds are declared for its parameter E, and whether those bounds are consistent with the methods that were associated to parameter type E by the refinement mapping — boolean greaterEq(E e). This is indeed the case: the parameter E of TreeSet is bounded to extend IOrderable<E>, which, in turn, declares the method boolean greaterEq(E e).

At first glance, it may look strange that the definition of refinement mapping does not instead require the parameter specification TOTAL ORDER to be mapped directly to IOrderable<E>, the bound associated with E.
This would be simpler to write but it would be much too restrictive, without practical interest. In our example, this notion would only be applicable if boolean greaterEq(E e) in IOrderable<E> were replaced by boolean greaterEq(IOrderable<E> e). This would require that any two objects of any two classes that can be used to instantiate E in TreeSet can be compared, which is, clearly, much stronger than what is necessary.
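The contrast between the two bounds can be sketched in Java. This is a hedged illustration with hypothetical classes (Celsius, Weight, BoundDemo are not from the paper): with the F-bounded form E extends IOrderable<E>, each instantiating class only has to compare values of its own type, whereas typing the method over the interface itself would force any two IOrderable values to be mutually comparable.

```java
// Hypothetical sketch of the F-bounded pattern discussed above.
interface IOrderable<E> {
    boolean greaterEq(E e); // E compares only with E
}

class Celsius implements IOrderable<Celsius> {
    public final int degrees;
    public Celsius(int degrees) { this.degrees = degrees; }
    public boolean greaterEq(Celsius other) { return degrees >= other.degrees; }
}

class Weight implements IOrderable<Weight> {
    public final int grams;
    public Weight(int grams) { this.grams = grams; }
    public boolean greaterEq(Weight other) { return grams >= other.grams; }
}

public class BoundDemo {
    // K only ever meets values of its own type, as the bound requires.
    public static <K extends IOrderable<K>> K max(K a, K b) {
        return a.greaterEq(b) ? a : b;
    }
    public static void main(String[] args) {
        System.out.println(max(new Celsius(10), new Celsius(20)).degrees); // 20
        // Had the interface declared greaterEq(IOrderable<E> e) instead,
        // Celsius would be forced to accept a Weight argument as well --
        // exactly the over-restriction the text warns about.
    }
}
```

With the alternative bound, `max(new Celsius(10), new Weight(5))` would have to type-check, which is clearly undesirable.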
refinement <E, F extends E>
SORTED SET [TOTAL ORDER] is TreeSet<E> {
  empty: −→ SortedSet[Orderable] is TreeSet<E>();
  insert: SortedSet[Orderable] e:Orderable −→ SortedSet[Orderable] is void add(E e);
  isEmpty: SortedSet[Orderable] is boolean isEmpty();
  isIn: SortedSet[Orderable] e:Orderable is boolean isIn(E e);
  largest: SortedSet[Orderable] −→? Orderable is E greatest();
}
INTERVAL [TOTAL ORDER WITH SUC] is MyInterval<F> {
  interval: e1:Successorable e2:Successorable −→? Interval[Successorable] is MyInterval(F e1, F e2);
  max: Interval[Successorable] −→ Successorable is F fst();
  min: Interval[Successorable] −→ Successorable is F snd();
  before: Interval[Successorable] e:Interval[Successorable] is boolean before(MyInterval<F> e);
  elements: Interval[Successorable] −→ SortedSet[Successorable] is TreeSet<F> elems();
}
TOTAL ORDER is E {
  geq: Orderable e:Orderable is boolean greaterEq(E e);
}
TOTAL ORDER WITH SUC is F {
  suc: Successorable −→ Successorable is F suc();
}
end refinement
Fig. 7. A refinement for an interval

interface ISuccessorable<E> extends IOrderable<E> {
  E suc();
}

public class MyInterval<E extends ISuccessorable<E>> {
  public MyInterval(E e1, E e2) {...}
  public E fst() {...}
  public E snd() {...}
  public boolean before(MyInterval<E> i) {...}
  public TreeSet<E> elems() {...}
  ...
}
Fig. 8. An excerpt of a Java implementation of an interval
A more complex and interesting example, which shows the full potential of our notion of refinement mapping, is presented in Figure 7. It describes a refinement mapping between the module ITV and the Java types TreeSet<E>, IOrderable<E>, MyInterval<E> and ISuccessorable<E>. Figure 8 partially shows how the last two types were defined. Notice that, in this case, because F < E, in order to check that the described function defines a refinement mapping we have to confirm that any K that can be used to instantiate E in MyInterval possesses the methods boolean greaterEq(K e) and K suc(). This is indeed the case, as the parameter E of MyInterval is bounded to extend ISuccessorable<E>, which, in turn, declares the method E suc() and inherits from IOrderable<E> the method boolean greaterEq(E e).

In order to check the expressive power of the proposed notion, we have additionally considered a large number of examples that appear in the context of a typical Algorithms and Data Structures course.
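A concrete instantiation of the hierarchy of Figures 5 and 8 can be sketched as follows. This is a hedged example with a hypothetical class Day (not from the paper): because Day implements ISuccessorable<Day>, it offers both K suc() and boolean greaterEq(K e), so it can instantiate the parameter of MyInterval, matching the F < E constraint of the refinement.

```java
// Toy versions of the interfaces of Figures 5 and 8, plus a hypothetical
// class Day that can instantiate both generic classes.
interface IOrderable<E> { boolean greaterEq(E e); }
interface ISuccessorable<E> extends IOrderable<E> { E suc(); }

class Day implements ISuccessorable<Day> {
    public final int n;
    public Day(int n) { this.n = n; }
    public boolean greaterEq(Day d) { return n >= d.n; }
    public Day suc() { return new Day(n + 1); }
}

public class InstantiationDemo {
    // A K usable as a MyInterval parameter must offer greaterEq(K) and K suc();
    // the bound below is exactly the one declared by MyInterval in Figure 8.
    public static <K extends ISuccessorable<K>> boolean sucGeq(K k) {
        return k.suc().greaterEq(k); // the axiom geq(suc(E), E) on a sample value
    }
    public static void main(String[] args) {
        Day d = new Day(3);
        System.out.println(d.suc().n);  // 4
        System.out.println(sucGeq(d));  // true
    }
}
```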
4 Conformance between Modules and Java Programs

In the previous section, we characterised the class of Java programs that are appropriate for interpreting a specification module. In this section, we characterise the programs that are in conformity with the module, i.e., those that correctly realize the specified requirements. In what follows, we consider a set C of Java types that is appropriate for interpreting a module M with the refinement mapping R. For illustration purposes we use the refinement mapping presented in Figure 6 between the module SS and C = {TreeSet<E>, IOrderable<E>}.

Specifications introduced in Section 2 define two types of properties: axioms and domain conditions. In the first paragraphs we address these two types of properties when they are defined in the context of core specifications. Finally, in the last paragraph, we address properties of parameter specifications.

4.1 Constraints Imposed by Axioms of Core Specifications

The axioms included in a core specification S impose constraints over the behaviour of the corresponding class, R(S). More concretely, the axioms specified in a non-parameterized specification S must be fulfilled by every object of type R(S). Notice that in the case of specifications that make use of subsorting, the above condition implies that the axioms defined for values of the supersort are also fulfilled by the values of the subsort. This is a simple consequence of the type system: if an object has type T < T′, then it also has type T′. In case S is a parameterized specification, the axioms must be fulfilled by every object of any type T that can be obtained through the instantiation of R(S), which in this case is a Java generic type.

From the point of view of an object, the properties described by axioms are invariants — they should hold in all client-visible states, i.e., they must be true when control is not inside the object's methods.
In particular, they must be true at the end of each constructor's execution, and at the beginning and end of all methods [13,16]. It remains to define which object invariants are specified by axioms. Consider for instance the following two axioms, included in SORTED SET:

E, F: Orderable; S: SortedSet[Orderable];
largest(insert(S, E)) = E if isEmpty(S);
isIn(insert(S, E), F) iff E = F or isIn(S, F);
The translation of these axioms into properties of an object ts:TreeSet<K>, for some type K that implements IOrderable<K>, has to take into account that method insert is void. In what concerns the first axiom, ts has to satisfy the following property: in all client-visible states, for all k:K,

– if ts.isEmpty() holds then, immediately after the execution of ts.insert(k), the expression ts.largest().equals(k) evaluates to true.

The translation of the second axiom is more complex because it is an equivalence. In this case, the property that ts has to satisfy is the following: in all client-visible states, for all k,k':K,
– if k.equals(k') or ts.isIn(k') is true then, immediately after the execution of ts.insert(k), the expression ts.isIn(k') evaluates to true;
– if, immediately after the execution of the invocation ts.insert(k), the expression ts.isIn(k') evaluates to true, then k.equals(k') or ts.isIn(k') is true.

Space limitations prevent us from presenting the translation function induced by a refinement mapping R. For modules composed of simple specifications only, this translation was not only formally defined but also encoded in the form of a runtime conformance tool in the context of ConGu (see [19] for details). As the example shows, generics and subsorting do not raise any additional difficulty in the translation process.

4.2 Constraints Imposed by Domain Conditions of Core Specifications

The domain conditions included in a core specification S impose constraints over the behaviour of the corresponding class, R(S), as well as over all classes in C that are clients of R(S). Let φ be a domain condition of an operation op in a non-parameterized (resp. parameterized) specification S. On the one hand, the implementation of method R(op) must be such that, for every object o of type R(S) (resp. of a type that results from the instantiation of R(S)), in all client-visible states, if the property φ holds, then a call of R(op) terminates normally (i.e., no exception is raised) [16]. Notice that operations that are not declared to be partial implicitly have the domain condition true and, hence, the corresponding methods must always return normally.

On the other hand, φ defines a pre-condition for method R(op). That is to say, φ must hold at the time the method is invoked. Hence, φ defines a constraint over the behaviour of all classes in C that are clients of R(S) and use method R(op). These classes cannot call R(op) if φ does not hold. As an example, let us consider the domain condition of largest included in specification SORTED SET.
Let K be a class that implements IOrderable<K>. For any object ts:TreeSet<K> in a client-visible state in which !ts.isEmpty() holds, the call of largest must return normally. Because C does not contain any client of TreeSet, no more restrictions are imposed by this domain condition.

4.3 Constraints Imposed by Parameter Specifications

Axioms and domain conditions described in parameter specifications of a module impose constraints similar to those defined in the previous sections for core specifications. The difference lies only in the target of these constraints. The axioms included in a parameter specification S impose constraints over the behaviour of the types of C that are used to instantiate the corresponding generic type. More precisely, if S′ is a parameterized specification with parameter S, then every object of a type K used in C to instantiate the Java parameter type variable R(S) of R(S′) must fulfil the axioms of S. In the case of domain conditions, there are also constraints that apply to the clients of these classes.
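The two kinds of checks discussed above can be sketched as hand-written runtime assertions around method calls. This is a hedged illustration only (ToySortedSet and MonitorDemo are hypothetical stand-ins, not the ConGu-generated code): the axiom largest(insert(S, E)) = E if isEmpty(S) is checked right after insert returns, and the domain condition of largest is checked as a pre-condition before the call.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy integer-only stand-in for the TreeSet<E> of Figure 5.
class ToySortedSet {
    private final List<Integer> elems = new ArrayList<>();
    public boolean isEmpty() { return elems.isEmpty(); }
    public void insert(int e) {
        if (!elems.contains(e)) { elems.add(e); Collections.sort(elems); }
    }
    public int largest() { return elems.get(elems.size() - 1); } // partial
}

public class MonitorDemo {
    // Post-condition check for the axiom largest(insert(S,E)) = E if isEmpty(S).
    public static void monitoredInsert(ToySortedSet s, int e) {
        boolean wasEmpty = s.isEmpty();
        s.insert(e);
        if (wasEmpty && s.largest() != e) throw new AssertionError("axiom violated");
    }
    // Pre-condition check for the domain condition: largest(S) if not isEmpty(S).
    public static int monitoredLargest(ToySortedSet s) {
        if (s.isEmpty()) throw new IllegalStateException("domain condition violated");
        return s.largest();
    }
    public static int demo() {
        ToySortedSet s = new ToySortedSet();
        monitoredInsert(s, 7);
        return monitoredLargest(s);
    }
    public static void main(String[] args) {
        System.out.println(demo()); // 7
    }
}
```

A violation of the domain condition blames the client (the caller of monitoredLargest), whereas a violation of the axiom blames the supplier class, mirroring the distinction drawn in Section 5.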
In order to illustrate these ideas, consider a Java program C′ that, in addition to TreeSet<E> and IOrderable<E>, includes classes Date, Card and Main. Suppose also that the last class is a client of TreeSet<Date>, TreeSet<Card>, Date and Card. Because C′ is a superset of C, the mapping R is also a refinement mapping between SS and C′. The program C′ is in conformity with SS only if (1) both Date and Card behave according to the properties described in TOTAL ORDER; and (2) Main respects the domain condition defined in SORTED SET for the operation largest when invoking the corresponding method, either over an object of TreeSet<Card> or over an object of TreeSet<Date>.
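Condition (1) above can be sketched as a sample-based property check. This is a hedged illustration (SimpleDate stands in for the Date class of the text, and the specific axioms checked, totality and transitivity of geq, are typical total-order properties; the TOTAL ORDER specification itself is not reproduced in this excerpt):

```java
// Hypothetical stand-in for the Date class mentioned in the text.
class SimpleDate {
    public final int day;
    public SimpleDate(int day) { this.day = day; }
    public boolean greaterEq(SimpleDate d) { return day >= d.day; }
}

public class TotalOrderCheck {
    // Checks, over a finite sample, two properties a total order must satisfy.
    public static boolean isTotalOrderOn(SimpleDate[] sample) {
        for (SimpleDate a : sample)
            for (SimpleDate b : sample) {
                if (!a.greaterEq(b) && !b.greaterEq(a)) return false; // totality
                for (SimpleDate c : sample)                           // transitivity
                    if (a.greaterEq(b) && b.greaterEq(c) && !a.greaterEq(c))
                        return false;
            }
        return true;
    }
    public static void main(String[] args) {
        SimpleDate[] sample = { new SimpleDate(1), new SimpleDate(5), new SimpleDate(3) };
        System.out.println(isTotalOrderOn(sample)); // true
    }
}
```

ConGu performs such checks dynamically, on the values actually flowing through the program, rather than on a fixed sample as in this sketch.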
5 Extending the ConGu Tool

The ConGu tool [10] supports the runtime checking of Java classes against algebraic specifications. The inputs to the tool are specification modules and refinement mappings and, of course, Java programs, in the form of bytecode. Running a program under ConGu may produce exceptions due to domain condition violations or to axiom violations. The first case is a manifestation of an ill-behaved client, i.e., a client that invokes a method in a situation in which the method should not be invoked. The second case is a manifestation of a faulty supplier class: one of the classes under test is failing to ensure at least one of the specified properties.

The approach to runtime checking used in ConGu involves replacing the original classes and generating further classes annotated with monitorable contracts, presently written in JML [13]. The generated pre- and post-conditions make it possible to check whether the constraints imposed by axioms and domain conditions hold at specific execution points. The approach is applicable even if the specified operations are implemented as void methods or methods with side effects. Roughly speaking, this is achieved as follows. For every specification S in the module, considering that R(S) is the class MyT:

– Rename bytecode MyT to MyT_Original;
– Generate a static class MyT_Contract annotated with contracts automatically generated from the axioms and domain conditions specified in S. This class is a sort of functional version of the original class MyT. For instance, if class MyT_Original has the method void m(...), then class MyT_Contract has a static method MyT_Original m(MyT_Original o, ...) that starts by executing co = o.clone() and then returns co after the execution of co.m(...). Notice that the usage of clones in all methods of MyT_Contract ensures that the methods are pure (i.e., without side effects) and, hence, can be used in contracts.
This step also generates classes to describe state-value pairs and ranges to be used in forall contracts;
– Generate a proxy class MyT with exactly the same interface as the original one. This is a wrapper class in the sense that it intercepts all method invocations coming from clients of the original class and forwards these invocations to the corresponding method of MyT_Contract. In this way, contracts corresponding to axioms and domain conditions are monitored.

In the rest of this section, we briefly discuss how this approach can be extended in order to support the more comprehensive notion of conformance between specification modules and Java programs presented in the previous section.
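The three-class scheme just described can be sketched as follows. This is a hedged, minimal illustration (a toy counter rather than a real ADT; the contract annotations themselves are elided, and the bodies are ours, not ConGu's generated code):

```java
// The original class, renamed; it has a void, side-effecting method.
class MyT_Original implements Cloneable {
    private int value;
    public void m() { value++; }
    public int get() { return value; }
    @Override public MyT_Original clone() {
        try { return (MyT_Original) super.clone(); }
        catch (CloneNotSupportedException e) { throw new AssertionError(e); }
    }
}

// "Functional version": static methods operate on clones, so they are pure
// and can appear inside contracts.
class MyT_Contract {
    public static MyT_Original m(MyT_Original o) {
        MyT_Original co = o.clone();
        co.m();
        return co;
    }
}

// Proxy with the original interface: intercepts client calls and routes them
// through the contract class, where monitoring would take place.
public class MyT {
    private MyT_Original state = new MyT_Original();
    public void m() {
        // ...domain-condition (pre-condition) checks would run here...
        state = MyT_Contract.m(state);
        // ...axiom (post-condition) checks would run here...
    }
    public int get() { return state.get(); }
    public static int demo() {
        MyT t = new MyT();
        t.m();
        t.m();
        return t.get();
    }
    public static void main(String[] args) {
        System.out.println(demo()); // 2
    }
}
```

Because MyT_Contract.m never mutates its argument, it can safely be referenced from pre- and post-conditions without perturbing the monitored object.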
From the point of view of its structure, there are two main extensions that ConGu must undergo in order to deal with generic classes. One implies the creation of contract classes for every Java type declared as the upper bound of parameters to generic classes. For instance, in the implementation of the SORTED SET specification, in which we have the IOrderable<E> interface as the upper bound for the parameter of the generic class TreeSet<E>, ConGu must create a class IOrderable_Contract. In this class, the method greaterEq(IOrderable e1, IOrderable e2) uses dynamic dispatching to invoke the correct greaterEq method over a clone of the e1 argument. Every contract that needs to invoke the greaterEq method over some element of a TreeSet (for example, the post-condition for the insert method, in which two elements of the TreeSet must be compared) must do so using the IOrderable_Contract class.

Central to the implementation of ConGu is the ability to clone objects for contract monitoring purposes. ConGu distinguishes Cloneable from non-Cloneable types, cloning references of the former kind and calling contracts directly on references of the latter. Java types declared as upper bounds of parameters to generic classes must be declared Cloneable in order to make it possible to create clones of their subtypes. This is because all objects of classes that implement some upper bound T in the context of a generic class instantiation are statically used as T objects in contract classes. For example, the IOrderable interface in Figure 5 must extend the marker interface Cloneable (and announce the signature of method clone) if any of its implementations turns out to be mutable.²

Presently, ConGu generates JML contracts. At the time of this writing, JML does not support generic types in source code, hence ConGu generates only non-generic (Java 1.4) code. In order to continue using the existing framework, the generated JML-related classes must be non-generic.
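The IOrderable_Contract idea described above can be sketched as follows. This is a hedged, non-generic illustration in the spirit of the text (the class Level is hypothetical, and the body is ours): a static helper invokes greaterEq through dynamic dispatch on a clone of its first argument, so contracts never mutate the monitored objects.

```java
// Non-generic interface, as the JML constraint discussed above requires;
// upper bounds must expose clone, as the text notes.
interface IOrderable extends Cloneable {
    boolean greaterEq(IOrderable e);
    IOrderable clone();
}

class Level implements IOrderable {
    public final int n;
    public Level(int n) { this.n = n; }
    public boolean greaterEq(IOrderable e) { return n >= ((Level) e).n; }
    public Level clone() { return new Level(n); } // immutable: could return this
}

public class IOrderable_Contract {
    // Dispatches to the dynamic type of e1, on a clone, so the comparison
    // cannot perturb the state being monitored.
    public static boolean greaterEq(IOrderable e1, IOrderable e2) {
        return e1.clone().greaterEq(e2);
    }
    public static void main(String[] args) {
        System.out.println(greaterEq(new Level(2), new Level(1))); // true
    }
}
```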
Specifically, we generate non-generic contract classes (TreeSet_Contract and IOrderable_Contract), together with the remaining auxiliary (pair and range) classes. JML then compiles these classes relying on the bytecode of the program to be monitored (whose source code may contain generics that are not reflected in the bytecode) to resolve dependencies.

The other main extension concerns the classes used as arguments to generic classes. In order to enforce the constraints imposed by the parameter specifications of a module over a Java program C, ConGu must first be capable of determining, for each generic specification S and each of its parameter specifications S′, the set T(S, S′) of types that are used in the program to instantiate the corresponding parameter of R(S) (for instance, in the program considered in the previous section, T(SORTED SET, TOTAL ORDER) consists of the classes Date and Card). Then, we can apply the methodology described before, and generate a proxy for each type in this set, each referring to the same contract class capturing the requirements specified in S′ (e.g., IOrderable_Contract).

The problem of determining the set T(S, S′) can only be completely solved if the source code of the Java program is available, for such information is lost in the bytecode generated by the Java compiler (through bytecode inspection one can compute the set of subtypes of the upper bound of R(S′) found in C, which is a superset of the required
² One may wonder whether forcing all subtypes of a parameter type to be of the same kind (cloneable or non-cloneable) is over-restrictive. We note, however, that immutable classes can implement method clone by simply returning this, if so desired.
Bridging the Gap between Algebraic Specification
T(S, S′)). In the absence of the source code, a possible solution is to ask the user to explicitly identify the classes that ConGu should monitor for each parameter. The sets T(S, S′) may include classes that do not correctly implement S′. If this is the case for some class A, then the execution of program C under ConGu may behave in two different ways, depending on the nature of the violation. It may produce an exception signalling the violation of an axiom of S′ in A, or it may as well signal the violation of an axiom of a generic specification that has S′ as a parameter. This problem of imprecise blame assignment can be mitigated if, prior to executing C, we execute some simpler programs under ConGu that do not involve clients of the generic classes but, instead, involve direct clients of the classes in T(S, S′). In this way, only the classes that correctly implement S′ would be used to test the generic implementations R(S) of generic specifications S that have S′ as a parameter. To some extent, this problem is related to the problem of testing generic units independently of particular instantiations addressed in [15]. There, given that the set of all possible instantiations is almost always infinite, it is suggested that a representative finite class of possible parameter units be selected.
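The suggested mitigation, exercising direct clients of the candidate parameter classes before testing the generic implementation, can be sketched as follows (Python, illustrative only; the axiom set and class names are assumptions, not ConGu's actual output):

```python
def total_order_axioms_hold(samples):
    """Checks TOTAL_ORDER-style axioms (totality and transitivity)
    directly on sample objects of a candidate parameter class."""
    for a in samples:
        for b in samples:
            if not (a.greater_eq(b) or b.greater_eq(a)):
                return False                      # totality violated
            for c in samples:
                if a.greater_eq(b) and b.greater_eq(c) \
                        and not a.greater_eq(c):
                    return False                  # transitivity violated
    return True

class Date:
    def __init__(self, d): self.d = d
    def greater_eq(self, other): return self.d >= other.d

class BrokenDate(Date):
    # Strict comparison: not reflexive, so totality fails on equal values.
    def greater_eq(self, other): return self.d > other.d

print(total_order_axioms_hold([Date(1), Date(2), Date(2)]))     # True
print(total_order_axioms_hold([BrokenDate(1), BrokenDate(2)]))  # False
```

Only classes that pass such direct checks would then be used to instantiate the generic implementation, so a later contract violation can be blamed on the generic class rather than on its parameters.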
6 Conclusion and Further Work

The work presented in this paper contributes to bridging the gap between parameterized specifications and Java generic classes, in the context of checking the conformance of the latter against the former. By evolving the ConGu approach to cope with more sophisticated specifications — involving parameters and subsorting — it becomes applicable to a more comprehensive range of situations, namely those that appear in the context of a typical Algorithms and Data Structures course and in the context of reusable libraries and frameworks. Algebraic specifications are known to be especially appropriate for describing software components that implement data abstractions. These components play an important role in software development and are, today, an effective and popular way of supporting reuse. Many ready-to-use components produced for libraries and frameworks that support reuse of common data abstractions are generic. Given that generics are known to be difficult to grasp, obtaining correct implementations becomes more challenging and, hence, automatic support for the detection of errors becomes more relevant. We presented a specific interpretation of specification modules in terms of sets of Java classes and interfaces. This is useful if we take the perspective of a Java developer who has to implement a collection of units described in terms of a specification module. The interpretation rigorously defines what a correct Java implementation of the module is and paves the way to the development of tools that support the runtime conformance checking of Java programs. Using these tools increases confidence in the source code and facilitates component-based implementation. The ConGu tool already supports checking Java classes against simple specifications. A solution for extending it to cope with specifications using parameters and subsorting was also presented.
We are currently implementing the sketched extension to the ConGu tool. As for future work, in order to increase the effectiveness of the monitoring process, while keeping
it completely automated, we also intend to develop testing techniques based on the specifications. Our aim is the automatic generation of unit and integration tests that make use of runtime conformance checking. A black-box approach to the reliability analysis of Java generic components brings the challenge of not having any a priori knowledge of which Java types are to be used to instantiate parameters. Therefore, tests are harder to generate, because test data must be generated for types that are unknown at testing time. Existing methods and techniques to automatically generate test suites from property-driven specifications cannot be directly applied, and coverage criteria and test-case generation strategies in this new context also need to be investigated.
References

1. Antoy, S., Gannon, J.: Using term rewriting to verify software. IEEE Transactions on Software Engineering 20(4), 259–274 (1994)
2. Antoy, S., Hamlet, R.: Automatically checking an implementation against its formal specification. IEEE Transactions on Software Engineering 26(1), 55–69 (2000)
3. Aspinall, D., Sannella, D.: From specifications to code in CASL. In: Kirchner, H., Ringeissen, C. (eds.) AMAST 2002. LNCS, vol. 2422, pp. 1–14. Springer, Heidelberg (2002)
4. Barnett, M., Schulte, W.: Spying on components: A runtime verification technique. In: Proc. Workshop on Specification and Verification of Component-Based Systems (2001)
5. Barnett, M., Schulte, W.: Runtime verification of .NET contracts. Journal of Systems and Software 65(3), 199–208 (2003)
6. Bidoit, M., Mosses, P.: CASL User Manual. LNCS, vol. 2900. Springer, Heidelberg (2004)
7. Bidoit, M., Sannella, D., Tarlecki, A.: Architectural specifications in CASL. In: Haeberer, A.M. (ed.) AMAST 1998. LNCS, vol. 1548, pp. 341–357. Springer, Heidelberg (1998)
8. Chen, F., Roșu, G.: Java-MOP: A monitoring oriented programming environment for Java. In: Halbwachs, N., Zuck, L.D. (eds.) TACAS 2005. LNCS, vol. 3440, pp. 546–550. Springer, Heidelberg (2005)
9. Chen, F., Tillmann, N., Schulte, W.: Discovering specifications. Technical Report MSR-TR-2005-146, Microsoft Research (2005)
10. Contract based system development, http://gloss.di.fc.ul.pt/congu/
11. Edwards, S., Shakir, G., Sitaraman, M., Weide, B., Hollingsworth, J.: A framework for detecting interface violations in component-based software. In: Proc. International Conference on Software Reuse (ICSR 1998), pp. 46–55 (1998)
12. Henkel, J., Reichenbach, C., Diwan, A.: Discovering documentation for Java container classes. IEEE Transactions on Software Engineering 33(8), 526–543 (2007)
13. Leavens, G., Cheon, Y.: Design by contract with JML (2006), http://www.eecs.ucf.edu/~leavens/JML/jmldbc.pdf
14. Leavens, G.T., Cheon, Y., Clifton, C., Ruby, C., Cok, D.R.: How the design of JML accommodates both runtime assertion checking and formal verification. Science of Computer Programming 55(1–3), 185–208 (2005)
15. Machado, P.L., Sannella, D.: Unit testing for CASL architectural specifications. In: Diks, K., Rytter, W. (eds.) MFCS 2002. LNCS, vol. 2420, pp. 506–518. Springer, Heidelberg (2002)
16. Meyer, B.: Object-Oriented Software Construction, 2nd edn. Prentice-Hall PTR, Englewood Cliffs (1997)
17. Meyer, B.: Eiffel as a framework for verification. In: Meyer, B., Woodcock, J. (eds.) VSTTE 2005. LNCS, vol. 4171, pp. 301–307. Springer, Heidelberg (2008)
18. Nikolik, B., Hamlet, D.: Practical ultra-reliability for abstract data types. Software Testing, Verification and Reliability 17(3), 183–203 (2007)
19. Nunes, I., Lopes, A., Vasconcelos, V., Abreu, J., Reis, L.: Testing implementations of algebraic specifications with design-by-contract tools. DI/FCUL TR 05–22 (2005)
20. Nunes, I., Lopes, A., Vasconcelos, V.T., Abreu, J., Reis, L.S.: Checking the conformance of Java classes against algebraic specifications. In: Liu, Z., He, J. (eds.) ICFEM 2006. LNCS, vol. 4260, pp. 494–513. Springer, Heidelberg (2006)
21. Stenzel, K., Grandy, H., Reif, W.: Verification of Java programs with generics. In: Meseguer, J., Roșu, G. (eds.) AMAST 2008. LNCS, vol. 5140, pp. 315–329. Springer, Heidelberg (2008)
22. Yu, B., King, L., Zhu, H., Zhou, B.: Testing Java components based on algebraic specifications. In: Proc. International Conference on Software Testing, Verification and Validation, pp. 190–198. IEEE, Los Alamitos (2008)
Runtime Verification of C Memory Safety

Grigore Roșu¹, Wolfram Schulte², and Traian Florin Șerbănuță¹

¹ University of Illinois at Urbana-Champaign
² Microsoft Research
[email protected], [email protected], [email protected]
Abstract. C is the most widely used imperative systems implementation language. While C provides types and high-level abstractions, its design goal has been to provide the highest performance, which often requires low-level access to memory. As a consequence, C supports arbitrary pointer arithmetic, casting, and explicit allocation and deallocation. These operations are difficult to use correctly, resulting in programs that often have software bugs, like buffer overflows and dangling pointers, that cause security vulnerabilities. We say a C program is memory safe if at runtime it never goes wrong with such a memory access error. Based on standards for writing "good" C code, this paper proposes strong memory safety as the least restrictive formal definition of memory safety amenable to runtime verification. We show that although verification of memory safety is in general undecidable, even when restricted to closed, terminating programs, runtime verification of strong memory safety is a decision procedure for this class of programs. We verify strong memory safety of a program by executing the program using a symbolic, deterministic definition of the dynamic semantics. A prototype implementation of these ideas shows the feasibility of this approach.
1 Introduction
Memory safety is a crucial and desirable property for any piece of software. Its absence is a major source of software bugs, which can lead to abrupt termination of software execution but can also, sometimes even more dangerously, be turned into a malicious tool: most of the recent security vulnerabilities are due to memory safety violations. Nevertheless, most existing software applications, and especially performance-critical applications, are written in low-level programming languages such as C, which offer performance at the expense of safety. Due to C's support of pointer arithmetic, casting, and explicit allocation and deallocation, C program executions can exhibit memory safety violations ranging from buffer overflows, to memory leaks, to dangling pointers. An important research question is thus the following: given a program written in an unsafe programming language like C, how can one guarantee that any execution of this program is memory-safe? Many different approaches and tools were developed to address this problem. For instance, CCured [1] uses pointer annotations and analyzes the source of
the program, trying to prove it memory safe, introducing runtime checks in the code to monitor at runtime the parts which cannot be proven; Purify [2] and Valgrind [3] execute the program in a "debugging" mode, adding metadata to the program pointers to guarantee a proper separation of the allocation zones, and use that metadata to monitor and detect anomalies at runtime; DieHard [4] and Exterminator [5] replace the standard allocation routines by ones using randomization, which enables the detection of errors with high probability, attempting to correct the errors on-the-fly. However, most of these tools arise from ad-hoc observations and practical experience, without formally defining what it means for a program to be memory safe. This paper makes a first step towards bridging this gap, by introducing a formal definition of memory safety for programs written in a non-memory-safe programming language and execution platform. The language and platform chosen to this aim is KernelC, a formal definition of the executable semantics of a fragment of the C language including memory allocation/freeing routines. KernelC only supports one type, namely mathematical integers, and each KernelC location can hold exactly one integer. Nevertheless, one can write many interesting pieces of C code in KernelC. Here are some that we will refer to in the paper: ALLOCATE allocates a linked list of 5 nodes, in reversed order, each node having two contiguous locations, one holding a value and the other a pointer to the next node; REVERSE reverses a list of nodes as above that starts at p; and DEALLOCATE frees a list starting with p.

ALLOCATE
    n = 0;
    p = null;
    while(n != 5) {
      q = malloc(2);
      *q = n;
      *(q+1) = p;
      p = q;
      n = n+1;
    }

REVERSE
    if(p != null) {
      x = *(p+1);
      *(p+1) = null;
      while(x != null) {
        y = *(x+1);
        *(x+1) = p;
        p = x;
        x = y;
      }
    }

DEALLOCATE
    while(p != null) {
      q = *(p+1);
      free(p);
      p = q;
    }

Informally, memory safety means that the program cannot access a memory location which it should not (e.g., exceeding array boundaries, addressing unallocated memory, and so on). For example, consider the program ALLOCATE', obtained from ALLOCATE by removing the second statement, i.e., p = null;. Then, any of the composed programs ALLOCATE' DEALLOCATE or ALLOCATE' REVERSE is not memory safe, since the list can potentially be non-null-terminated, which would lead to an attempt of accessing non-allocated memory upon deallocating/reversing the list. On the other hand, a compiler might initialize all local variables with 0 (which in C corresponds to null); if so, our example program would have no memory access error and would terminate. The principal source of C's non-determinism comes from the under-specification of C's memory allocator, which implements the malloc and free functions. The C language specification guarantees that a call to malloc(n) will, if it succeeds, return a pointer to a region of n contiguous and previously unallocated locations. These locations now become allocated. When these locations are no longer needed, the pointer, which was returned from a malloc call, is passed to free, which deallocates the
memory. The C language specification provides no guarantees except for the fact that malloc returns unallocated locations; free might deallocate the memory or not. To cope with this non-determinism, memory safety of KernelC programs is defined as a global property on the entire set of executions of a program, derivable using the KernelC definition. We say: a KernelC program is memory safe if none of its possible executions gets stuck in a non-final state. One might expect that verification of memory safety would be decidable for terminating programs – after all, we have so many checkers addressing the problem. However, we show that memory safety is undecidable even for closed, terminating programs. The argument for undecidability comes from the rather unusual usage of memory allocation, that is, using memory allocation as a source of non-determinism in the execution, as in the examples in Fig. 1: INPUT presents a simulation of non-deterministic input, and CHOICE shows how one can model non-deterministic choice.

INPUT
    n = malloc(1);
    while (n != 1)
      if (n % 2) n = 3*n+1;

CHOICE
    x = malloc(1);
    y = malloc(1);
    if (x

Based on the fact that standards for writing "good" C code [6] advise against taking advantage of this kind
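The loop body of INPUT is cut off by a page break above; the Python sketch below (illustrative only) assumes the standard Collatz step, n = n/2, on the even branch. Deciding whether the loop terminates for every value the allocator could return is then exactly the open Collatz problem, which is the source of the undecidability:

```python
def input_program(malloc_result):
    """INPUT, with malloc(1) replaced by a value the allocator might
    have returned; counts the loop iterations until n reaches 1."""
    n, steps = malloc_result, 0
    while n != 1:
        # Odd branch is from the text; even branch (n // 2) is assumed.
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Terminates for these allocator choices, but no algorithm is known that
# decides termination for all positive choices (the Collatz problem).
print([input_program(m) for m in (1, 6, 27)])  # [0, 8, 111]
```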
defines memory safety, and shows that, although we can monitor memory safety along any execution path, proving memory safety is generally undecidable. Section 4 introduces strong memory safety as a reasonable and decidable restriction of memory safety and shows that runtime verification of strong memory safety yields a sound technique for verifying memory safety. Section 5 concludes.
2 Formal Semantics of KernelC
We here discuss the definition of KernelC using K [7], a technique for defining languages within rewriting logic semantics [8,9]. Within this framework, a language L is defined as a rewrite theory (ΣL, EL, RL), where ΣL is a signature extending the syntax of L, EL is a set of ΣL-equations, which are thought of as structural rearrangements preparing the context for rules and carrying no computational meaning, while RL is a set of ΣL-rules, used to model irreversible computational steps. Since our base logic, rewriting logic [10], is a conservative extension of equational logic, terms can be replaced by equal terms in any context and in any direction. We write R ⊢ t = t′ whenever t can be proved equal to t′ using equational deduction with the equations in R. Like in term rewriting, rules can be applied in any context, but only from left to right. One way to think of rewriting logic is that equations apply until the term is matched by the left-hand side (lhs) of some rule, which then irreversibly transforms the term. We write R ⊢ t → t′ when t can be rewritten, using arbitrarily many equational steps but only one rewrite step in R, into t′. Also, we write R ⊢ t →∗ t′ when t can be rewritten, using the equations and rules in R, into t′. Rewriting logic thus captures rewriting modulo equations into a logic, with good mathematical properties (loose and initial models, complete deduction, proofs = computations, etc.). It is simple to understand and efficiently executable. K is a modular definitional framework: rules match only what they need from the configuration, so one can change the configuration (e.g., adding store, input/output, stacks, etc.) without having to revisit existing rules.

Sequences, bags and maps. Sequences, bags and maps are core to K language definitions and are defined as standard (equational) data structures. We use the notations Seq_u^@[S] for sequences and Bag_u^@[S] for bags, respectively, where u is their unit and @ is their binary construct.
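The division of labor between reversible equations and one-way rules can be illustrated with a toy rewriter (Python, purely illustrative and much simpler than rewriting logic proper): an equational normalization reorders the arguments of a commutative +, and only then a left-to-right rule 0 + x → x is applied.

```python
def eq_normalize(term):
    """Equational step (reversible): order the arguments of +, placing
    integer constants first, so that rule matching effectively happens
    modulo commutativity."""
    if isinstance(term, tuple) and term[0] == "+":
        _, a, b = term
        a, b = eq_normalize(a), eq_normalize(b)
        if not isinstance(a, int) and isinstance(b, int):
            a, b = b, a
        return ("+", a, b)
    return term

def rule_step(term):
    """Rewrite rule (one-way): 0 + x -> x, applied at the first match."""
    if isinstance(term, tuple) and term[0] == "+":
        _, a, b = term
        if a == 0:
            return b, True
        a2, changed = rule_step(a)
        if changed:
            return ("+", a2, b), True
        b2, changed = rule_step(b)
        return ("+", a, b2), changed
    return term, False

def rewrite(term):
    """Normalize with equations, then take one rule step; repeat."""
    changed = True
    while changed:
        term, changed = rule_step(eq_normalize(term))
    return eq_normalize(term)

print(rewrite(("+", "x", ("+", 0, "y"))))  # ('+', 'x', 'y')
```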
Formally, if added for sort S′, these correspond to adding the subsorting S < S′ (i.e., a production S′ ::= S, not needed when S′ = S), operations u : → S′ (a constant) and _@_ : S′ × S′ → S′, and appropriate unit and associativity equations for sequences, and unit, commutativity and associativity equations for bags. For example, an environment is a finite bag of pairs, ρ[X] retrieves the Int associated to the Id X in ρ, ρ[X ← J] updates the Int corresponding to X in ρ to J, and ρ\X removes the pair X → _ from ρ (if there is any). One can also define, in the same style, an operation Dom giving the domain of a map as a bag of elements, as well as an operation checking whether the map term is indeed a partial function. These operations are easy to define algebraically and therefore we assume them from here on; in fact, we assume that each map that
occurs in an equation or rule is a well-formed map (e.g., the map σ σ′ in the rules for malloc and free in Fig. 2). In general, Map_u^@[S1, S2] corresponds to bags of pairs of elements of sorts S1 and S2, respectively, each pair written s1 → s2, with additional operations _[_] : S × S1 → S2, _[_ ← _] : S × S1 × S2 → S and _\_ : S × S1 → S for lookup, update (adding a new pair if the map is undefined on that element) and deletion (i.e., removing an element binding), respectively, where S is the sort corresponding to the maps.

Abstract Syntax. Fig. 2 shows the complete K definition of KernelC, a C-like language with dynamic memory allocation and deallocation. K definitions typically use only one (abstract) syntactic category, K, serving as minimal syntactic infrastructure to define terms; it is not intended to be used for parsing or type-checking. We make no distinction between algebraic signatures and their context-free notation: syntactic categories correspond to sorts and productions
Nat: naturals, Int: integers; Id: identifiers, to be used as variable names

(abstract syntax)
  K ::= Int | Id | null | *K | !K | K1 op K2 | K1 && K2 | K1 || K2
      | K1=K2 | *K1=K2 | K; | K1 K2 | {K} | {} | malloc(K) | free(K)
      | if (K1) K2 | if (K1) K2 else K3 | while (K1) K2

(desugaring of non-core constructs)
  null = 0
  !K = if (K) 0 else 1
  K1 && K2 = if (K1) K2 else 0
  K1 || K2 = if (K1) 1 else K2
  {K} = K
  if (K1) K2 = if (K1) K2 else {}

(configuration)
  Cfg     ::= Bag·[CfgItem]
  CfgItem ::= ⟨K⟩k | ⟨Env⟩env | ⟨Mem⟩mem | ⟨Ptr⟩ptr
  K       ::= ... | Seq_·^↷[K]
  Mem ::= Map_·^,[Nat+, Int]    Env ::= Map_·^,[Id, Int]    Ptr ::= Map_·^,[Nat+, Nat]

(computation structural equations)
  *K = K ↷ (*□)
  K1 op K2 = K1 ↷ (□ op K2)
  I1 op K2 = K2 ↷ (I1 op □)
  if (K1) K2 else K3 = K1 ↷ (if (□) K2 else K3)
  K; = K ↷ (□;)
  K1=K2 = K2 ↷ (K1=□)
  *K1=K2 = K1 ↷ (*□=K2)
  malloc(K) = K ↷ malloc(□)
  free(K) = K ↷ free(□)
  {} = ·

(semantic equations and rules)
  K1 K2 = K1 ↷ K2
  I; → ·
  I1 op I2 → I1 opInt I2
  if (I) K2 else K3 → K3, where I = 0
  if (I) K2 else K3 → K2, where I ≠ 0
  ⟨X ↷ K⟩k ⟨X → I, ρ⟩env → ⟨I ↷ K⟩k ⟨X → I, ρ⟩env
  ⟨X=I ↷ K⟩k ⟨ρ⟩env → ⟨K⟩k ⟨ρ[X ← I]⟩env
  ⟨*P ↷ K⟩k ⟨P → I, σ⟩mem → ⟨I ↷ K⟩k ⟨P → I, σ⟩mem
  ⟨*P=I ↷ K⟩k ⟨P → I′, σ⟩mem → ⟨K⟩k ⟨P → I, σ⟩mem
  ⟨while(K1)K2 ↷ K⟩k = ⟨if(K1){K2 while(K1)K2} ↷ K⟩k
  ⟨malloc(N) ↷ K⟩k ⟨σ⟩mem ⟨π⟩ptr → ⟨P ↷ K⟩k ⟨σ σ′⟩mem ⟨π[P ← N]⟩ptr,
      where Dom(σ′) = P, ..., P+N−1
  ⟨free(P); ↷ K⟩k ⟨σ σ′⟩mem ⟨P → N, π⟩ptr → ⟨K⟩k ⟨σ⟩mem ⟨π⟩ptr,
      where Dom(σ′) = P, ..., P+N−1

(range of variables: X ∈ Id; K, K1, K2 ∈ K; I, I1, I2 ∈ Int; P ∈ Nat+; N ∈ Nat)
Fig. 2. KernelC in K: Complete Semantics
to operations in the signature; for example, the production "K ::= Id=K;" is equivalent to defining an operation "_=_; : Id × K → K". In Fig. 2, op stands for the various arithmetic and relational operations that one may want to include in one's language, and opInt stands for the mathematical counterpart (function or relation) of op, which operates on integers. For example, op can range over the standard arithmetic operator names +, -, *, /, etc., and over the standard relational operator names ==, !=, <=, >=, etc., in which case +Int is the addition operation on integers (e.g., 3 +Int 7 = 10), etc., and ==Int is the equality on integers (e.g., (3 ==Int 5) = 0 and (3 ==Int 3) = 1). Like in C, we assume that boolean values are special integer values, but, unlike C, we assume unbounded integers. We also assume the C meaning of the language constructs. In particular, malloc(N) allocates a block of N contiguous locations and returns a pointer to the first location, and free(P) assumes that a block of N locations has been previously allocated using a corresponding malloc and deallocates it.

Definition 1. A KernelC computation K is well-formed iff it is equal (using equational reasoning within KernelC's semantics) to a well-formed statement list or expression in C. Also, a computation is well-terminated iff it is equal to the unit computation "·" or to an integer value I ∈ Int.

Syntactic Sugar. The desugaring equations are self-describing; we prefer to desugar derived language constructs wherever possible. The "boolean" constructs && and || are short-circuit. Even though the conditional is a statement, once all syntactic categories are collapsed into one, K, it can be used to desugar expression constructs as well.

Configurations. We use sequences, bags, maps and abstract syntax as configuration constructors, henceforth just called cells. The configuration of KernelC is a top cell containing a "soup" of four sub-cells: a cell ⟨...⟩k wrapping the computation; a cell ⟨...⟩env holding the mapping for the stack variables; a cell ⟨...⟩mem holding the memory (or heap), which can be dynamically allocated/deallocated; and a cell ⟨...⟩ptr associating to pointers returned by malloc the number of locations that have been allocated (this information is necessary for the semantics of free). K definitions achieve context-sensitivity in two ways: (1) by adding algebraic structure to configurations and using it to control matching; and (2) by extending the original language syntax with a special task sequentialization construct, "↷", pronounced "then", as well as frozen variants of existing language constructs. Frozen operators have a "□" as part of their name and are used to "freeze" fragments of a program until their turn comes.

Definition 2. Let (Σ, E) be the algebraic specification of KernelC configurations: Σ contains all the configuration constructs (for bags, maps, etc.) and E contains all their defining equations (associativities, commutativities, etc.). Let T be the Σ-algebra of ground terms; the E-equational classes (i.e., classes of terms provably equal using equational reasoning with E) of (ground) terms in T of sort Cfg which
have the form ⟨K⟩k ⟨ρ⟩env ⟨σ⟩mem ⟨π⟩ptr are called (concrete) configurations. We distinguish several types of configurations:
– Configurations of the form ⟨K⟩k ⟨·⟩env ⟨·⟩mem ⟨·⟩ptr, where K is a well-formed computation, also written more compactly ⟨K⟩, are called initial configurations;
– Configurations ⟨K⟩k ⟨ρ⟩env ⟨σ⟩mem ⟨π⟩ptr whose embedded computation K is well-terminated (a "·" or an I ∈ Int) are called final configurations;
– Configurations γ ∈ T which cannot be rewritten anymore (i.e., there is no configuration γ′ ∈ T such that KernelC ⊢ γ → γ′) are called normal form configurations;
– Normal form configurations which are not final are called stuck (or junk, or core dump) configurations;
– Configurations γ which cannot be rewritten infinitely (i.e., there is no infinite sequence of configurations {γn}n∈Nat such that γ0 = γ and KernelC ⊢ γn → γn+1 for any n ∈ Nat) are called terminating configurations.

Computation structures. Sort K contains computation structures, or simply computations, obtained by adding to the original abstract syntax computation sequences (terms in Seq_·^↷[K]) and frozen computations (wrapped by operators containing a "□" in their name). Intuitively, K1 ↷ K2 means "first process K1, then process K2". Frozen computations are structurally inhibited from advancing until their turn comes. For example, "K1 op K2" first processes K1 and in the meanwhile keeps K2 frozen: "K1 op K2 = K1 ↷ (□ op K2)". After K1 is processed, its result is placed back in context and K2 is "scheduled": "I1 op K2 = K2 ↷ (I1 op □)". As equations, these can be applied forth (to "schedule" for processing) and back (to "plug" results back). We assume all freezing operators are automatically added to sort K (i.e., the "..." in "K ::= ..." in Fig. 2 include "Id=□;" and "□ op K | I op □" for all operations op that one specifies in the language syntax).
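A toy rendering of the computation cell as a task list may help fix intuitions (Python, illustrative only, not the K definition): tasks are consumed from the front of the list, and a while task re-queues itself only when it reaches the front, which corresponds to the guarded loop unrolling of the semantics.

```python
def run(stmts, env, fuel=10_000):
    """Processes a <k>-cell-like task list front to back. 'while' is
    unrolled into its body plus itself only when it is the current
    task, so the interpreter terminates whenever the program does."""
    k = list(stmts)                        # the computation cell
    while k and fuel > 0:
        fuel -= 1
        task = k.pop(0)
        if task[0] == "assign":            # ('assign', x, f): x = f(env)
            _, x, f = task
            env[x] = f(env)
        elif task[0] == "while":           # ('while', cond, body)
            _, cond, body = task
            if cond(env):                  # one unrolling, then re-queue
                k = list(body) + [task] + k
    return env

env = run([("assign", "n", lambda e: 0),
           ("while", lambda e: e["n"] != 5,
            [("assign", "n", lambda e: e["n"] + 1)])], {})
print(env["n"])  # 5
```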
Computation equations give the evaluation strategy of each language construct; note the one for the conditional, which schedules the condition for processing, keeping the two branches frozen. They accomplish the same role as the context productions of evaluation contexts [11], but logically rather than syntactically.

Semantic equations and rules. Empty blocks and sequential composition are dissolved into the unit and the sequentialization of K. The rules for +, == and if are clear. The rule for variable assignment updates the environment, at the same time dissolving the assignment statement. We chose to let lookup of uninitialized variables be undefined. Pointer lookup and update are similar, replacing the environment by the memory. The equation for while shows a use of the cell structure to achieve context sensitivity; if one replaced it with the simple-minded equation (or rule, in case one prefers to regard loop unrolling as a computational step) while(K1)K2 = if(K1){K2; while(K1)K2}, then there would be nothing to prevent the application of this equation again on the while term inside the conditional, and so on. While proof-theoretically one could argue that there is no problem with that, operationally it is problematic, as it leads to operational non-termination even though
the program may terminate. Therefore, we chose to restrict the unrolling of while to only the cases when while is the first computation task. The rules for free and malloc make subtle use of matching modulo the associativity and commutativity of the map construct. In the case of free(P), a σ′ is matched in the ⟨...⟩mem cell whose domain consists of the N contiguous locations P, ..., P+N−1, where N is the natural number associated to P in the ⟨...⟩ptr cell (i.e., the number of locations previously allocated at P using a malloc); then the free statement in cell ⟨...⟩k, the memory map σ′ in cell ⟨...⟩mem and the pointer mapping P → N in cell ⟨...⟩ptr are discarded; this way, the memory starting with location P can be reclaimed and reused in possible implementations of KernelC. Recall that we assume that all (partial) maps appearing in any context are well-formed; in particular, the map σ σ′ in the rule for free is well-formed, which means that there is only one such matching in the memory cell (P and N are given), which means that the rule for free is deterministic. Such a compact and elegant definition is possible only thanks to the strength of matching and rewriting modulo equations. Maude [12] provides efficient support for these operations, which is what makes it a very convenient execution vehicle for K. The well-formedness of maps can either be assumed (one can prove separately that each equation/rule preserves it) or checked as a condition attached to the rule. Fig. 3 shows a rewriting logic derivation using the K semantics in Fig. 2; →∗ stands for one or more rewrite steps, with arbitrarily many equational steps in between. The most intricate rule in Fig. 2 is that of malloc, which is an almost exact dual of the rule for free. Like in the free rule, σ′ is doubly constrained: its domain is disjoint from σ's (because σ σ′ is well-formed) and its domain is the set of contiguous locations P, ..., P+N−1, with P the returned pointer.
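An executable caricature of the two rules (Python, illustrative only; the rewriting-logic matching is replaced by an explicit search over candidate addresses):

```python
import random

def k_malloc(mem, ptr, n, rng=random):
    """malloc(N): pick any P with P..P+N-1 disjoint from Dom(mem); the
    free choice of P is the deliberate source of non-determinism."""
    while True:
        p = rng.randint(1, 100)
        block = range(p, p + n)
        if all(loc not in mem for loc in block):
            for loc in block:
                mem[loc] = rng.randint(-10, 10)  # arbitrary initial contents
            ptr[p] = n                           # record P -> N in <ptr>
            return p

def k_free(mem, ptr, p):
    """free(P): requires P -> N in the pointer cell; reclaims P..P+N-1.
    A missing entry corresponds to a stuck configuration (KeyError)."""
    n = ptr.pop(p)
    for loc in range(p, p + n):
        del mem[loc]

mem, ptr = {}, {}
p = k_malloc(mem, ptr, 2)
k_free(mem, ptr, p)
print(mem, ptr)  # {} {}
```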
However, the constraints on σ′ are loose enough to allow a high degree of semantic non-determinism. E.g., the program "BAD ≡ p=malloc(2); *2=7;" may exhibit three different types of behavior: two in which it terminates normally but in non-isomorphic configurations, and one in which it gets stuck attempting to access location 2, which is not allocated. E.g., ⟨BAD⟩k ⟨·⟩env ⟨·⟩mem ⟨·⟩ptr rewrites to any of the following (each being a normal form):

  ⟨·⟩k ⟨p → 1⟩env ⟨(1 → i) (2 → 7)⟩mem ⟨1 → 2⟩ptr, where i ∈ Int
  ⟨·⟩k ⟨p → 2⟩env ⟨(2 → 7) (3 → j)⟩mem ⟨2 → 2⟩ptr, where j ∈ Int
  ⟨*2=7;⟩k ⟨p → 5⟩env ⟨(5 → k) (6 → −3)⟩mem ⟨5 → 2⟩ptr, where k ∈ Int

In concrete implementations of KernelC, one may see the last type of behavior more frequently than the other two, as it is unlikely that malloc allocates at the "predicted" location, 2 in our case. We tried this code in gcc on a Linux machine (casting 2 to (T*)2) and it compiled (but it gave an expected segmentation fault when run). Thus, we can regard the third normal form term above as a "core dump". We claim that, in spite of this apparently undesired non-determinism, this is the most general semantics of malloc that a language designer may want to have. Any other additional constraints, such as "always allocate a fresh memory region", or "always reuse existing memory if possible", etc., may lead to a restrictive definition of KernelC, possibly undesired by some implementors.
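The three behaviors can be reproduced by fixing the allocator's choice (Python, illustrative only; `bad` is a hypothetical harness, not KernelC itself):

```python
def bad(alloc_at):
    """p = malloc(2); *2 = 7;  with malloc forced to return alloc_at."""
    mem = {alloc_at: 0, alloc_at + 1: 0}   # the two allocated locations
    if 2 not in mem:
        return "stuck at *2=7"             # the 'core dump' normal form
    mem[2] = 7
    return f"ok, mem = {mem}"

print(bad(1))   # ok, mem = {1: 0, 2: 7}
print(bad(2))   # ok, mem = {2: 7, 3: 0}
print(bad(5))   # stuck at *2=7
```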
Let REVERSE be the list reverse program in the Introduction, and let
  WHILE ≡ while(x!=null){y=*(x+1); *(x+1)=p; p=x; x=y;}
  IF ≡ if(□){x=*(p+1); *(p+1)=null; WHILE}
Also, let us assume the environment and memory maps:
  ρ1 ≡ p → 1, x → 0, y → 0        σ1 ≡ (1 → 7) (2 → 5) (5 → 9) (6 → 0)
  ρ2 ≡ p → 1, x → 5, y → 0        σ2 ≡ (1 → 7) (2 → 0) (5 → 9) (6 → 0)
  ρ3 ≡ p → 5, x → 0, y → 0        σ3 ≡ (1 → 7) (2 → 0) (5 → 9) (6 → 1)
The following derivation shows an execution reversing a list with the elements 7, 9:
  ⟨REVERSE⟩k ⟨ρ1⟩env ⟨σ1⟩mem
  = ⟨p!=null ↷ IF⟩k ⟨ρ1⟩env ⟨σ1⟩mem
  = ⟨!(p==null) ↷ IF⟩k ⟨ρ1⟩env ⟨σ1⟩mem
  =∗ ⟨if (p==0) 0 else 1 ↷ IF⟩k ⟨ρ1⟩env ⟨σ1⟩mem
  = ⟨p==0 ↷ if (□) 0 else 1 ↷ IF⟩k ⟨ρ1⟩env ⟨σ1⟩mem
  = ⟨p ↷ □==0 ↷ if (□) 0 else 1 ↷ IF⟩k ⟨ρ1⟩env ⟨σ1⟩mem
  → ⟨1 ↷ □==0 ↷ if (□) 0 else 1 ↷ IF⟩k ⟨ρ1⟩env ⟨σ1⟩mem
  = ⟨1==0 ↷ if (□) 0 else 1 ↷ IF⟩k ⟨ρ1⟩env ⟨σ1⟩mem
  → ⟨0 ↷ if (□) 0 else 1 ↷ IF⟩k ⟨ρ1⟩env ⟨σ1⟩mem
  = ⟨if (0) 0 else 1 ↷ IF⟩k ⟨ρ1⟩env ⟨σ1⟩mem
  → ⟨1 ↷ IF⟩k ⟨ρ1⟩env ⟨σ1⟩mem
  →∗ ⟨x=*(p+1); *(p+1)=0; WHILE⟩k ⟨ρ1⟩env ⟨σ1⟩mem
  →∗ ⟨x=*2; *(p+1)=0; WHILE⟩k ⟨ρ1⟩env ⟨σ1⟩mem
  →∗ ⟨x=5; *(p+1)=0; WHILE⟩k ⟨ρ1⟩env ⟨σ1⟩mem
  →∗ ⟨*(p+1)=0; WHILE⟩k ⟨ρ2⟩env ⟨σ1⟩mem
  →∗ ⟨if(x!=0){y=*(x+1); *(x+1)=p; p=x; x=y; WHILE}⟩k ⟨ρ2⟩env ⟨σ2⟩mem
  →∗ ⟨y=*(x+1); *(x+1)=p; p=x; x=y; WHILE⟩k ⟨ρ2⟩env ⟨σ2⟩mem
  →∗ ⟨*(x+1)=p; p=x; x=y; WHILE⟩k ⟨ρ2⟩env ⟨σ2⟩mem
  →∗ ⟨p=x; x=y; WHILE⟩k ⟨ρ2⟩env ⟨σ3⟩mem
  →∗ ⟨if(x!=0){y=*(x+1); *(x+1)=p; p=x; x=y; WHILE}⟩k ⟨ρ3⟩env ⟨σ3⟩mem
  →∗ ⟨·⟩k ⟨ρ3⟩env ⟨σ3⟩mem
Fig. 3. Rewriting logic derivation using the KernelC semantics in Fig. 2
The actual C language makes no specific requirements on memory allocation, allowing C interpreters or compilers freedom to choose among various memory allocation strategies; it is the programmer's responsibility to write programs that do not rely on a particular strategy. Note that, for simplicity, our semantics abstracts from the fact that malloc can fail, in which case a null pointer is returned; this amounts to assuming an unbounded memory.

The language. We can now formally state what KernelC is:

Definition 3. The language KernelC discussed here is the rewrite logic theory (ΣKernelC, EKernelC, RKernelC) depicted in Fig. 2. If KernelC ⊢ γ →∗ γ′ we say that, in KernelC, configuration γ rewrites to configuration γ′.

Both the abstract syntax of KernelC and Σ are included in ΣKernelC, and both the desugaring equations of derived KernelC constructs and E are included in EKernelC; recall from Definition 2 that (Σ, E) is the equational definition of KernelC configurations. Therefore, the rewrite logic semantics of KernelC, identified with KernelC from here on, can produce by means of rewriting all the possible complete or intermediate executions that the language can yield. In particular, if KernelC
Runtime Verification of C Memory Safety
141
⊢ K →∗ γ with K a well-formed computation and γ a well-terminated configuration, then γ contains the (possibly non-deterministic) "result" obtained after "evaluating" K. In addition to comprising all the good executions, the rewrite theory KernelC also comprises all the bad executions of KernelC programs, namely all those that can get stuck; as seen shortly, this is very important, as it will allow us to formally define memory safety of KernelC programs. Note that, like in any other formal operational semantics, our rewrite logic definition of KernelC has the property that informal execution steps and whole executions of programs become, respectively, formal proof steps and whole proofs in rewriting logic. Interestingly, unlike in other operational semantic frameworks, rewriting logic also provides models which are complete for its proof system, so the very same K definition of KernelC is also a loose "denotational" semantics in addition to being an "operational" one; moreover, since rewriting logic admits initial models, which are essentially built as a fixed point over the algebra of terms, there is a selected subset of models, the "reachable" ones, for which induction is valid. In other words, once one has a K definition of a language, one needs no other formal semantics of that language, because its K definition already provides everything one may need from a formal semantics. This is also one of the reasons for which we call K semantics executable rather than operational; the latter may give the wrong impression that the K semantics can only be used to yield an interpreter for the language. Even though K is executable by its very nature, here we actually defined, and did not implement, KernelC. We therefore wanted to keep our semantics as loose, or unconstrained, as possible. As usual, when implementing non-deterministic specifications one need not (and typically does not) provide all the non-deterministic behaviors in one's implementation.
In fact, each implementation of KernelC is expected to be deterministic. The non-determinism of malloc in our KernelC definition is a result of a deliberate language underspecification, not a desired non-deterministic feature of the language. General details on under-specification versus non-determinism are beyond our scope here, but the interested reader is referred to [13] for an in-depth discussion on these subjects. An additional advantage of the under-specified malloc in our definition of KernelC is that it allows us to elegantly yet rigorously define memory safety in the next section: a program is memory-safe iff it cannot get stuck, i.e., it cannot be rewritten to a normal form whose computation cell is not well-terminated.
3 Memory Safety
We here give a formal definition of memory safety in KernelC, capturing the intuition that a program is memory safe iff it is so under any possible implementation of KernelC, i.e., under any possible choice the rule for malloc may make. Due to the undecidability of termination in general, our notion of memory safety, like any other practical (i.e., not unreasonably restricted) notion of memory safety, is undecidable in general. In this section we show that memory safety is actually undecidable even for terminating KernelC programs. That means,
in particular, that the KernelC semantics, as well as any faithful implementation of it, cannot detect memory safety violations even on programs which always terminate, no matter whether that is attempted statically or at runtime. To check memory safety, one therefore needs either to rely on user help (e.g., annotations), as detailed in [14], or to restrict the class of memory safe programs, which is what we do in the next section.

Definition 4. Well-formed computation K is terminating iff K is a terminating configuration in KernelC, and is memory safe iff any normal form of K in KernelC is final.

Program "BAD ≡ p=malloc(2);*2=7;" is terminating but not memory safe: BAD rewrites, as seen, to the normal form ⟨*2=7;⟩k ⟨p → 5⟩env ⟨(5 → −1)(6 → −3)⟩mem ⟨5 → 2⟩ptr. Program "GOOD ≡ p=malloc(2);*(p+1)=7;", on the other hand, is both terminating and memory safe: GOOD rewrites only to normal form configurations of the form ⟨·⟩k ⟨p → i⟩env ⟨(i → j)(i+1 → 7)⟩mem ⟨i → 2⟩ptr, where i ∈ Nat+ and j ∈ Int. Program "p=malloc(1);while(*p){}" is memory safe but not terminating (when *p ≠ 0), and finally, program "p=malloc(1);while(*1){}" is neither memory safe (when p ≠ 1) nor terminating (when p = 1 and *1 ≠ 0). For our simple language, memory is the only source of unsafety; for more complex languages, one may have various types of safety, depending upon the language construct at the top of the computation in t when t is a normal form, which tells why the computation got stuck; e.g., if the language has division and 3/0 is at the top of the computation, then K got stuck because a division by zero was attempted. KernelC is Turing complete (we assumed both arbitrarily large integers and infinite memory), so termination of KernelC programs is undecidable. That immediately implies that memory safety is also undecidable in general: for any memory safe program PGM, the program "PGM;BAD" is memory safe iff PGM does not terminate.
What is not so obvious is the decidability or undecidability of memory safety of terminating programs. In the remainder of this section we show that this is actually an undecidable problem. A hasty reader may think that, since programs have no symbolic inputs or data, memory safety must be decidable on terminating programs: one can simply run the program and check each memory access. The difficulty of the problem comes from the non-determinism/under-specification of malloc, which makes any particular execution of the program mean close to nothing with respect to memory safety. Consider, for example, an execution of the program "x=malloc(1); free(x); y=malloc(1); *x=1;" in which the second malloc just happens to return the same pointer as the first malloc. Since this particular execution, taking place on a hypothetical particular implementation of KernelC, terminates normally, one may be wrongly tempted to say that it is memory safe; this program is clearly not memory safe (it gets stuck if the second malloc chooses a different location), and even the execution itself can be argued to be memory unsafe, because x is written through after being freed (a dangling pointer).
Proposition 1. Memory safety of terminating KernelC programs is an undecidable property.

Proof. Since KernelC is Turing complete, we can encode any decidable property ϕ(n) of input n ∈ Nat as a terminating and memory-safe KernelC program "x=n;PGMϕ" which writes some variable out, such that ϕ(n) holds iff KernelC ⊢ x=n;PGMϕ →∗ ⟨·⟩k ⟨out → 1, ...⟩env ⟨...⟩mem and ϕ(n) does not hold iff KernelC ⊢ x=n;PGMϕ →∗ ⟨·⟩k ⟨out → 0, ...⟩env ⟨...⟩mem. Since the pointer returned by malloc is non-deterministic, we can use it to "choose a random" n to assign to x: consider the program "PGM′ϕ ≡ x=malloc(1);PGMϕ;if(out)GOOD else BAD". PGM′ϕ terminates because "x=n;PGMϕ" terminates for any n ∈ Nat returned by malloc(1) and the conditional always terminates. On the other hand, PGM′ϕ is memory safe iff the variable out is 1 in the environment when PGMϕ terminates, which happens iff ϕ(n) holds for all n ∈ Nat. The undecidability of memory safety then follows from the fact that there are decidable properties ϕ for which (∀n)ϕ(n) is a proper co-recursively-enumerable property [15].

Since our notion of memory safety refers to a program rather than a path, the proposition above says that it is also impossible to devise any runtime checker for memory safety of general-purpose KernelC (and hence C) programs. One could admittedly argue that such anomalies occur as artifacts of poorly designed languages like C, which allow for (too) direct memory access and complete freedom in handling pointers as if they were natural numbers. However, it is precisely these capabilities that make C attractive when performance is a concern, and performance is indeed a concern in many applications. That memory unsafe programs may execute just fine is a must-have feature of any formal semantic definition of C that is worth its salt, because all C implementations deliberately "suffer" from this problem.
Since unrestricted use of pointers returned by malloc can lead to non-deterministic executions of programs, one could, in principle, introduce some notion of "path memory safety". For example, one could argue that an execution of the program "x=malloc(1); y=malloc(1); if (y==x+1) {} else BAD" in which y just happens to be x+1 is memory safe, or that an execution of the program "x=malloc(2); if (*x==*(x+1)) {} else BAD" in which *x just happens to equal *(x+1) is memory safe. Encouraged by the informal so-called "C rules for pointer operations" [6], we prefer not to introduce such a notion of "path memory safety" and, instead, to keep our notion of memory safety of programs from Definition 4; with it, these terminating programs are not memory safe. We will next introduce a stronger notion of memory safety, supported by an executable semantics that will always get stuck on these programs.
4 Strong Memory Safety
We propose the semantic notion of strong memory safety: a program is strongly memory safe iff it does not get stuck in the executable semantics SafeKernelC,
a variant of the KernelC semantics with symbolic pointers. Interestingly, our formal definition of strong memory safety includes the informal notion of memory safety implied by the "C rules for pointer operations" [6]. Strong memory safety is shown decidable for terminating programs, but, of course, it is undecidable in general. Note that we are not attempting to fix C's problems here, nor to propose a better language design. However, the high degree of non-determinism in the semantics of malloc may be problematic in formal verification. We prefer to give a slightly different semantics to our language, one which captures the non-determinism of malloc symbolically. Fig. 4 shows the formal K semantic definition of SafeKernelC, which essentially adds symbolic numbers and gives malloc a symbolic semantics. Everything else stays unchanged, as in the definition of KernelC in Fig. 2. The main distinction between KernelC and SafeKernelC is that, although both of them are deterministic for all rules except the malloc rule, KernelC can introduce non-determinism through the values returned by the malloc function, while SafeKernelC, using symbolic values, is deterministic up to symbolic variable renaming.

Proposition 2. For any L ∈ {KernelC, SafeKernelC}, if L ⊢ γ0 →∗ γ such that γ = ⟨K⟩k ⟨ρ⟩env ⟨σ⟩mem ⟨π⟩ptr and K ≠ malloc(N) ↷ K′, then there exists at most one configuration γ′ such that L ⊢ γ → γ′. Moreover, SafeKernelC is deterministic modulo renamings of the symbols from NatVar.

Proof. The first part can be formally proved by induction on the length of the derivation. Intuitively, the property holds because, at any moment, there exists a unique way to match a configuration which is reachable from an initial state, and the rules preserve this invariant. Moreover, except for the malloc rule, which allows for a choice of the value introduced (but does not violate the invariant), all other rules have the variables in the right-hand side completely determined by those in the left-hand side.
For the second part, since the value introduced by the malloc rule is only operated with symbolically, which corresponds to the fact that future side conditions must hold for all possible valuations of the variable, it follows that we can choose a canonical way to generate fresh symbols and thus completely eliminate the non-determinism. Therefore, we can choose a representative derivation for any SafeKernelC derivation of an initial state, say one in which fresh variables are chosen in order from the countably infinite sequence nv1, nv2, ..., nvN, ....

Definition 5. Well-formed computation K is strongly terminating iff K is terminating in SafeKernelC, and is strongly memory safe iff any normal form of K in SafeKernelC is final.

Since SafeKernelC adds symbolic values (for pointers and initial values in allocated memory locations), the assumed machinery for naturals and integers is now expected to work with these symbolic values as well. In particular, the rule (side) conditions may be harder to check. For example, the rule "I1==I2 → N"
applies only when one proves that I1 = I2, and in that case N is 1, or when one proves that I1 ≠ I2, and in that case N is 0; if one can prove neither of the two, then the term "I1==I2" remains unreduced and the execution of the program may get stuck because of that. For example, both "p=malloc(1);while(*p){}" and "p=malloc(1);while(*1){}" are now strongly terminating (but remain memory unsafe, also in the strong sense). Also, both programs discussed in front of Proposition 1 get stuck when processing the conditions of their if statements. On the positive side, programs obeying the recommended safety rules for pointer operations in C [6], e.g., reading only initialized locations and comparing pointers only if they are within the same data structure contiguously allocated in memory, are strongly memory safe. For example, "n=100;a=malloc(n);x=a;while(x!=a+n){*x=0;x=x+1;}" is both strongly memory safe and strongly terminating. Since the side conditions can get arbitrarily complicated (they depend on the program), the problem of deciding their validity can itself become undecidable. Therefore, we will assume an oracle for the logic involving the side conditions, which, given a formula, can give one of the following answers: YES, if the formula holds; NO, if it does not hold; or MAYBE, if the oracle cannot decide the problem. For example, this oracle could be a sound automatic theorem prover, which will attempt to prove/disprove the formula but, being incomplete, might also fail on complex formulas. The oracle we used together with our definition of SafeKernelC is very simple and based only on simplification rules (see Appendix C). However, since the logic (regarding pointer comparison) involved in programs following "good coding standards" should be relatively simple, we believe this oracle will probably act as a decision procedure for the majority of code.
Matching logic [14] shows how one could use the same framework (together with code annotations) to prove general-purpose properties about programs, which would allow checking memory safety for badly coded programs as well. The advantage of the technique presented here is that it is fully automatic and requires no additional user input.

  NatVar — infinite set of symbolic natural numbers
  (abstract syntax)   Nat ::= ... | NatVar
  (semantic equations and rules)
  ⟨malloc(N) ↷ K⟩k ⟨σ⟩mem ⟨π⟩ptr → ⟨P ↷ K⟩k ⟨σ σ′⟩mem ⟨π[P ← N]⟩ptr
    where P is a fresh symbol in NatVar and Dom(σ′) = {P, ..., P + N − 1}

Fig. 4. Formal semantics of SafeKernelC (figure only shows how it differs from the semantics of KernelC in Fig. 2)

Proposition 3. Any execution of a program p in SafeKernelC can be step-by-step simulated in KernelC.

Proof. Suppose (γi)i≥0 is a sequence of configurations such that γ0 is an initial configuration corresponding to p and, for any i, SafeKernelC ⊢ γi →^{θi+1(ρi+1)}
γi+1 . Also suppose this derivation sequence is representative, in the sense that fresh NatVar symbols are generated (in order) from the sequence (nv j ) j≥1 . Let (i j ) j≥1 be a sequence of positive numbers such that ρi j is the instance of the malloc rule introducing nv j . If follows that, for any i ≤ i j0 , γi is completely determined by (nv j )1≤ j≤ j0 ; that is, all values in the environment and store are algebraic expressions with variables from (nv j )1≤ j≤ j0 . We define the following sequence of functions V j : {nv1 , . . . , nv j } → Nat, by: V1 (nv1 ) = 1, and Vn+1 (nv j ) = Vn (nv j ), j ≤ n , where N is the number associated to nvn in γin , 1 + Vn (nvn ) + Vn (N), j = n + 1 and Vn is the canonical extension of Vn to expressions and configurations (mapping σ(nv j ) to 0). Let V be the limit of (V j ) j≥1 , i.e., V(nv j ) = V j (nv j ). It remains to show that, for any i ≥ 0, V(γi ) is a configuration for KernelC, and V(θi+1 (ρi+1 ))
KernelC V(γi ) −−−−−−−−− → V(γi+1 ) (whence V(θi+1 (ρi+1 )) is an instance of a KernelC rule). For i = 0, V(γ0 ) = γ0 , since γ0 does not contain any NatVar symbols, whence it also is a configuration for KernelC. Now, suppose V(γi) is a θi+1 (ρi+1 )
KernelC configuration, and that SafeKernelC γi −−−−−−→ γi+1 . If ρi+1 is not the malloc rule, then it basically is the same rule schema as in KernelC, with the difference that it can be instantiated for terms with symbols from NatVar. Since the NatVar symbols are free variables in the instance θi+1 (ρi+1 ), it follows that V(θi+1 (ρi+1 )) is an instance for both KernelC and SafeKernelC; thereV(θi+1 (ρi+1 ))
fore it must be that KernelC V(γi ) −−−−−−−−− → V(γi+1 ). Similarly, if θi+1 (ρi+1 ) is an instance of the malloc rule, then V(θi+1 (ρn )) is an instance of the malloc rule in KernelC (by the construction of V). Theorem 1. Let = SafeKernelC γ0 →n γn be a prefix of the execution of a program p using SafeKernelC. Then any execution of p using KernelC has a prefix of length n which is a valuation of . θi+1 (ρi+1 )
Proof. Let SafeKernelC γi −−−−−−→ γi+1 , 0 ≤ i < n, where n ∈ Nat∪{∞}, be the (possibly infinite) canonical derivation of γ0 in SafeKernelC. Let KernelC θi+1 (ρi+1 )
γi −−−−−−→ γi+1 , 0 ≤ i < n be a derivation such that γ0 = γ0 . Let (i j ) j≥1 be the increasing sequence of positive numbers such that ρi j is an instance of the malloc rule. We then let V(nv j ) = θij (P). It can be proved by induction on i
that V(θi (ρi )) = θi (ρi ), and therefore V(γi ) = γi . Moreover, if n < n, then, using a construction similar to the one used in proving Proposition 3, we can expand the existing valuation V to one covering all symbols in the SafeKernelC derivation, say V , and use that to expand the existing KernelC derivation to KernelC V (θi1 (ρi+1 ))
V (γi ) −−−−−−−−→ V (γi+1 ), n ≤ i < n, such that V (γn ) = γn . By showing that all KernelC executions are abstracted by the SafeKernelC deterministic execution of a program (as long as that can proceed),Theorem 1 gives us an effective procedure for runtime verification of memory safety.
Theorem 2. 1. Strong memory safety guarantees memory safety. 2. Monitoring SafeKernelC executions yields a sound procedure for runtime verification of KernelC memory safety.

Proof. 1. Assume p is a strongly safe program and let γ0 be an initial configuration for p. This means that either (1) the execution of p using SafeKernelC is non-terminating, or (2) there exists n such that SafeKernelC ⊢ γ0 →^n γn with γn final. If (1) is true, then for any prefix SafeKernelC ⊢ γ0 →^n γn of the execution of p, all KernelC executions of p must be of length at least n. Since n is arbitrarily large, this implies that all KernelC executions of p are non-terminating, and thus memory safe. If (2) is true, then since all KernelC executions of p must not only have prefixes of length n, but those prefixes must also be valuations of the SafeKernelC execution, and since the valuation of a final SafeKernelC configuration yields a final KernelC configuration, it follows that all KernelC executions are precisely of length n and end up in final configurations; thus p is memory safe.

2. Given program p, one can verify its memory safety by simply "executing" it using the SafeKernelC definition. As long as the symbolic execution in SafeKernelC can proceed, strong memory safety is guaranteed to hold up to that point. That means that the execution of the program on any real machine using KernelC is guaranteed to be memory safe up to the same point. Ultimately, for terminating programs, monitoring strong memory safety becomes a sound decision procedure for verifying memory safety.

Corollary 1. Strong termination and strong memory safety remain undecidable in general, but strong memory safety of terminating programs is decidable, assuming the oracle used for side conditions is a decision procedure.

Proof. The first two claims follow from the fact that SafeKernelC is Turing complete.
If a program is known to terminate, then it also strongly terminates (as a corollary of Proposition 3); thus its canonical derivation is finite, and by constructing it (which we can, given that our oracle is a decision procedure), we can effectively check whether the last configuration obtained in this derivation is indeed final.
5 Conclusions
We have presented the first (to our knowledge) formal definition of memory safety for a language allowing direct allocation and addressing of memory. After showing, through a suite of undecidability results, that verification of memory safety is not amenable to automation in general, we proposed strong memory safety, a meaningful restriction of memory safety, and proved that it is runtime verifiable. Our main result is that runtime verification of strong memory safety is a sound decision procedure for memory safety. The preliminary executable definition is available for download (and experimentation) as part of the K-Maude distribution [16].
References

1. Necula, G.C., McPeak, S., Weimer, W.: CCured: Type-safe retrofitting of legacy code. In: POPL 2002: Proceedings of the 29th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 128–139. ACM, New York (2002)
2. Hastings, R., Joyce, B.: Purify: Fast detection of memory leaks and access errors. In: Proceedings of the Winter USENIX Conference, January 1992, pp. 125–136 (1992)
3. Nethercote, N., Seward, J.: Valgrind: A framework for heavyweight dynamic binary instrumentation. In: Ferrante, J., McKinley, K.S. (eds.) PLDI, pp. 89–100. ACM, New York (2007)
4. Berger, E.D., Zorn, B.G.: DieHard: Probabilistic memory safety for unsafe languages. In: PLDI 2006: Proceedings of the 2006 ACM SIGPLAN Conference on Programming Language Design and Implementation, pp. 158–168. ACM, New York (2006)
5. Novark, G., Berger, E.D., Zorn, B.G.: Exterminator: Automatically correcting memory errors with high probability. Commun. ACM 51(12), 87–95 (2008)
6. Harbison, S.P., Steele, G.L.: C: A Reference Manual, 5th edn. Prentice Hall, Englewood Cliffs (2002)
7. Roșu, G.: K: A Rewriting-Based Framework for Computations – Preliminary Version. Technical Report UIUCDCS-R-2007-2926, University of Illinois (2007)
8. Meseguer, J., Roșu, G.: The rewriting logic semantics project. Theor. Comput. Sci. 373(3), 213–237 (2007)
9. Șerbănuță, T.F., Roșu, G., Meseguer, J.: A rewriting logic approach to operational semantics. Inf. and Comp. (to appear, 2009), http://dx.doi.org/10.1016/j.ic.2008.03.026
10. Meseguer, J.: Conditional rewriting logic as a unified model of concurrency. Theor. Comput. Sci. 96(1), 73–155 (1992)
11. Wright, A.K., Felleisen, M.: A syntactic approach to type soundness. Inf. Comput. 115(1), 38–94 (1994)
12. Clavel, M., Durán, F., Eker, S., Lincoln, P., Martí-Oliet, N., Meseguer, J., Talcott, C.L. (eds.): All About Maude – A High-Performance Logical Framework: How to Specify, Program and Verify Systems in Rewriting Logic. LNCS, vol. 4350. Springer, Heidelberg (2007)
13. Walicki, M., Meldal, S.: Algebraic approaches to nondeterminism: An overview. ACM Comput. Surv. 29(1), 30–81 (1996)
14. Roșu, G., Schulte, W.: Matching logic. Technical Report UIUCDCS-R-2009-3026, University of Illinois at Urbana-Champaign (2009)
15. Rogers Jr., H.: Theory of Recursive Functions and Effective Computability. MIT Press, Cambridge (1987)
16. K-Maude web page, http://fsl.cs.uiuc.edu/index.php/K-Maude
A KernelC Syntax and Configuration in K-Maude
(k syntax for KERNELC is
  including GENERIC-EXP-SYNTAX + STRING-SYNTAX .
  sorts Stmt StmtList Pgm .
  subsort Stmt < StmtList .
  op #include<stdio.h>`#include<stdlib.h>`void`main`(void`)`{_`} : StmtList -> Pgm [renameTo _] .
  op *_ : Exp -> Exp [strict prec 25] .
  op !_ : Exp -> Exp [aux] .
  vars E E' : Exp .
  eq ! E = E ? 0 : 1 .
  ops _&&_ _||_ : Exp Exp -> Exp [aux] .
  eq E && E' = E ? E' : 0 .
  eq E || E' = E ? 1 : E' .
  op _?_:_ : Exp Exp Exp -> Exp [renameTo if`(_`)_else_ prec 39] .
  op _:=_ : Exp Exp -> Exp [strict(2) prec 40 gather (e E)] .
  op _; : Exp -> Stmt [prec 45 strict] .
  op ; : -> Stmt [renameTo .K] .
  op __ : StmtList StmtList -> StmtList [prec 100 gather(e E) renameTo _->_] .
  op `{_`} : StmtList -> Stmt [renameTo _] .
  op `{`} : -> Stmt [renameTo .K] .
  op malloc`(_`) : Exp -> Exp [strict] .
  op free`(_`) : Exp -> Exp [strict] .
  op if`(_`)_ : Exp Stmt -> Stmt [aux prec 47] .
  var St St' : Stmt .
  eq if(E) St = if (E) St else {} .
  op if`(_`)_else_ : Exp Stmt Stmt -> Stmt [strict (1) prec 46] .
  op while`(_`)_ : Exp Stmt -> Stmt .
  op printf`("%d "`,_`) : Exp -> Exp [strict] .
  op null : -> Exp [aux] .
  eq null = 0 .
k)

(k configuration for KERNELC is
  including KMAP{K, K} + FRESH-ITEM{K} .
  ops env mem ptr : -> CellLabel [wrapping Map`{K`,K`}] .
  op out : -> CellLabel [wrapping K] .
  op stream : String -> K .
  op void : -> KResult .
k)
B KernelC Semantics in K-Maude
(k semantics for KERNELC is
  including GENERIC-EXP-SEMANTICS .
  var P : Pgm . var N N' : Nat . var X : Name .
  var Env : Map{K,K} . var V V' : KResult . var I : Int .
  var Ptr Mem : Map{K,K} . var K K1 K2 : K . var S : String .
  kcxt * K1 := K2 [strict(K1)] . --- evaluating lhs to an lVal
  eq P = mkK(P) <env> .empty <mem> .empty .empty item(1) stream("") .
  eq #(true) = #(1) . eq #(false) = #(0) .
  ceq if (#(I)) K1 else K2 = K2 if I eq 0 .
  ceq if (#(I)) K1 else K2 = K1 if I neq 0 .
  eq V ; = .K . --- discarding value of an expression statement
  keq [[X ==> V]] ... <env> ... X |-> V ... .
  keq [[X := V ==> V]] ... <env> [[Env ==> Env[X <- V]]] .
  keq [[* #(N) ==> V]] ... <mem> ... #(N) |-> V ... .
  keq [[* #(N) := V ==> V]] ... <mem> ... #(N) |-> [[V' ==> V]] ... .
  keq [[while(K1) K2 ==> if(K1) (K2 -> while(K1) K2) else .K]] ... .
  op alloc : Nat Nat -> Map{K,K} .
  eq alloc(N, 0) = .empty .
  eq alloc(N, s(N')) = (#(N) |-> #(N + 1)) &' alloc(N + 1, N') .
  keq [[ malloc(#(N)) ==> #(var(N'))]] ...
      ... [[.empty ==> (#(var(N')) |-> #(N))]] ...
      [[item(N') ==> item(N') + s(N)]]
      <mem> ... [[.empty ==> alloc(var(N'), N)]] ... .
  op freeMem : Map{K,K} Nat Nat -> Map{K,K} .
  eq freeMem(Mem, N, 0) = Mem .
  eq freeMem((Mem &' (#(N) |-> V)), N, s(N')) = freeMem(Mem, N + 1, N') .
  keq [[free(#(N)) ==> void]] ... ... [[#(N) |-> #(N') ==> .empty]] ...
      <mem> [[Mem ==> freeMem(Mem, N, N')]] .
  keq [[printf("%d ",#(I)) ==> void]] ...
      [[stream(S) ==> stream(S + string(I,10) + " ")]] .
k)
Where the rules for _eq_/_neq_ are defined as follows:
  ops _eq_ _neq_ : Int Int -> Bool .
  eq X neq X = false .
  ceq X neq Y = true if X > Y .
  ceq X neq Y = true if X < Y .
  eq X eq Y = not(X neq Y) .
C SafeKernelC Semantics in K-Maude
Since it relies on the same syntax and configuration, it is called a semantics for KernelC, like the previous definition. All rules stay unchanged, with the exception of the semantic rule for malloc (and its helper function alloc), which are modified to generate symbolic naturals both for pointers and for the uninitialized memory locations:
  eq alloc(var(NV), s(N')) = (#(var(NV)) |-> #(var(NV + 1)))
     &' alloc(var(NV) + 1, N') .
  eq alloc(var(NV) + N, s(N')) = (#(var(NV) + N) |-> #(var(NV + N + 1)))
     &' alloc(var(NV) + N + 1, N') .
  keq [[ malloc(#(N)) ==> #(var(N'))]] ...
      ... [[.empty ==> (#(var(N')) |-> #(N))]] ...
      [[item(N') ==> item(N') + s(N)]]
      <mem> ... [[.empty ==> alloc(var(N'), N)]] ... .
To support symbolic naturals and to allow the execution to advance, an additional "oracle" for symbolic naturals must now be provided:
  sort NatVar .              --- the type for symbolic naturals
  subsort NatVar < NzNat .   --- assume all symbolic naturals are non-zero
  op var : Nat -> NatVar .   --- symbolic naturals constructor
  vars X Y Z T : Int . var Nx : NzInt . var Nn : NzNat .
  eq 1 * X = X . eq 0 * X = 0 . eq 0 + X = X .
  eq X - Y = X + (- 1) * Y .
  eq X + (-1) * X = 0 .
  eq X * (Y + Z) = X * Y + X * Z .
  eq (X + Y) * Z = X * Z + Y * Z .
  eq Nn > 0 = true . eq Nn >= 0 = true .
  eq (-1) * Nn > 0 = false . eq (-1) * Nn >= 0 = false .
  ceq X + Y <= 0 = true if X > 0 = false /\ Y > 0 = false .
  eq X >= Nx = X - Nx >= 0 . eq Nx <= Y = Y - Nx >= 0 .
  eq X > Nx = X - Nx > 0 . eq Nx < Y = Y - Nx > 0 .
  eq X >= (-1) * Y = X + Y >= 0 . eq X > (-1) * Y = X + Y > 0 .
  eq (-1) * Y <= X = X + Y >= 0 . eq (-1) * Y < X = X + Y > 0 .
A Combined On-Line/Off-Line Framework for Black-Box Fault Diagnosis

Stavros Tripakis

University of California, Berkeley, and Verimag Laboratory
545Q, DOP Center, Cory Hall, EECS Department, University of California, Berkeley, CA 94720-1772, USA
[email protected]
Abstract. We propose a framework for fault diagnosis that relies on a formal specification linking system behavior and faults. This specification is not intended to model system behavior, but only to capture relationships between properties of system behavior (defined separately) and the faults. In this paper we use a simple specification language: assertions written in propositional logic (possible extensions are also discussed). These assertions can be used together with a combined on-line/off-line diagnostic system to provide a symbolic diagnosis, as a propositional formula that represents which faults are known to be present or absent. Our framework guarantees monotonicity (more knowledge about properties implies more knowledge about faults) and allows one to explicitly talk about diagnosability, implicit assumptions on behaviors or faults, and consistency of specifications. State-of-the-art diagnosis frameworks, in particular from the automotive domain, can be cast and generalized in our framework.
1 Introduction
Fault diagnosis: The goal of fault diagnosis is to identify faults in a system. Faults are usually viewed as causes that result in system behavior deviating from nominal behavior. Such causes can be of different kinds, including manufacturing errors, system wear-out or “logical” bugs. A distinction is often made between fault detection, the “task of determining when a system is experiencing problems” and fault diagnosis, the task of “explaining”, or “locating the source of a system fault once it is detected” [5]. We will not be concerned with such distinctions in this paper. Instead, we will intentionally view a fault as any abstract (and unknown) system property, the presence of which is worth identifying. Such
This work is supported in part by the Center for Hybrid and Embedded Software Systems (CHESS) at UC Berkeley, which receives support from the National Science Foundation (NSF awards #0720882 (CSR-EHS: PRET) and #0720841 (CSR-CPS)), the U.S. Army Research Office (ARO #W911NF-07-2-0019), the U.S. Air Force Office of Scientific Research (MURI #FA9550-06-0312), the Air Force Research Lab (AFRL), the State of California Micro Program, and the following companies: Agilent, Bosch, Lockheed-Martin, National Instruments, Thales, and Toyota.
S. Bensalem and D. Peled (Eds.): RV 2009, LNCS 5779, pp. 152–167, 2009. c Springer-Verlag Berlin Heidelberg 2009
identification can then be used for recovery (e.g., shutdown and reboot) or repair (e.g., replacement of the faulty parts). What exactly a fault is often depends on the application at hand.

One approach to fault diagnosis, which can be termed “white-box”, consists of the following steps. First, build a global model of the behavior of the system that includes the nominal, non-faulty behavior, as well as the faulty behavior for all types of considered faults. Interactions between non-faulty and faulty behavior must also be modeled. Then, design an observer that monitors a set of observable variables of the system and tries to deduce, from the observations, how the system behaved. In turn, knowledge about system behavior can be used to identify whether faults may have occurred and, if so, which ones. Examples of this approach to diagnosis are the discrete-event system diagnosis framework of [15] or the timed-automata diagnosis framework of [18]. This approach is difficult to apply in practice, the main reason being that building such a global model is a daunting task, due to the complexity and size of the model. Moreover, as claimed in [1], “because defect behavior is so variable, a fault model always leaves some faults unmodeled”.

In this paper we follow an alternative approach, where only a partial view of the relation between faults and behavior is assumed. This view attempts to capture the knowledge that engineers gain from experience in observing system behavior and identifying faults (using different means). For instance, one could have reached the conclusion that when a particular fault F is present, the system manifests a particular property P in its behavior. This knowledge can be used for fault diagnosis: F will be excluded as a possible fault when the observed behavior does not satisfy P. We term this approach black-box diagnosis, because it uses no explicit model of the system: neither its non-faulty nor its faulty behavior.
We only use assertions that attempt to capture relationships between behaviors and faults. For example, the assertion F ⇒ P expresses formally the relationship described informally in the example above, namely, that if fault F is present then the system exhibits behavior that satisfies property P . Fault diagnosis in the automotive domain: This work has been motivated by fault diagnosis problems coming mainly from the automotive domain (e.g., see [16,7]). There, a lot of effort is put in carefully designing monitors and testers, for on-line and off-line diagnosis, respectively (also called on-board and workshop diagnosis, respectively [16]). These components check whether a given property holds and report whenever this fails. For example, a monitor may check that a specific value stays within some lower and upper bounds. What appears to be missing, however, is a methodology and overall framework that coordinates these monitors and testers, and correlates their results, in order to achieve diagnosis. The main goal of this paper is to contribute some elements that we believe are useful toward such a methodology, framework, and associated techniques and tools. We should note that some representations that relate monitor/test results and faults do exist in the automotive domain. These include the so-called D-matrix,
as well as diagnostic manuals, typically used by workshop mechanics [11,4]. D-matrices are a somewhat formal representation that, although useful in some cases, is much less flexible than the representation we propose in this paper, as discussed in Section 4. Diagnostic manuals, on the other hand, contain information such as “if error code C is on, then check components X, Y and Z [for possible faults]”. “Error code C on” means a specific monitor (or test) produced a result indicating a fault may be present. It is, however, difficult, if not impossible, to interpret diagnostic manuals as providing any type of deterministic relation between faults and monitor/test results. One cannot guarantee that when an error code is on, one of the components to be checked will indeed be faulty (it may be the case that none of them is faulty). One cannot guarantee either that the components that are not to be checked are non-faulty. For these reasons, we do not discuss diagnostic manuals in this paper.

Our framework: In a nutshell, our framework contains the following elements:

– First, the user designs a set of on-line monitors and off-line testers. Each monitor, as well as each tester, is responsible for checking a certain property Pi on the observable behavior of the system. The difference between a monitor and a tester is the following. A monitor is passive, in the sense that it does not drive the inputs of the system under observation, but only observes a set of observable outputs. A tester, on the other hand, is active: it drives the inputs and at the same time observes the outputs. One of the benefits of our framework is that these monitors/testers can be designed using different languages, methods and tools. They could, for instance, be “manually” written in a general-purpose programming language, or in a language specifically designed for testing, such as TTCN [8] or e [6]. Alternatively, they could be automatically generated from a formal specification, for instance, a Mealy machine [10], an input-output labeled transition system [17], a temporal logic formula [3], a timed automaton [9], and so on. Our framework is independent of how the monitors/testers are built.

– Second, the user provides a formal description of the relationships between the properties Pi and possible faults in the system. This description is given using the behavior-fault assertion (BFA) language that we describe in Section 2.

– Third, our framework provides a diagnoser and a fault knowledge base. The BFA specification is stored in the fault knowledge base. The diagnoser uses the BFA specification and the outputs coming from the monitors or testers to produce a diagnosis. The latter is a formula characterizing the current knowledge about faults in the system, i.e., which faults are present, absent, or unknown.

Our framework combines on-line and off-line diagnosis. On-line, the diagnoser produces a result which is updated dynamically as the outputs of the monitors are updated, as the behavior of the system evolves. Off-line, tests can be performed on the system to get additional information, especially about properties that could not be checked on-line.
[Figure 1: two block diagrams. On-line: monitors M1, ..., Mm observe the system and feed the diagnoser, which maintains the fault knowledge base and provides user feedback. Off-line: testers T1, ..., Tk drive and observe the system and feed the diagnoser, which maintains the fault knowledge base and provides user feedback.]
Fig. 1. On-line and off-line diagnosis
The approach is illustrated in Figure 1. Mi denote the on-line monitors and Ti denote the off-line testers. Our framework can seamlessly handle different types of faults, including permanent faults (which, once they occur, stay present) and transient or intermittent faults (which may change their presence/absence status as system behavior evolves). This is particularly true in the on-line diagnosis phase, where the output of the diagnoser continuously evolves as the monitor outputs evolve.
2 The Framework

2.1 Behavior-Fault Assertions
We propose a language that makes it possible to capture, explicitly and formally, relations between observable behavior and faults. This language is essentially propositional logic, where atoms are either propositions Fi, denoting presence of fault i, or propositions Pi, corresponding to properties expressed on the observable behavior. There can be as many Fi and Pi atomic propositions as necessary, but a finite number of each. Notice that each fault proposition Fi or property proposition Pi is simply a symbol (i.e., a propositional variable) at this point. Semantics will be assigned below. A boolean expression on Fi and Pi is called a behavior-fault assertion (BFA).

Examples of BFAs: We will use the symbols ∧, ∨, ¬, ⇒ and ⇔ to denote logical conjunction, disjunction, negation, implication and bi-implication, respectively. A ⇔ B is shorthand for (A ⇒ B) ∧ (B ⇒ A). Equivalence of logical formulae is denoted ≡. We will use true and false for the boolean truth values. Finally, we will use := to define notation. Here are some examples of BFAs, with their intuitive meaning:

– F1 ⇒ P1 : if fault F1 is present, then the observable behavior must satisfy property P1.
– ¬P1 ⇒ ¬F1 : if the observable behavior falsifies P1 then fault F1 is absent (note that this is equivalent to the above).
– P1 ∨ F1 : if the observable behavior falsifies P1 then fault F1 is present, i.e., equivalent to ¬P1 ⇒ F1.
– (P1 ∧ P2) ⇒ (F1 ∨ F2) : if the observable behavior satisfies both P1 and P2, then at least one of the faults F1 or F2 must be present.
– P1 ∧ P2 : this is an assumption on observable behaviors: all observable behaviors must satisfy P1 and P2.
– ¬(F1 ∧ F2) : this is an assumption on faults: if fault F1 is present then F2 cannot be present and vice versa.
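The equivalences claimed in the examples above (F1 ⇒ P1 versus ¬P1 ⇒ ¬F1, and P1 ∨ F1 versus ¬P1 ⇒ F1) can be checked mechanically. The following Python sketch is our own illustration, not part of the framework; it encodes each BFA as a predicate and compares truth tables by enumeration:

```python
from itertools import product

# Each BFA is a Python predicate over truth values; the implication a => b
# is encoded as (not a) or b.  The names F1, P1 mirror the text.
def implies(a, b):
    return (not a) or b

bfa_a = lambda F1, P1: implies(F1, P1)          # F1 => P1
bfa_b = lambda F1, P1: implies(not P1, not F1)  # ~P1 => ~F1
bfa_c = lambda F1, P1: P1 or F1                 # P1 v F1
bfa_d = lambda F1, P1: implies(not P1, F1)      # ~P1 => F1

def equivalent(f, g):
    # Brute-force truth-table comparison over the two variables.
    return all(f(F1, P1) == g(F1, P1)
               for F1, P1 in product([False, True], repeat=2))

print(equivalent(bfa_a, bfa_b))  # True: ~P1 => ~F1 is the contrapositive of F1 => P1
print(equivalent(bfa_c, bfa_d))  # True: P1 v F1 is ~P1 => F1
```

Enumeration is exponential in the number of variables, but for the small assertions typical of BFAs it is a perfectly adequate sanity check.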
2.2 Semantics
The system to be diagnosed is typically a dynamical system, that is, it has some (global) state s that evolves with time. Let S be the set of all possible system states. Let s(t) denote the state of the system at time t. We will not be concerned with what the nature of time is or what the dynamics of the system are (e.g., discrete, continuous, etc.). Suppose there are n faults that we are interested in. This is reflected by having n propositional symbols corresponding to faults, F1 , ..., Fn . At a given system state s, every fault is either present or absent. We write s(Fi ) = true to denote that Fi is present at state s, and s(Fi ) = false to denote that Fi is absent at state s. The boolean vector f = (s(F1 ), s(F2 ), ..., s(Fn )) which gives the status of each fault at a given state is called the fault configuration. We denote by fi the i-th element of vector f . As the state evolves with time, so does the fault configuration. We write f (t) to denote the fault configuration at time t. Ideally, we would like to know what f (t) is at any given time t. This is the ultimate goal of diagnosis, albeit not always achievable. A behavior of the system is a history of its state evolution over a period of time. Let X be the set of all possible system behaviors. We assume that for each property symbol Pi there exists a function Pi : X → {true, false, ?}. Function Pi models the monitor/tester for property Pi . Given a behavior x ∈ X , Pi (x) = true means “x satisfies Pi ”, Pi (x) = false means “x does not satisfy property Pi ”, and Pi (x) = ? means “don’t know”, that is, x may or may not satisfy Pi . The ? option is useful to capture the fact that a monitor/tester for Pi may only have partial observation capabilities, that is, it may not have access to the entire behavior x, but only part of it. This can be modeled in the semantical function Pi which can issue ? for x. 
There are other useful applications of ?, for instance, when properties are expressed in a temporal logic such as LTL [13]. The semantics of LTL are usually defined in terms of infinite behaviors. In monitoring, however, observed behaviors are finite. For this reason, it is not always possible to determine whether an observed behavior satisfies an LTL property or not, and “don’t know” needs to be issued as an answer [3]. The three-valued vector p = (P1(x), P2(x), ..., Pm(x)) is called the property configuration. As the behavior of the system evolves with time, so does the property configuration. The property configuration at time t is denoted p(t). We
denote by pi the i-th element of vector p. If p does not contain unknown values, that is, for all i = 1, ..., m, pi ≠ ?, then p is called determinate. Let Φ be a BFA, f a fault configuration, and p a property configuration. Φ is a propositional logic formula over variables Fi and Pi. Let Φ[f, p] be the formula obtained by substituting every variable Fi by the truth value fi, and every variable Pi such that pi ≠ ?, by the truth value pi. For example, let Φ1 = (F1 ⇒ P1) ∧ (F2 ⇒ P2). Let f = (false, true) and p = (false, ?). Then Φ1[f, p] ≡ (false ⇒ false) ∧ (true ⇒ P2) ≡ P2. Now let f′ = (true, true). Then Φ1[f′, p] ≡ (true ⇒ false) ∧ (true ⇒ P2) ≡ false. A BFA Φ is valid with respect to a system if for any time t, Φ[f(t), p(t)] is satisfiable.
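The substitution Φ[f, p] can be sketched in Python by treating a BFA as a predicate over an assignment dictionary: fault values are always substituted, while properties with value ? are left free. This is our own illustration (the dictionary encoding and helper names are assumptions, not part of the paper):

```python
from itertools import product

UNKNOWN = "?"  # three-valued outcome of a monitor/tester

# Phi1 = (F1 => P1) & (F2 => P2), as a predicate over a full assignment dict.
def phi1(v):
    return ((not v["F1"]) or v["P1"]) and ((not v["F2"]) or v["P2"])

def substitute(phi, f, p):
    """Return Phi[f, p]: fault values are always substituted; property values
    only when they are not '?'.  The result is a predicate over the remaining
    free property variables (here hard-coded to P1, P2 for the example)."""
    fixed = {"F%d" % (i + 1): b for i, b in enumerate(f)}
    fixed.update({"P%d" % (i + 1): b for i, b in enumerate(p) if b != UNKNOWN})
    free = [x for x in ("P1", "P2") if x not in fixed]
    def reduced(**kw):
        return phi({**fixed, **kw})
    return reduced, free

# f = (false, true), p = (false, ?):  Phi1[f, p] should reduce to P2.
reduced, free = substitute(phi1, (False, True), (False, UNKNOWN))
print(free)                                    # ['P2']
print([reduced(P2=b) for b in (False, True)])  # [False, True] -- the formula P2

# f' = (true, true):  Phi1[f', p] should reduce to false.
reduced2, _ = substitute(phi1, (True, True), (False, UNKNOWN))
print([reduced2(P2=b) for b in (False, True)])  # [False, False]
```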
2.3 The Diagnosis Problem
The diagnosis, Φ(p), is defined to be the set of all fault configurations that are consistent with a given property configuration p, that is:

Φ(p) = {f | Φ[f, p] is satisfiable}.

The objective of the diagnostic system is to compute, and represent in a compact manner, Φ(p(t)), at any given time t for which a diagnosis is requested by the user of the diagnostic system.

2.4 BFA Consistency
A BFA may be inconsistent, that is, unsatisfiable when interpreted as a classical (two-valued) propositional logic formula. For example, P1 ∧ ¬P1 is inconsistent. We require all BFAs to be consistent. This is because we assume that for every property Pi, there exists at least one behavior x ∈ X such that Pi(x) ≠ ?. This assumption is reasonable: if a property Pi is such that for all x ∈ X, Pi(x) = ?, then Pi does not serve any purpose, since it never gives any useful information. Thus Pi is redundant and can be eliminated from the BFA. Given the above assumption, since x is a possible behavior of the system, it can be observed, and when it is, the truth value Pi(x) (i.e., true or false) will be assigned to Pi. Then, an inconsistent BFA will evaluate to false, which means it is invalid. In other words, with the above assumption, inconsistent BFAs are invalid BFAs, and thus should not be considered. Consistency (satisfiability) of a BFA can be checked using a SAT solver. A BFA can also be inconsistent because of the fault propositions. For example, F1 ∧ ¬F1 is inconsistent. Clearly, such BFAs should also be discarded, since, by definition of the semantics, every fault is either present or absent, but not both.
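For small BFAs, the consistency check can be illustrated by brute-force enumeration; a real implementation would call a SAT solver, so the sketch below is only our illustration of the idea:

```python
from itertools import product

def satisfiable(phi, variables):
    # Brute-force SAT: try every two-valued assignment of the variables.
    # (A production implementation would hand the formula to a SAT solver.)
    return any(phi(dict(zip(variables, vals)))
               for vals in product([False, True], repeat=len(variables)))

# P1 & ~P1 is inconsistent; (F1 => P1) & (P1 => F1) is consistent.
print(satisfiable(lambda v: v["P1"] and not v["P1"], ["P1"]))           # False
print(satisfiable(lambda v: ((not v["F1"]) or v["P1"]) and
                            ((not v["P1"]) or v["F1"]), ["F1", "P1"]))  # True
```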
2.5 Assumptions Encoded in a BFA
BFAs allow assumptions on either observable system behavior or faults to be explicitly stated. For instance a BFA of the form (P1 ⇒ P2 ) ∧ ¬(F1 ∧ F2 ) ∧ · · ·
explicitly states an assumption P1 ⇒ P2 on behavior and an assumption ¬(F1 ∧ F2) on faults. The former states that if property P1 holds then property P2 also holds. This encodes something known about the behavior of the system. Notice that, given observed behavior x such that P1(x) = true, this assumption allows us to conclude that P2 also holds, even when P2(x) = ?. Therefore, using the BFA (P1 ⇒ P2) ∧ (P2 ⇒ F1), and observing that P1 holds, allows us to conclude that fault F1 is present, independently of what the result of the monitor of property P2 is.¹ The part ¬(F1 ∧ F2) states an assumption on faults, namely, that faults F1 and F2 cannot both be present at the same time. This is useful to encode assumptions about the fault model, for instance. Often assumptions may be “hidden” in a BFA. For example, (F1 ⇒ P1) ∧ (F2 ⇒ ¬P1) implies the assumption ¬(F1 ∧ F2) on faults, and (P1 ⇒ F1) ∧ (P2 ⇒ ¬F1) implies the assumption ¬(P1 ∧ P2) on behavior. Hidden assumptions may be unintentional, so it is important to extract them from the BFA and present them to the user. This can be done automatically: the assumptions can be derived from a BFA using existential quantification. Consider a BFA Φ. The hidden assumptions on faults in Φ are characterized by the propositional formula ∃P1, ∃P2, ..., ∃Pm : Φ and the implicit assumptions on behavior by the formula ∃F1, ∃F2, ..., ∃Fn : Φ. Notice that the former is a formula on the variables F1, ..., Fn and the latter is a formula on the variables P1, ..., Pm.
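Existential quantification by enumeration suffices to extract hidden assumptions from small BFAs. The sketch below is our own illustration (a symbolic implementation would use BDDs or repeated Shannon expansion); it recovers ¬(F1 ∧ F2) from the example BFA (F1 ⇒ P1) ∧ (F2 ⇒ ¬P1):

```python
from itertools import product

# Phi = (F1 => P1) & (F2 => ~P1): the example from the text.
def phi(v):
    return ((not v["F1"]) or v["P1"]) and ((not v["F2"]) or not v["P1"])

def exists(phi, quantified, free):
    """Existentially quantify `quantified` out of phi.  The result is a
    predicate over the `free` variables, computed by enumerating the
    quantified variables."""
    def result(v):
        return any(phi({**v, **dict(zip(quantified, q))})
                   for q in product([False, True], repeat=len(quantified)))
    return result

hidden = exists(phi, ["P1"], ["F1", "F2"])  # assumptions on faults: (E P1) Phi
for F1, F2 in product([False, True], repeat=2):
    print(F1, F2, hidden({"F1": F1, "F2": F2}))
# Only (True, True) yields False, i.e. the hidden assumption is ~(F1 & F2).
```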
2.6 The Symbolic Diagnoser
A diagnoser is an implementation of the diagnostic system, that is, one particular way of computing and representing in a compact manner the set Φ(p(t)). We provide one possible diagnoser here, but others may exist as well. Let Φ be a BFA and p be the property configuration at a given time. The diagnoser takes Φ and p as inputs, and produces as output a propositional logic formula Ψ(Φ, p) over the variables Fi. This formula characterizes all fault configurations in Φ(p) (Theorem 1). Ψ(Φ, p) is called the symbolic diagnosis and it can be computed as follows:

¹ What happens if there is a behavior x for which we observe P1(x) = true and P2(x) = false? This means that the assumption P1 ⇒ P2 made on system behaviors is wrong, therefore, the BFA is incorrect. See discussion on “Assertion validation” in Section 6.
A Combined On-Line/Off-Line Framework for Black-Box Fault Diagnosis
159
1. For every property Pi such that p(Pi) is not ?, i.e., p(Pi) ∈ {true, false}, the truth value p(Pi) is substituted in Φ in the place of symbol Pi. Call the resulting formula Φ[p]. The latter is a formula over the variables Fi and possibly also some variables Pi, namely those for which p(Pi) = ?.
2. If Φ[p] contains no Pi variables, then Ψ(Φ, p) := Φ[p]. Otherwise, let Pi1, ..., Pik be the property variables appearing in Φ[p]. Then Ψ(Φ, p) is obtained by eliminating these variables by existential quantification:

Ψ(Φ, p) := ∃Pi1, ..., ∃Pik : Φ[p]

The correctness of this diagnoser is stated below.

Theorem 1 (Soundness and completeness). f ∈ Φ(p) iff f satisfies Ψ(Φ, p).

For example, let Φ1 = (F1 ⇒ P1) ∧ (F2 ⇒ P2) and p = (false, true). Then, using the above procedure, we compute: Ψ(Φ1, p) ≡ Φ1[p] ≡ (F1 ⇒ false) ∧ (F2 ⇒ true) ≡ ¬F1. Indeed, from the fact that property P1 is false and the implication F1 ⇒ P1, we can deduce that fault F1 is absent. Next, let p′ = (false, ?). Using the above procedure, we compute: Ψ(Φ1, p′) ≡ ∃P2 : Φ1[p′] ≡ ∃P2 : (F1 ⇒ false) ∧ (F2 ⇒ P2) ≡ ¬F1. Thus, again we can conclude that fault F1 is absent.
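The two steps above can be made concrete by enumeration. The sketch below is our own illustration: it represents the diagnosis extensionally, as the set Φ(p) of consistent fault configurations, which by Theorem 1 coincides with the models of Ψ(Φ, p). It reproduces the worked example:

```python
from itertools import product

UNKNOWN = "?"

# Phi1 = (F1 => P1) & (F2 => P2), over assignment dicts.
def phi1(v):
    return ((not v["F1"]) or v["P1"]) and ((not v["F2"]) or v["P2"])

FAULTS = ["F1", "F2"]
PROPS = ["P1", "P2"]

def diagnosis(phi, p):
    """Substitute the known property values (step 1), existentially quantify
    the unknown ones (step 2), and return the set of fault configurations
    consistent with p, i.e. the models of Psi(Phi, p)."""
    known = {P: b for P, b in zip(PROPS, p) if b != UNKNOWN}
    unknown = [P for P in PROPS if P not in known]
    result = set()
    for f in product([False, True], repeat=len(FAULTS)):
        fv = dict(zip(FAULTS, f))
        if any(phi({**fv, **known, **dict(zip(unknown, q))})
               for q in product([False, True], repeat=len(unknown))):
            result.add(f)
    return result

print(sorted(diagnosis(phi1, (False, True))))     # all configurations with F1 absent
print(sorted(diagnosis(phi1, (False, UNKNOWN))))  # same conclusion: ~F1
```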
2.7 Ideal Diagnosability and Weaker Notions
Ideally, we would like to identify exactly which faults are present and which are not. This means that ideally Φ(p) should be a singleton. Obviously, this depends on the observed behavior, captured by the property configuration p. If the observed behavior is completely uncontrollable (e.g., produced by a “passive” on-board monitoring framework, passive in the sense that it does not provide inputs to the system) then there is not much we can do. For example, in the BFA Φ1 of the example above, if for all x ∈ X we have P2 (x) = true, then Φ1 cannot give us information about the status of fault F2 . Moreover, observations are sometimes inconclusive, that is, for a given x ∈ X and property Pi , we have Pi (x) = ?. In this case, no information on Pi is available. On the other hand, sometimes behaviors are controllable, in the sense that we can subject the system to predefined tests that are guaranteed to provide an answer to whether a property holds or not. These tests are designed to drive the inputs of the system and observe how the system responds in a given scenario. This is the case, for instance, in off-line (also called workshop) diagnosis [16]. Inspired by this, we introduce a notion of ideal diagnosability, that attempts to capture whether a given BFA can in principle provide precise information about which faults are present and which are absent, provided the truth value of every property Pi is known.
Formally, we say that a BFA Φ allows ideal diagnosability if for any determinate property configuration p, Φ(p) is a singleton. Recall that p is determinate if it does not contain unknown values, that is, for all i = 1, ..., m (where m is the length of vector p), pi ≠ ?. For example, let Φ1 = (F1 ⇒ P1) ∧ (F2 ⇒ P2). Then Φ1 does not allow ideal diagnosability, because Φ1((true, true)) obviously contains more than one fault configuration (in fact it contains all of them). On the other hand, the BFA Φ2 = (F1 ⇔ P1) ∧ (F2 ⇔ P2) allows ideal diagnosability. We can see that ideal diagnosability requires a fairly complete specification of the relation between properties and faults, which obviously is not always available. For this reason, we do not require BFAs to allow ideal diagnosability in general. A straightforward, albeit inefficient, way to check whether a given BFA Φ allows ideal diagnosability is to enumerate all possible determinate property configurations (there are 2^m of them, assuming there are m properties) and check, for each such configuration p, whether Ψ(Φ, p) has a unique solution. Checking whether a propositional logic formula has a unique solution can be done by running a SAT solver twice on the formula, the second time adding the negation of the solution found in the first run, if any. Weaker notions of diagnosability could also be defined. For example, one may wish to know whether a given BFA Φ allows one to determine, in principle, whether at least one of two faults F1 and F2 is present. This means that, for any determinate property configuration p, Ψ(Φ, p) should imply either F1 ∨ F2 or ¬(F1 ∨ F2).
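The brute-force check described above can be sketched directly; this is our illustration (a realistic implementation would replace the inner enumeration with the two SAT calls per configuration mentioned in the text):

```python
from itertools import product

# Phi2 = (F1 <=> P1) & (F2 <=> P2): the ideally diagnosable example.
def phi2(v):
    return (v["F1"] == v["P1"]) and (v["F2"] == v["P2"])

# Phi1 = (F1 => P1) & (F2 => P2): not ideally diagnosable.
def phi1(v):
    return ((not v["F1"]) or v["P1"]) and ((not v["F2"]) or v["P2"])

def ideal_diagnosability(phi, n_faults, n_props):
    """Enumerate all 2^m determinate property configurations and check that
    each one leaves exactly one consistent fault configuration."""
    for p in product([False, True], repeat=n_props):
        pv = {"P%d" % (i + 1): b for i, b in enumerate(p)}
        consistent = [f for f in product([False, True], repeat=n_faults)
                      if phi({**pv,
                              **{"F%d" % (i + 1): b for i, b in enumerate(f)}})]
        if len(consistent) != 1:
            return False
    return True

print(ideal_diagnosability(phi1, 2, 2))  # False: p = (true, true) allows all faults
print(ideal_diagnosability(phi2, 2, 2))  # True: each p pins down the faults exactly
```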
2.8 Monotonicity
One nice property of our framework is that it is monotonic, in the sense that more knowledge about properties implies more knowledge about faults. Let us make this precise. Consider two property configuration vectors p = (p1, ..., pm) and p′ = (p′1, ..., p′m). We say that p is more determinate than p′, denoted p ≤ p′, if for all i = 1, ..., m, pi = ? implies p′i = ?. Intuitively, p ≤ p′ means that p provides more knowledge than p′.

Theorem 2 (Monotonicity). p ≤ p′ implies Φ(p) ⊆ Φ(p′).
3 On-Line and Off-Line Diagnosis
As illustrated in Figure 1, we envisage a diagnostic system which combines on-line diagnosis (also called on-board diagnosis in the automotive domain) with off-line diagnosis (also called workshop diagnosis in the automotive domain [16]).

3.1 On-Line Monitoring and Diagnosis
The on-line diagnostic system consists of several components: First, a set of monitors, denoted M1 , ..., Mm in Figure 1, where Mi monitors property Pi . At any given point in time t, Mi outputs Pi (x(t)), where x(t) is
the behavior of the system up to time t. Together, the outputs from all monitors form the current property configuration vector p(t). Second, a fault knowledge base. The latter contains a BFA Φ, which is an input to the diagnostic system, designed by the user, and the current symbolic diagnosis, Ψ(Φ, p(t)), which is updated dynamically by the diagnoser. Third, the on-line diagnosis system includes a diagnoser. The diagnoser takes as input the BFA Φ and the current property configuration p(t), computes the current diagnosis, Ψ(Φ, p(t)), and stores the latter in the fault knowledge base. It may also provide feedback to the user, e.g., telling the user that it is necessary to visit a workshop for further, off-line diagnosis. How this feedback is generated is beyond the scope of this paper. How often the current property configuration and current diagnosis are updated depends on the particular needs of the system, which in turn depend on the application.

3.2 Off-Line Testing and Diagnosis
The off-line diagnostic system is not very different from the on-line one. The only difference is that monitors are replaced by testers. A tester Tj is supposed to test a certain property Pi(j). There may be more than one tester for the same property, and no tester for some properties. Typically, during off-line diagnosis, one would like to test properties for which no conclusion was reached during the on-line phase, that is, properties Pi such that Pi(x) = ?. On the other hand, there may be properties for which on-line monitoring is guaranteed to be conclusive, and these may require no off-line tests. The off-line tests are executed either in parallel, or in sequence, or using a mix of both strategies. How exactly this is done depends on the application, and in particular on the type of testing strategy that the testing harness permits. An interesting problem here is test sequencing, discussed below. One assumption we make is that during execution of the tests the fault configuration of the system does not change. This is important since, contrary to on-line monitoring, the tests are performed only for a given amount of time and not continuously. If faults change during test execution, then the result of a test does not mean much, since it could be different if the test were to be executed again. Once the tests are run, a new property configuration vector is obtained. This is used, as in on-line diagnosis, to update the symbolic diagnosis.

Test sequencing: Off-line diagnosis raises the problem of test sequencing, namely, what is the “best” order in which tests should be run. It is crucial to define carefully what exactly “best” means, and many different variants of this problem have appeared in the literature (e.g., see [12]). Here, we define and study a simple variant that fits our logical framework. Executing tests takes time and other resources, i.e., it is a costly process.
Therefore, there is an incentive to minimize the number of tests that need to be run. In the ideal diagnosability case, running all the tests is guaranteed to produce a definite answer as to the status of each and every fault. For example,
[Figure 2: two decision trees over the tests T1, T2, T3; leaves are labeled 0 (fault absent) or 1 (fault present).]
Fig. 2. Test execution strategies as decision trees
consider the simple case where there is a single fault F1, and the BFA F1 ⇔ ((P1 ⇒ P2) ∧ (¬P1 ⇒ P3)). This BFA clearly allows ideal diagnosability. Suppose there is one test for each property, i.e., test Ti for Pi, for i = 1, 2, 3. In which order should the tests be run? Two possible execution strategies are shown in Figure 2. Each strategy is in essence a decision tree: node Ti of the tree corresponds to executing test Ti; the left (respectively, right) branch is followed when Ti produces a value indicating that Pi is false (respectively, true); a leaf of the tree labeled 0 means that fault F1 is absent, and 1 means that F1 is present. Intuitively, the strategy represented by the left-most tree is better: the tree is smaller, and it is guaranteed to require execution of at most two tests in the worst case; on the other hand, the right-most tree may require execution of three tests, and thus represents a generally more costly strategy. A reasonable definition of “best” test strategies could therefore be “representable by a decision tree of minimal depth”. Unfortunately, finding a strategy representable by a tree of minimal depth is an intractable problem. Indeed, satisfiability of a propositional formula φ can be easily reduced to the problem of checking whether there exists a tree of depth 1 representing φ: such a tree exists iff φ is unsatisfiable or valid. Similar test sequencing problems could be defined for weaker notions of diagnosability.
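For the small running example, the minimal worst-case depth can be computed by brute force. The recursive sketch below is our own illustration (the table-based encoding is an assumption); it confirms that two tests suffice in the worst case:

```python
from itertools import product

# Single fault F1 with BFA  F1 <=> ((P1 => P2) & (~P1 => P3)).
def fault(P1, P2, P3):
    return ((not P1) or P2) and (P1 or P3)

# All 8 determinate property configurations, each labeled with the fault status.
TABLE = {p: fault(*p) for p in product([False, True], repeat=3)}

def optimal_depth(rows, remaining):
    """Worst-case number of tests an optimal decision tree needs to classify
    every configuration in `rows`, choosing among the test indices in
    `remaining` (0, 1, 2 stand for T1, T2, T3)."""
    labels = {TABLE[p] for p in rows}
    if len(labels) == 1:
        return 0  # fault status already determined, no further test needed
    return min(1 + max(optimal_depth([p for p in rows if p[t] == b],
                                     remaining - {t})
                       for b in (False, True))
               for t in remaining)

print(optimal_depth(list(TABLE), {0, 1, 2}))  # 2: run T1 first, then T2 or T3
```

Running T1 first splits the problem into two one-test subproblems (F1 = P2 when P1 holds, F1 = P3 otherwise), matching the left-most tree of Figure 2.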
4 Application: Capturing the D-Matrix
One of the representations often employed in fault diagnosis contexts, in the automotive but also other domains, is the so-called D-matrix (sometimes called the “diagnostic matrix” [11] and sometimes the “dependency matrix” [4]). In this section, we compare D-matrices to BFAs. We show that capturing D-matrices as BFAs is straightforward, while this is not true in the opposite direction. The rows of a D-matrix correspond to faults (or “components” that may be faulty or non-faulty) and the columns correspond to tests. In the boolean version of the D-matrix, an entry (i, j) of the matrix contains the expected outcome of test j, given that fault i is present. For instance, if test j has a boolean outcome, then (i, j) is either 0 or 1. In some cases the expected outcome of a test is uncertain (that is, it may be either 0 or 1, but we do not know in advance). In these cases the entry (i, j) may contain a special “?” value.
It is important to note that usually the D-matrix representation makes a single-fault assumption, in the sense that only one of the faults is assumed to be present. This makes it possible to obtain the values of the entries of the matrix, for example, by “injecting” a fault, executing the tests, and observing their outcome. Only a single fault is injected at any time. The matrix may also contain a special row corresponding to “no fault”, containing the test outcomes when no fault has been injected. The single-fault assumption is also used when checking diagnosability. The latter holds iff there are no two rows in the matrix such that their entries “match” for every column. Two entries match if they have identical values or if one of the values is ? (because this means the outcome can be anything). If rows i1 and i2 match then it is possible (for some test outcomes, which we cannot predict a priori) that faults i1 and i2 cannot be distinguished. On the other hand, if no rows match, then by executing all the tests we should be able to identify the (single, by assumption) fault whose row does not contradict the test outcomes. Note that it is possible that all rows contradict the test outcomes. This implies either that the D-matrix is incorrect, or that there are additional faults (rows) that have not been included in the matrix. There are also probabilistic versions of the D-matrix representation. In the probabilistic version, the entry (i, j) is a conditional probability distribution over the possible outcomes of test j. For example, if there are two outcomes 0 and 1, the entry (i, j) could be (0.1, 0.9), for a 90% probability of getting outcome 1 when executing test j and a 10% probability of getting 0, given that the fault is i. We will not consider probabilistic D-matrices in this paper. A boolean D-matrix M can be easily captured in terms of a BFA: an entry M(i, j) = b, with b ≠ ?, can be represented as the assertion Fi ⇒ Pj, where Pj is the property associated with test j.
Pj is defined as follows. Given a behavior x, if the inputs of x do not match the inputs that test j would provide, then Pj (x) = ?. Otherwise: if the outcome of test j on x is b, then Pj (x) = true, otherwise Pj (x) = false. Practically speaking, the tester for Pj implements exactly the test j, and if a monitor is needed instead, it can be derived from an implementation of test j by turning outputs of the test into monitored inputs. On the other hand, capturing BFA assertions in a D-matrix representation can be non-straightforward, as well as expensive. For example, in order to capture the assertion F1 ⇒ (P1 ∨P2 ) one would have to devise a test that checks whether the disjunction of properties P1 , P2 holds and add a corresponding column to the matrix (notice that almost all entries of the column will be ?, except for the entry corresponding to F1 , which is wasteful). If there are already tests covering properties P1 and P2 individually (presumably because these are used in other assertions) then: (1) one would need to ensure consistency of the information stored in the matrix (for each row, the value of the entry of P1 ∨ P2 must not contradict the disjunction of the values of P1 and P2 ); and (2) many redundant entries are introduced in the matrix. As another example, in order to capture the assertion P1 ⇒ F1 , one would have to first transform it into the equivalent form ¬F1 ⇒ ¬P1 . Then, one would
164
S. Tripakis
have to add a row in the matrix corresponding to ¬F1 , and a column corresponding to a test that checks ¬P1 . Again, one would have to maintain consistency between rows F1 and ¬F1 . This can become difficult for more complex assertions involving multiple faults, e.g., P1 ⇒ (F1 ∨ F2 ) or (F1 ∧ F2 ) ⇒ P2 , and so on. These simple examples show that the BFA model is more flexible and more convenient than the D-matrix model.
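To make the comparison concrete, both constructions discussed above can be sketched in a few lines of Python. This is our illustration, not the paper's; we assume ? entries are encoded as None and a D-matrix is a list of rows indexed by fault:

```python
# Sketch (ours): single-fault diagnosability of a boolean D-matrix,
# and the translation of its entries into BFA assertions Fi => (Pj == b).
# '?' entries are encoded as None.

def rows_match(r1, r2):
    # Two entries match if they are equal or if either is '?'.
    return all(a is None or b is None or a == b for a, b in zip(r1, r2))

def diagnosable(M):
    """Diagnosable iff no two distinct fault rows match on every column."""
    return not any(rows_match(M[i], M[j])
                   for i in range(len(M)) for j in range(i + 1, len(M)))

def dmatrix_to_bfa(M):
    """Each non-'?' entry M[i][j] = b yields the assertion Fi => (Pj == b)."""
    return [(i, j, b) for i, row in enumerate(M)
            for j, b in enumerate(row) if b is not None]

# Faults F1, F2, F3 over tests T1, T2:
M = [[True, False],   # F1
     [True, None],    # F2: outcome of T2 unpredictable, so row matches F1
     [False, True]]   # F3
print(diagnosable(M))      # False: F1 and F2 cannot be distinguished
print(dmatrix_to_bfa(M))   # [(0, 0, True), (0, 1, False), (1, 0, True), (2, 0, False), (2, 1, True)]
```

Note how each non-? entry of the matrix yields exactly one BFA assertion, whereas the reverse translation, as argued above, may force new columns and redundant entries.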
5 Related Work
Our framework can be seen as a specialization of Reiter’s consistency-based diagnosis framework [14]. The latter is a triple (SD, C, O), where SD is a system description, C a set of components, and O a set of observations. In Reiter’s framework, SD and O are general first-order logic formulas, whereas C is a set of constants. We can cast our framework in Reiter’s terms as follows: SD will be the BFA Φ, C will be the set of fault propositions {F1, ..., Fn}, and O will represent the known values in the property configuration p. For instance, if there are three properties P1, P2, P3 and p = (true, ?, false), then O is P1 ∧ ¬P3. Although more general than ours, Reiter’s framework has been intended mostly as a “white-box” framework, where the entire structure and behavior of the system are modeled in SD. This is evident both from the terminology used (“system description”, “components”) and from the examples in the original and subsequent papers by Reiter and other authors. By restricting the framework, and treating faults as “first-class objects”, we can focus only on the relationships between behavior and faults, and not on an extensive modeling of behavior, which is infeasible in practice. In that sense, our framework can also be seen as an expert system, in that it attempts to capture, through BFAs, the knowledge of the engineers. Another benefit of restricting Reiter’s framework is that computing the diagnosis becomes much simpler: our method only uses existential quantification of propositional formulas, and does not rely on conflict sets and hitting sets as in Reiter’s method. Another difference is that our framework guarantees monotonicity.

Our framework is also closely related to Bauer’s [2]. He also uses separately defined monitors to “source out” behavior and reduce Reiter’s first-order logic framework to propositional logic. Causal and structural information still remain in SD, however.
We go one step further and make no assumptions on how the relationships between behavior and faults are specified. In [2], diagnoses are computed by LSAT, a specially-constructed SAT-solver that can produce multiple solutions given a user-provided n-fault assumption. In our framework, diagnoses are represented symbolically as a propositional formula on the fault variables Fi .
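The diagnosis computation just described can be sketched by explicit model enumeration. This is our illustration only (a realistic implementation would represent the result symbolically, e.g., with BDDs or a SAT solver), and the BFA `phi` below is a hypothetical example:

```python
# Sketch (ours) of the diagnosis computation described in the text:
# given a BFA Phi (a predicate over fault and property valuations) and a
# property configuration p (True/False/None per property), the diagnosis
# is the set of fault valuations f for which Phi(f, q) holds for some
# completion q of p -- existential quantification over the property variables.
from itertools import product

def diagnoses(phi, p, n_faults):
    completions = [q for q in product([False, True], repeat=len(p))
                   if all(pi is None or pi == qi for pi, qi in zip(p, q))]
    return [f for f in product([False, True], repeat=n_faults)
            if any(phi(f, q) for q in completions)]

# Hypothetical BFA: (F1 => P1) and (P2 => F2)
def phi(f, q):
    F1, F2 = f
    P1, P2 = q
    return (not F1 or P1) and (not P2 or F2)

# Observed P1 = False, P2 unknown:
print(diagnoses(phi, (False, None), 2))  # [(False, False), (False, True)], i.e., not F1
```

The returned set of fault valuations is exactly the set of models of the propositional formula on the Fi that the text calls the diagnosis.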
6 Conclusions and Perspectives
We proposed a fault diagnosis framework that assumes no explicit model of system behavior, but instead relies on a formal specification of the relationships between behavior and “faults” (or fault “causes”, or any other notion
A Combined On-Line/Off-Line Framework for Black-Box Fault Diagnosis
165
that may be relevant to a particular application). By decoupling behaviors and faults, we can treat the problem of how to construct monitors for behaviors separately from the problem of how to use monitor outputs to gain knowledge about faults. Future directions include the following.

Richer assertion languages: In this paper we considered a simple instance of this framework where the relationships between behaviors and faults are captured in propositional logic. This simple language can be extended in multiple ways. For instance, one possibility is to use temporal logic, or some other formalism that allows one to talk about the ordering or timing of observations. For example, we may want to express that if property P1 is observed before P2 is observed, then fault F1 is present, but if P2 has been observed before P1, then nothing can be inferred. Again, P1 and P2 can be complex (and themselves dynamic) properties, but the top-level specification treats them as atoms.

Probabilistic and statistical learning frameworks: Another possibility is to move from a boolean to a probabilistic interpretation. For instance, we may want to state something of the form: “if fault F1 is present then there is a 90% probability that test T1 produces outcome true and a 10% probability that it produces false”. Or: “if property P1 is observed then there is a 50% probability that F1 is present”. Combined with a language that allows us to speak about time, this could become: “if property P1 has been continuously true for at least t time units, then there is an h(t)% probability that fault F1 is present”, where h(·) is some function. Probabilistic logics and statistical learning methods could potentially be useful in extending the framework in this direction.

Assertion validation: A BFA is an assertion relating the observed behaviors and the faults.
This assertion is presumably constructed by humans, based on their experience or other means; therefore, it may be invalid, that is, erroneous in a variety of ways. To list a few: (1) The assumptions hidden in the BFA about system behavior (e.g., P1 ⇒ P2) may be incorrect. This can be discovered, for instance, when a behavior x is observed such that P1(x) = true and P2(x) = false. (2) Similarly, the hidden assumptions about faults (e.g., ¬(F1 ∧ F2)) may be incorrect. This can be discovered when the diagnosis obtained by some other means contradicts these assumptions (for example, we find that both faults F1 and F2 were present). (3) Finally, the relationships between behavior and faults may be incorrect. This can be discovered when the diagnosis obtained by the BFA and some property configuration contradicts the diagnosis obtained by some other means. For instance, if Ψ(Φ, p) is ¬F2 but somehow F2 turns out to be present, then this implies the BFA is invalid. In all the above cases, once invalidity is detected, the problem is how to locate which “parts” of the assertion are incorrect and to correct them. This is of course a difficult problem, of locating errors in a formal model, and it is related to automated debugging.
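As a small illustration of case (1) above, a hidden behavioral assumption such as P1 ⇒ P2 is refuted by any observed behavior x with P1(x) = true and P2(x) = false. A sketch, using hypothetical integer-valued behaviors and properties of our own invention:

```python
# Sketch (ours) of invalidity check (1): collect observed behaviors that
# witness the failure of a hidden assumption "P1 implies P2".
def refutes_implication(p1, p2, behaviors):
    """Return the behaviors x with p1(x) true and p2(x) false."""
    return [x for x in behaviors if p1(x) and not p2(x)]

# Hypothetical properties over integer-valued behaviors:
p1 = lambda x: x > 0
p2 = lambda x: x > 10
print(refutes_implication(p1, p2, [5, 20, -3]))  # [5] refutes P1 => P2
```

Cases (2) and (3) are analogous: the refuting witness is a diagnosis obtained by other means rather than an observed behavior.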
Acknowledgments. Thanks to Mark Wilcutts and Hakan Yazarel from Toyota, and Ken McMillan and Anubhav Gupta from Cadence, for useful discussions. Thanks to Andreas Bauer from TU Munich for pointing out his work during a discussion at the Dagstuhl 2007 Seminar on Runtime Verification (the ideas outlined in this paper arose independently).
References

1. Aitken, R.C.: Modeling the unmodelable: Algorithmic fault diagnosis. IEEE Design & Test of Computers 14(3), 98–103 (1997)
2. Bauer, A.: Simplifying diagnosis using LSAT: a propositional approach to reasoning from first principles. In: Barták, R., Milano, M. (eds.) CPAIOR 2005. LNCS, vol. 3524, pp. 49–63. Springer, Heidelberg (2005)
3. Bauer, A., Leucker, M., Schallhart, C.: Comparing LTL semantics for runtime verification. Journal of Logic and Computation (February 2009)
4. Beygelzimer, A., Brodie, M., Ma, S., Rish, I.: Test-based diagnosis: Tree and matrix representations. In: IM 2005 – IFIP/IEEE International Symposium on Integrated Network Management, pp. 529–542 (2005)
5. Chen, M., Zheng, A.X., Lloyd, J., Jordan, M.I., Brewer, E.: Failure diagnosis using decision trees. In: Autonomic Computing (2004)
6. Iman, S., Joshi, S.: The e-Hardware Verification Language. Springer, Heidelberg (2004)
7. Isermann, R.: Model-based fault detection and diagnosis: status and applications. Annual Reviews in Control 29, 71–85 (2005)
8. ISO/IEC: Open Systems Interconnection Conformance Testing Methodology and Framework – Part 1: General Concept – Part 2: Abstract Test Suite Specification – Part 3: The Tree and Tabular Combined Notation (TTCN). Technical Report 9646, International Organization for Standardization – Information Processing Systems – Open Systems Interconnection, Genève (1992)
9. Krichen, M., Tripakis, S.: Conformance Testing for Real-Time Systems. Formal Methods in System Design 34(3), 238–304 (2009)
10. Lee, D., Yannakakis, M.: Principles and methods of testing finite state machines – A survey. Proceedings of the IEEE 84, 1090–1126 (1996)
11. Luo, J., Pattipati, K., Qiao, L., Chigusa, S.: Towards an integrated diagnostic development process for automotive systems. In: IEEE Intl. Conf. Systems, Man and Cybernetics, pp. 2985–2990 (2005)
12. Pattipati, K., Alexandridis, M.: Application of heuristic search and information theory to sequential fault diagnosis. IEEE Trans. Systems, Man and Cybernetics 20(4), 872–887 (1990)
13. Pnueli, A.: A temporal logic of concurrent programs. Theoretical Computer Science 13, 45–60 (1981)
14. Reiter, R.: A theory of diagnosis from first principles. Artificial Intelligence 32(1), 57–95 (1987)
15. Sampath, M., Sengupta, R., Lafortune, S., Sinnamohideen, K., Teneketzis, D.: Diagnosability of discrete event systems. IEEE Transactions on Automatic Control 40(9) (September 1995)
16. Struss, P., Price, C.: Model-based systems in the automotive industry. AI Magazine 24(4), 17–34 (2004)
17. Tretmans, J.: Testing concurrent systems: A formal approach. In: Baeten, J.C.M., Mauw, S. (eds.) CONCUR 1999. LNCS, vol. 1664, p. 46. Springer, Heidelberg (1999)
18. Tripakis, S.: Fault Diagnosis for Timed Automata. In: Damm, W., Olderog, E.-R. (eds.) FTRTFT 2002. LNCS, vol. 2469, pp. 205–224. Springer, Heidelberg (2002)
Hardware Supported Flexible Monitoring: Early Results Antonia Zhai, Guojin He, and Mats P.E. Heimdahl University of Minnesota [email protected], [email protected], [email protected]
Abstract. Monitoring of software’s execution is crucial in numerous software development tasks. Current monitoring efforts generally require extensive instrumentation of the software or a dedicated hardware test rig designed to provide visibility into the software. To fully understand software’s behavior, the production software must be studied in its production environment. To address this fundamental software engineering challenge, we propose a compiler- and hardware-supported framework for monitoring and observation of software-intensive systems. We place three fundamental requirements on our monitoring framework. The monitoring must be non-intrusive, low-overhead, and predictable so that the software is not unduly disturbed. The framework must also allow low-level monitoring and be highly flexible so we can accommodate a broad range of crucial monitoring activities. The general idea behind our work is that to make dramatic progress in non-intrusive, predictable, and fine-grained monitoring, we must change how software is compiled and how hardware is designed; a software-monitoring framework covering the development of monitors, through compilation, and down to the hardware is essential. To achieve our goals, we have pursued an approach leveraging the rapid emergence of multi-core processor architectures to achieve a non-intrusive, predictable, fine-grained, and highly flexible general-purpose monitoring framework. In this report we describe our initial steps in this direction and provide some preliminary performance results achieved with this new multi-core architecture. We use separate cores for the execution of the application to be monitored and the monitors. We augment each core with identical programmable extraction logic that can observe an application executing on the core as its program state changes.
1 Introduction
Monitoring of software’s execution is crucial in numerous software development tasks. For example, test oracles monitor an application’s execution (the outputs and typically part of the application’s internal state), test adequacy coverage
This work has been partially supported by NASA Ames Research Center Cooperative Agreement NNA06CB21A, NASA IV&V Facility Contract NNG-05CB16C, and the L-3 Titan Group.
S. Bensalem and D. Peled (Eds.): RV 2009, LNCS 5779, pp. 168–183, 2009. © Springer-Verlag Berlin Heidelberg 2009
analysis tools must determine what portions of an application have been executed (and possibly how those parts of the application were reached), run-time security and safety monitors in critical systems must determine if security and safety policies are maintained by the application, and—of course—all other run-time verification tasks envisioned in the Run-Time Verification series of workshops. These monitoring tasks generally require three crucial properties. First, to ensure that the performance of the monitored software is not degraded to a point where the monitoring is simply infeasible, the monitoring must incur low overhead. Second, since monitors are likely to change during the lifetime of the monitored software (for example, if safety and security policies change) or—in the case of test oracles and coverage measurement tools—will be removed entirely at some point, we must have predictable behavior, in terms of both functional behavior and performance, so that we can predict the impact of changed or removed monitors. Finally, to enable access to internal program state information (crucial information in all testing, and in safety and security monitoring), we must have fine-grained monitoring where both the state information in the monitored program and the program point where the monitoring takes place can be selected to suit the task at hand. Several communities have addressed the monitoring problem from various angles. For instance, test oracles are developed largely ad hoc and rely on intrusive software instrumentation of the software under study [25]; runtime verification generally relies on software instrumentation with high overhead [23]; and most dedicated hardware solutions are targeted towards monitoring narrow properties [7,35,3,26,36,6,32].
Unfortunately, these approaches are fragmented, largely ad hoc, and address narrow aspects of the monitoring problem (such as efficient implementation of monitors for taint propagation [7,35,3,26,36,6,32]). These monitoring efforts typically require extensive instrumentation of the software and/or execution of the software in a dedicated hardware test rig or emulator. Under such conditions the software’s behavior is not the same as it would be in its intended target environment. To fully understand software’s behavior—in particular embedded software’s behavior—the production software must be studied in its production environment. To alleviate the problems with overhead and predictability of instrumentation for monitoring purposes, we have pursued an approach leveraging the rapid emergence of multi-core processor architectures [10,14,1,33,15] to achieve a non-intrusive, predictable, fine-grained, and highly flexible general-purpose monitoring framework through monitoring-aware compilers coupled with novel architectural enhancements to multi-core architectures. In this report we describe our initial steps in this direction and provide some preliminary performance results achieved with this new multi-core architecture. We use separate cores for the execution of the application to be monitored and the monitors. We augment each core with identical programmable extraction logic that can observe an application executing on the core as its program state changes. If a state change that needs to be monitored occurs, the extraction logic will pack the state change into a message and send it to one of the
monitor cores for verification. In this architecture, one or more cores can be used to monitor and potentially share the workload, while introducing little or no intrusion to the software being monitored. If and when the monitors are no longer needed, the processor capacity previously occupied by the monitors can be reclaimed and allocated to production software without affecting the software originally being monitored. The communication between cores can be achieved either through a dedicated or an existing on-chip interconnection network, depending on the need for predictable behavior. In the work presented here, we use an existing interconnection network and provide performance data for two monitoring problems—tracking memory problems such as memory leaks, and taint analysis—that thoroughly stress the ability to efficiently extract data from an application and communicate that data to a monitor. Although our work on a monitoring-aware compiler and multi-core architecture is far from complete, the initial steps and performance evaluation presented in this report illustrate the potential of this approach as we attempt to make run-time verification and monitoring in the production environment standard practice. The remainder of the paper is organized as follows. We provide the motivation for our work in Section 2 and an overview of our compiler- and architecturally-supported vision in Section 3. We present the details of the Ex-Mon architecture in Section 4 and illustrate the effectiveness of the proposed architecture with two case studies in Section 5. Finally, in Section 6, we discuss the implications of our results and point to future directions.
2 Motivation and Problem Overview
Monitoring of the execution of a software system plays a central role in numerous software development activities. Monitoring is prevalent in testing (e.g., test oracles and test coverage tools), debugging (e.g., breakpoints and watch variables), run-time verification, safety and security monitors and interlocks, etc. Unfortunately, the performance penalties (both in terms of cost and predictability of the executions) are significant obstacles to effective use in the software development process. The motivation for our work in compiler and hardware architecture support for monitoring has come from problems and opportunities in primarily two areas: (1) the cost and difficulty of thoroughly testing embedded critical applications, and (2) the opportunities for monitoring offered by model-based software development.

Software Testing: A test oracle must monitor both the outputs from an application and internal state information, since fault finding can be severely affected by which and how many data items are observed by the oracle. Thus, instrumentation of the application is generally required to collect test information in log files or provide it to on-line oracles. Either way, the overhead associated with the data collection can be large enough to delay projects for months. In practice, we have seen with our industrial collaborators that determining that a small modification has not “broken” an embedded subsystem by
Hardware Supported Flexible Monitoring: Early Results
171
simply rerunning its test suite can take weeks. If additional modifications are needed, the delays add up and quickly lead to costly schedule delays. Accelerated testing through hardware support could provide orders-of-magnitude speedup of this process. A worse situation can emerge when one attempts to measure how well a test suite has covered an application as judged by some test-adequacy criterion. Of particular interest in our previous work has been the Modified Condition and Decision Coverage (MC/DC) criterion [5] used in the avionics industry and required in a standard such as DO-178B [28]. For the most critical applications, MC/DC has to be demonstrated on the object code (as opposed to the source code), and extensive instrumentation of the code is needed to establish this coverage. Unfortunately, current approaches relying on instrumentation lead to such performance degradation and increase in code size that only portions of the application can be instrumented at any time; the full test suite is run to establish coverage of the instrumented part of the application, another part is instrumented, the test suite is rerun, and so on. The problem is so severe that simply establishing coverage can be comparable in cost to test development. In addition, all testing will have to be repeated without the instrumentation, since there are no guarantees that the instrumentation did not change the behavior of the application under test. Clearly, low-overhead and predictable monitoring would be hugely beneficial.

Opportunities in Model-Based Development: In model-based development, the development effort is centered around a formal description of the proposed software system. There are currently numerous commercial tools that attempt to provide these capabilities—for example, SCADE from Esterel Technologies [8], Statemate from i-Logix [11], and Simulink and Stateflow from The MathWorks Inc. [18,19].
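To see why the MC/DC measurement described above is so invasive, consider a small sketch (ours, not an avionics tool) of the probes that instrumentation must add: every evaluation of a decision records the outcome of each condition, and MC/DC asks that each condition be shown to independently affect the decision:

```python
# Sketch (ours): instrumentation probe for measuring MC/DC on the
# two-condition decision (a or b).
observed = set()

def decision(a, b):
    observed.add((a, b, a or b))   # probe: record conditions and outcome
    return a or b

# A hypothetical test suite exercising the decision:
for a, b in [(True, False), (False, False), (False, True)]:
    decision(a, b)

def mcdc_covered(cond_index):
    """Condition i is MC/DC-covered if two observed evaluations differ
    only in condition i and produce different decision outcomes."""
    return any(x[cond_index] != y[cond_index] and x[2] != y[2] and
               all(x[k] == y[k] for k in range(2) if k != cond_index)
               for x in observed for y in observed)

print(mcdc_covered(0), mcdc_covered(1))  # True True
```

Even this toy probe adds a recording step to every evaluation of every decision, which is exactly the kind of overhead and code growth the text attributes to instrumentation-based coverage measurement.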
Note here that this process leaves us with several development artifacts that are in an executable form: first, the source code that will be used to control the system under development (typically C or C++ code); second, the formal (or semi-formal) models from which the source code was derived (in our application domain, most likely Simulink and Stateflow models); third, collections of formalized required properties of the software derived for verification and testing purposes (generally captured as synchronous observers expressed in the modeling language or as temporal logic formulas for model checking). Currently, after the source code has been developed and tested, the model and property artifacts are used only for maintenance and documentation purposes. In our work, we envision these artifacts seeing additional use as monitors after software deployment. As a concrete example, consider a recent project in which, in collaboration with Rockwell Collins Inc., we developed a formal model for the mode logic of a Flight Guidance System [20]. In the project, the system requirements were provided as informal “shall” statements. These requirements were relatively mature and well understood. We then created a model using Simulink from The MathWorks [18]; the completed model consisted of about 4,500 Simulink blocks. Throughout the creation of the model, we continually used the execution
capabilities of Simulink to execute the model and informally confirm that it behaved as we expected. In the formal verification phase, we manually translated the shall statements into formal properties stated over the Simulink model in CTL and the NuSMV model checker [24] was then used to confirm whether the property held over the model or not. The effort resulted in 300+ CTL properties based on the informal requirements. In a production setting, after the Simulink model has been adequately validated, it would be used as a basis for the manual design and implementation of the production software (here the development was governed by a standard for airborne software—DO-178B [28]—that takes a highly skeptical view of code generation). If we could execute the original Simulink model as a synchronous observer next to the production software, we could conceivably detect potential design or implementation faults as well as possible hardware faults (single bit upsets, stuck at faults, etc.) that might affect the execution of the production software; this would be a highly valuable monitoring capability that would complement the fault detection of failures in the sensors, actuators, and the environment outlined in the previous section. In addition, if the requirements on the system were formalized as declarative properties (such as the CTL properties discussed above), these properties could be deployed as additional monitors used to detect both possible faults in the original model as well as additional design, implementation, and hardware faults affecting the correct operation of the software. Given appropriate support for efficient monitoring, we hypothesize that such monitors working in concert will provide a highly effective fault detection scheme for the embedded application of interest in our proposed effort.
3 Monitoring Overview
A program execution monitor observes the internal states of an application and verifies that a set of properties defined over the application are satisfied. Note here that access to fine-grained internal application state information is essential for our target monitoring tasks; for example, a test oracle monitor will most often need access to many internal variables, and a test adequacy coverage monitor might need to know about all branch decisions as well as the conditions guarding the branches (as in the case of MC/DC). To perform the monitoring activities, a subset of the internal state of the application is exposed to the monitor through some form of communication. The implementation of the monitor can be viewed as a collection of monitoring routines. For example, the monitor might be a test oracle monitoring for a large number of required functions or invariants, each implemented as a separate monitoring routine. At runtime, upon receiving a state update, the monitor invokes the subset of the monitoring routines that are affected; monitoring routines not affected by the state change need not be evaluated. The monitor is responsible for keeping track of the persistent state information needed to verify the properties of interest. Monitoring routines can be invoked in two distinct ways: explicit and implicit invocation. Monitoring routines with explicit invocation are invoked when
the execution of the application under study reaches a certain program point. Typical examples are monitoring routines performing pre- and post-condition checking that must be invoked at the entry and exit of a function. A monitoring routine with implicit invocation is invoked when an application state component of interest to the monitoring routine changes. A typical example would be a monitoring routine checking a program invariant; such a routine needs to be notified anytime any state variable covered by the invariant—possibly through aliases—is changed. In summary, there are four key steps when performing monitoring: (1) extracting the state information in the application needed by the monitor; (2) communicating this state information to the monitor; (3) updating the state in the monitor; and (4) dispatching the set of monitoring routines in response to the state change in a timely fashion. The simplest way to monitor the execution of an application is to integrate the set of monitoring routines with the application through instrumentation [23,16]. This way, the application state is completely visible to the monitor, so no explicit state extraction and communication is needed. The monitor maintains its own state in the same address space as the application. In this scenario, dispatching explicit monitor routines is straightforward—calls to the appropriate monitor routines can be inserted as instrumentation in the original program. To dispatch the appropriate implicit monitoring routines, all instructions that can generate a state change of interest must be instrumented. At runtime, all relevant state changes must be examined, and the proper monitor routine dispatched. If instrumented instructions occur often, this can be a source of significant performance overhead.
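The implicit-invocation dispatch scheme described above can be sketched as follows (our illustration with hypothetical names: routines register for the state variables they depend on, and a state update invokes only the affected subset):

```python
# Sketch (ours) of a monitor as a collection of routines dispatched
# on state updates; only routines registered for the changed variable run.
from collections import defaultdict

class Monitor:
    def __init__(self):
        self.state = {}
        self.routines = defaultdict(list)   # variable -> monitoring routines

    def register(self, variables, routine):
        for v in variables:
            self.routines[v].append(routine)

    def update(self, variable, value):
        self.state[variable] = value
        for routine in self.routines[variable]:   # implicit invocation
            routine(self.state)

violations = []
m = Monitor()
# Invariant routine: x must stay non-negative.
m.register(['x'], lambda s: violations.append('x<0') if s['x'] < 0 else None)
m.update('x', 3)    # routine runs, invariant holds
m.update('y', 7)    # no routine registered for y: nothing dispatched
m.update('x', -1)   # invariant violated
print(violations)   # ['x<0']
```

Explicit invocation fits the same structure by treating the program counter as one more watched "variable".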
As an example to illustrate our points, consider a simplified memory bug detection monitor that observes heap accesses and determines whether or not the following rules are violated by an application when accessing memory: (i) no reads to uninitialized memory locations; (ii) only allocated memory can be accessed; (iii) all allocated memory must be freed eventually; (iv) parameters to calls to the free function must be allocated memory addresses and be the return values of previous malloc function calls. In the case of explicit software instrumentation, all accesses to heap memory as well as calls to malloc and free must be instrumented [29,12,21,2]; separate shadow memory must be maintained to keep track of the allocation and initialization history. The performance impact of this implementation is significant; previous work has reported a 20x slowdown, even with aggressive optimizations [22]. One way to mitigate the performance degradation is to migrate the monitor to a separate processor [4,13]. This allows the monitor and the application to execute in parallel, and—if the monitor can keep up with the application—the performance penalties are now limited to the overhead associated with extraction and communication of state information and competition for shared resources. Unfortunately, the state extraction and communication overhead can be significant, and the instrumentation needed to extract the information from the application leads to—for our purposes—unacceptable perturbation of the application.
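The shadow-memory bookkeeping such a monitor maintains might look like the following simplified sketch (ours; rule (iv) is approximated by rejecting frees of addresses that are not currently allocated):

```python
# Sketch (ours): shadow memory for the four memory-access rules above.
# One shadow state per address, updated on malloc/free/write events and
# checked on reads and frees.
class MemoryMonitor:
    def __init__(self):
        self.shadow = {}            # address -> 'allocated' | 'initialized'
        self.errors = []

    def on_malloc(self, addr):
        self.shadow[addr] = 'allocated'

    def on_write(self, addr):
        if addr not in self.shadow:
            self.errors.append(('write-unallocated', addr))   # rule (ii)
        else:
            self.shadow[addr] = 'initialized'

    def on_read(self, addr):
        if self.shadow.get(addr) != 'initialized':
            self.errors.append(('read-uninitialized', addr))  # rules (i), (ii)

    def on_free(self, addr):
        if addr not in self.shadow:
            self.errors.append(('bad-free', addr))            # rule (iv), approx.
        else:
            del self.shadow[addr]

    def leaks(self):
        return sorted(self.shadow)                            # rule (iii)

m = MemoryMonitor()
m.on_malloc(0x10); m.on_read(0x10)   # read before any write: error
m.on_write(0x10); m.on_free(0x10)
m.on_malloc(0x20)                     # never freed: reported by leaks()
print(m.errors)   # [('read-uninitialized', 16)]
print(m.leaks())  # [32], i.e., 0x20 was never freed
```

In the instrumentation-based setting the text describes, every `on_*` call above corresponds to an instrumented heap access or allocation call, which is precisely where the reported 20x slowdown comes from.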
Recently, there have been proposals for hardware support for a variety of monitoring activities, in particular to support fine-grained monitoring. Nevertheless, most of these proposals support one narrow type of monitor, such as monitoring memory bugs [30,35] or taint analysis [7,35,32,6,27,34]. These solutions can provide extremely low-overhead monitoring, but they are all targeted at specific monitoring tasks and allow little or no customization. The techniques briefly discussed above (heavy instrumentation, distribution of monitors to separate processors, and hardware-supported special-purpose monitoring) cover three corners in the design space of software monitoring. When considering a monitoring task, we must consider within this design space the particular need for flexibility, performance, and predictability. For example, when considering continual monitoring for memory access violations, the performance overhead is of utmost importance, but there is little need for flexibility in monitoring (a special-purpose monitoring task); dedicated hardware support might be the right solution. When using monitors as oracles during unit testing, the need for flexibility in the monitoring task is imperative, since we are likely to use monitors for diverse tasks, for example, a functional test oracle or a test adequacy measurement tool. In this case, arbitrary monitors that can be easily modified and replaced are required; an inflexible hardware solution would not work, and flexible monitors based on instrumentation would be far more suitable (if the performance penalty is acceptable). The particular challenges of our target monitoring tasks (discussed in Section 2) require us to somehow achieve the best characteristics of the previously suggested techniques while largely eliminating their weaknesses.
Given the observations on monitoring above, it has become clear that to successfully enable effective monitoring we must pursue an architecture- and compiler-enhanced approach to software monitoring. The architectural and compiler support is essential to provide the performance and throughput needed for realistic applications, and all monitors must be software-based to allow for the flexibility we need for a diverse collection of monitoring activities. To achieve these breakthroughs, we must reconsider the four steps in monitoring. Figure 1 shows an overview of our proposed monitoring framework, which orchestrates the compiler as well as the architectural support to generate and enhance software execution monitors. As mentioned above, at run time there are four steps in monitoring: (1) extract the desired information from the core executing the application, (2) forward it to the monitor core, (3) update the monitor core state with the new information, and (4) dispatch the appropriate monitoring routine(s). All four steps must be supported in hardware to provide the performance needed. In particular, hardware support for information extraction is essential, since our aim is to avoid instrumenting the application program with special instructions. To this effect, we integrate extraction logic into each core, and have the monitor configure the extraction logic with an event-of-interest list when the program is loaded. This extraction logic is capable of capturing a variety of instruction-level events. For example, the extraction logic can invoke monitors at specific program points; thus, it must be aware of
Hardware Supported Flexible Monitoring: Early Results
Application
175
Monitor w/ Annotation
Monitor Aware Compiler System Original Compiler
Extraction Logic Compiler
Monitor Compiler
Instrumentation Compiler
P
P
P
P
App. 1
Mon. 1
Mon. 2
App. 2
Extraction Logic
interconnection network
Architectural support for execution monitoring – Flexible extraction logic extracts information needed by the monitor routines – Underlying communication mechanism forward information to the monitor – Effective monitor dispatching support invokes the desired monitor routines. Compilation support for monitor generation – Identify the set of activities that invoke the monitor routines – Generate three outputs: (i) the application executable (instrumented hardware support for extraction logic is unavailable); (ii) the monitor executable; and (iii) extraction logic configuration if hardware support is available. – Update the monitor as the application is optimized;
Fig. 1. Overview of our proposed hardware/compiler monitoring infrastructure
the program counter. If we are monitoring for changes of a variable x and x is a register resident value, the event list would include the set of instructions that modify this value. On the other hand, if x is a memory-resident value, the event list would include the instructions that modify the memory location where x resides. Once such events are detected, information about the event are forwarded to the monitor. A similar approach is employed to monitor for other memory related events as well as for function invocations. The monitor-aware compiler takes as inputs the application and the monitoring routines annotated with the state information in an application A of interest to the monitor and the program points in A where explicit monitor routines will be invoked. From this input, the monitor aware compiler creates three new artifacts: (1) the application executable, (2) the monitor executables, and (3) a list of instruction-level events that must be extracted from the application execution. Note: our goal is that monitored application executable shall contain no additional instrumentation as compared to a non-monitored equivalent. The monitor executable will contain two parts: a list of monitor routines and a dispatching routine that determines which monitor routines should be invoked when we observe a certain event. The monitor-aware compiler is also responsible for generating the list of instruction-level events that must be extracted and forwarded from the executing application to the monitor; this information will be used to configure the extraction logic for each core. There is a risk that the on-chip and off-chip shared resources, such as the communication bandwidth, could become an issue even in our framework and
176
A. Zhai, G. He, and M.P.E. Heimdahl
the application being monitored might have to be stalled for the extraction logic to forward its information. In addition, should the execution time of the monitor be excessive, the application might have to be stalled for the monitor to keep up. One way to reduce the cost of state communication is to reduce the amount of state information transferred. This can be achieved with both hardware and software support. In this work, we propose compiler analysis to identify the minimal set of state information that requires communication, and then communicate only this set. We also propose hardware techniques to avoid the transfer of duplicate state and to compress the transferred state. For example, there is no need to transfer state information that has not changed; this is a fairly common occurrence when, for example, a piece of control software runs at a rate 5 to 10 times faster than the sensors sampling the environment; we will only see changes to certain state variables in the control software every 5-10 execution cycles. Furthermore, parallelization of the monitors is also possible in this framework.
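The duplicate-state suppression idea above can be illustrated with a small Python sketch. The class and method names are ours, not from the paper; the real mechanism is implemented in hardware, but the policy is the same: a value is transferred only when it differs from the last value forwarded for that variable.

```python
class StateForwarder:
    """Forwards (variable, value) pairs to the monitor, skipping values
    that have not changed since the last transfer. Illustrative sketch."""

    def __init__(self):
        self.last_sent = {}   # variable -> last forwarded value
        self.queue = []       # stands in for the hardware communication queue

    def forward(self, var, value):
        if self.last_sent.get(var) == value:
            return False      # unchanged: no transfer needed
        self.last_sent[var] = value
        self.queue.append((var, value))
        return True

fwd = StateForwarder()
# Control software running 5x faster than the sensor sees the same
# sensor value for five cycles, so only one transfer actually occurs.
sent = [fwd.forward("sensor_x", 7) for _ in range(5)]
```

With the filter in place, four of the five forwarding attempts are suppressed, directly reducing communication-queue traffic.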
4 Support for Execution States Extraction
In a multi-core environment, efficiently using one core to monitor the behavior of software executing on another requires hardware support. A key functionality of such hardware support is to selectively extract the information needed by the monitoring core from the monitored core. To avoid stalling the monitored application, information extraction must operate at the speed of the processor. In our previous work, we proposed programmable extraction logic for this purpose; we refer to this architectural support as Ex-Mon [13]. The extraction logic can be programmed with the set of instructions whose runtime behaviors are of interest to the monitor. For example, the extraction logic can be programmed with a set of instruction addresses, and the results of these instructions will be forwarded. The extraction logic monitors the earliest entries of the reorder buffer. When an instruction commits, the extraction logic decides whether the results of the instruction are of interest to the monitor, and packs and forwards the necessary information if they are. The monitor software explicitly manages the extraction logic by initializing and updating it at runtime. Forwarded information is written to a dedicated area of shared memory, referred to as the communication queue. The communication queue is a circular buffer, with the head pointing to the next available slot for the extraction logic to write to, and the tail pointing to the next element to be consumed by the monitor. The head is updated by the extraction logic, and the tail is updated by the monitor. When the queue is full, the monitored program must be stalled, which is a major cause of performance degradation in monitoring. Although the extraction logic allows programmers to specify events of interest, traffic for some monitoring activities can still be excessive. Thus, optimization opportunities to reduce communication traffic must be explored.
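The head/tail discipline of the communication queue can be sketched in a few lines of Python. This is a software model of the shared-memory structure described above, not the hardware implementation; a `push` returning `False` corresponds to the full-queue condition that stalls the monitored program.

```python
class CommQueue:
    """Circular buffer shared between the extraction logic (writer, head)
    and the monitor (reader, tail). Illustrative software model."""

    def __init__(self, size):
        self.buf = [None] * size
        self.head = 0     # next slot for the extraction logic to write
        self.tail = 0     # next element for the monitor to consume
        self.count = 0

    def push(self, item):
        """Extraction logic writes; returns False (stall) when full."""
        if self.count == len(self.buf):
            return False                      # monitored program must stall
        self.buf[self.head] = item
        self.head = (self.head + 1) % len(self.buf)
        self.count += 1
        return True

    def pop(self):
        """Monitor consumes the next forwarded event, or None if empty."""
        if self.count == 0:
            return None
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        self.count -= 1
        return item

q = CommQueue(2)
ok1 = q.push("e1")
ok2 = q.push("e2")
stalled = not q.push("e3")   # queue full: the application would stall here
first = q.pop()              # monitor drains one event
resumed = q.push("e3")       # a slot is free again; forwarding resumes
```

The model makes the performance trade-off visible: a slow monitor lets the queue fill, and every rejected `push` corresponds to stall cycles in the monitored application.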
Consider a monitor that is interested in detecting accesses through dangling pointers. Once a certain memory location is proven to be allocated, all future accesses to this location need no further verification until the state of this memory location is changed by calls to realloc and free. Consider a loop containing an instruction that sweeps through array elements. Once we know the first element accessed, the access pattern is predictable until the end of the loop is reached. To take advantage of these opportunities, we propose hardware and software support to reduce communication traffic.

Eliminating redundant forwarding with hardware support: An auxiliary structure, the local filter, is introduced to reduce communication costs. The local filter uses a small fully associative cache to store recently forwarded items. If an input address matches an entry in this filter, the instruction will not be forwarded. The local filter is initially empty and is updated by the extraction logic, which is in turn managed by the monitoring software. This is achieved by adding two extra bits in the extraction logic indicating whether the forwarded instruction should create an entry in the filter, clear the filter, or have no effect.

Eliminating redundant forwarding with compiler support: The hardware-based solution, although efficient, is fundamentally limited by the lack of global program information. The compiler can help by only forwarding values that (i) are actually needed by the monitor and (ii) are difficult for the monitor to derive. The compiler first identifies program state changes that do not affect the verification routines and stops forwarding them. This irrelevant-state elimination problem can be formulated as a simple dataflow analysis, in which the compiler marks all values used in the monitor routines as relevant and performs a backward analysis to mark all instructions these values depend on. When the algorithm terminates, all unmarked instructions are irrelevant and can be eliminated from forwarding.
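The behavior of the hardware local filter described above can be sketched in Python. The replacement policy (LRU) and entry count are our assumptions for illustration; the paper only specifies a small fully associative cache of recently forwarded items plus a clear operation triggered by allocation-changing calls.

```python
from collections import OrderedDict

class LocalFilter:
    """Small fully associative cache of recently forwarded addresses.
    A hit means the address was forwarded recently, so forwarding is
    suppressed. Sketch only; LRU replacement is an assumption."""

    def __init__(self, entries=32):
        self.entries = entries
        self.cache = OrderedDict()          # address -> True, in LRU order

    def should_forward(self, addr):
        if addr in self.cache:
            self.cache.move_to_end(addr)    # refresh LRU position
            return False                    # hit: suppress forwarding
        if len(self.cache) >= self.entries:
            self.cache.popitem(last=False)  # evict least-recently-used entry
        self.cache[addr] = True
        return True

    def clear(self):
        """Called on commits of malloc/free/realloc, which can change
        the allocation state that cached decisions depend on."""
        self.cache.clear()

f = LocalFilter(entries=2)
decisions = [f.should_forward(0x100), f.should_forward(0x100)]
f.clear()                       # e.g. a free() committed
after_clear = f.should_forward(0x100)
```

A repeated heap reference is forwarded once and then filtered, while the clear operation conservatively re-enables forwarding whenever the memory allocation state may have changed.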
Communicating values from the monitored core to the monitor can be costly, and is sometimes more expensive than computing the forwarded value locally on the monitor core. To identify and exploit these opportunities, the compiler must identify a set of communications such that the combined cost of communication and computation at the monitor is minimized. This optimization can also be formulated as a backward analysis that identifies dependent instructions. The backward search terminates when the cost of computing all dependent instructions exceeds the cost of reading from the communication queue.
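A minimal sketch of this cost-bounded backward search, under assumed per-instruction costs (the dependence graph, cost model, and function name are illustrative, not the compiler's actual IR-level analysis):

```python
def choose_local_recompute(deps, cost, queue_read_cost, value):
    """Decide whether the monitor should recompute `value` locally
    rather than receive it through the communication queue. Walks the
    dependences backward, accumulating computation cost, and stops as
    soon as that cost exceeds the cost of a single queue read."""
    total, worklist, seen = 0, [value], set()
    while worklist:
        v = worklist.pop()
        if v in seen:
            continue
        seen.add(v)
        total += cost.get(v, 0)
        if total > queue_read_cost:
            return False              # cheaper to forward the value
        worklist.extend(deps.get(v, ()))
    return True                       # cheaper to recompute on the monitor core

# x = a + b: recomputing x locally costs 3 units; a queue read costs 10.
deps = {"x": ["a", "b"]}
cost = {"x": 1, "a": 1, "b": 1}
decision = choose_local_recompute(deps, cost, 10, "x")
```

With a cheap queue read (cost 2), the same search bails out early and chooses forwarding instead.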
5 Evaluation with Case Studies
To evaluate the effectiveness and efficiency of architecture and compiler support for execution monitoring, we illustrate how Ex-Mon can be used to detect memory bugs and track taint propagation. In the proposed infrastructure, the monitor software contains (i) a set of monitoring routines, (ii) a dispatching routine that activates the appropriate monitoring routines, and (iii) routines to initialize and update the extraction table. The dispatching routine invokes the desired monitoring routines for every incoming event. Monitoring routines not only verify whether the incoming event violates any correctness specification, but also update the state that is needed for future verifications. The monitor program must maintain sufficient state to verify all specifications. Ideally, to implement an efficient monitoring system, only the minimal set of events is extracted and forwarded to construct this state. It is worth pointing out that as the number and types of events increase, the efficiency of the dispatching routine can become important. Currently, the monitor compiler is under development, and the monitor programs used in this paper were developed manually. We evaluate the proposed support for software execution monitoring using the Simics [9] simulation environment, a full-system simulation platform. We augment the simulator with the Wisconsin GEMS [17] infrastructure for detailed memory hierarchy simulation. We simulate a multicore system with 8 cores, where each core has its own private L1 instruction and data cache, while sharing a unified L2 cache. The private L1 caches are 64KB in size and 4-way set-associative, with 64-byte cache lines and 3-cycle access latency. The L2 cache is 8MB in size and 4-way set-associative, with 64-byte cache lines and 6-cycle access latency. The main memory is 8GB in size with 160-cycle access latency. The extraction table has 1K entries and the local filter has 32 entries. We simulate the SPECINT 2006 benchmarks with the ref input set. For a reasonable simulation time, we simulated one billion instructions after skipping the initialization phase of each benchmark. In this paper, we evaluate the performance overhead of two execution monitors, memory-bug detection and taint propagation, using the SPECINT 2006 [31] benchmark suite. We developed both monitors manually. The memory-bug detection monitor dynamically detects a set of well-known memory bugs, including double free, memory leak, dangling pointer, and uninitialized load.
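A minimal sketch of such a monitor, a dispatch loop plus per-event routines maintaining heap allocation state, might look as follows. The event names, tuple layout, and the EXIT-triggered leak check are illustrative assumptions; the actual monitors operate on events forwarded by the extraction logic.

```python
def run_monitor(events):
    """Toy memory-bug monitor: dispatches each forwarded event to a
    routine that checks it and updates the allocation state."""
    allocated, initialized = set(), set()
    errors = []                      # faults are logged, reported at exit

    def on_malloc(addr):
        allocated.add(addr)

    def on_free(addr):
        if addr not in allocated:
            errors.append(("double-free", addr))
        allocated.discard(addr)
        initialized.discard(addr)

    def on_store(addr):
        if addr in allocated:
            initialized.add(addr)    # a store initializes the location

    def on_load(addr):
        if addr not in allocated:
            errors.append(("dangling-pointer", addr))
        elif addr not in initialized:
            errors.append(("uninitialized-load", addr))

    dispatch = {"MALLOC": on_malloc, "FREE": on_free,
                "STORE": on_store, "LOAD": on_load}
    for kind, addr in events:
        if kind == "EXIT":           # monitored program terminated:
            errors.extend(("memory-leak", a) for a in allocated)
            break
        dispatch[kind](addr)
    return errors

errs = run_monitor([("MALLOC", 0x10), ("LOAD", 0x10),   # uninitialized load
                    ("STORE", 0x10), ("LOAD", 0x10),    # now a valid load
                    ("FREE", 0x10), ("FREE", 0x10),     # double free
                    ("MALLOC", 0x20), ("EXIT", None)])  # 0x20 leaks
```

The dispatch dictionary plays the role of the switch-case dispatching routine; each routine both checks the event and updates the state used by later checks.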
The taint propagation monitor tracks the flow of information and signals an error if tainted data takes control of program execution. In our implementation, both monitors simply log error states when faults are detected, and report the faults at the end of the execution.

5.1 Verifying Memory Bugs
Memory bugs include memory leaks, dangling pointers, loads from uninitialized memory locations, and double frees. In the rest of this section, we first provide a brief description of the monitoring software, then show how the monitor program works with the proposed hardware support, and finally show the performance of the monitoring software with hardware support. The dispatching routine for memory bug detection is a loop with a switch-case statement. It invokes the appropriate monitor routines when an event is observed in the communication queue. A special event, EXIT, which corresponds to the termination of the monitored program, leads to the termination of the dispatching routine. For memory bug detection, the forwarded information is the commit of memory access instructions that access the heap, and of all instructions related to calls of memory management functions such as malloc, free, realloc and calloc. The extraction table is thus initialized with the instructions that set up and invoke memory management routines, and with a memory range that corresponds to the heap space.

[Figure 2 shows bar charts of program execution time slowdown per benchmark, with and without the local filter: (a) memory bug detection, (b) taint propagation.]

Fig. 2. Execution time slowdowns due to monitoring for memory bugs and taint propagation

At runtime, the monitoring software parses incoming events and maintains allocation information for every memory block. Based on the allocation information, the monitor is able to verify whether a memory error has occurred. For example, when a memory location is read, the monitor software checks whether the location has been allocated and initialized. If not allocated, the load is through a dangling pointer; if allocated but not initialized, the load is an uninitialized read. When the monitored application has completed, the commit of one of the epilogue instructions will match an entry in the extraction table and trigger the monitoring program to detect memory leaks. In memory bug detection, we can use the local filter to reduce communication. In this case, all heap references are entered into the local filter to avoid repeated forwarding. Commits of call instructions to memory management functions clear the local filter, since these functions can change the memory allocation.

Results. At runtime, the monitor program consumes data from the communication queue in FIFO order. During some portions of the execution, the workload of the monitoring program is higher than that of the monitored program, and thus packets in the communication queue cannot be processed in a timely manner.
When the communication queue is full, the execution of the monitored program must be stalled, and performance degrades. This is the main performance penalty evaluated in our work, as shown in Figure 2(a). For a 32K communication queue, the performance overhead is 110% on average. However, for some benchmarks, such as 403.gcc, the performance overhead is negligible, while for others, such as 446.hmmer, the performance overhead is nearly 200%. Utilizing the local filter to reduce the number of data items forwarded through the communication queue has a significant performance impact, as shown by the with-local-filter bars in Figure 2(a). With a 32-entry local filter, all benchmarks benefit significantly; on average, we achieve a 20% performance improvement.

5.2 Tracking Taint Propagation
Taint propagation is the foundation for building many security-enhancing software monitors. In taint propagation, each data item, i.e., every memory location and register, is tagged with a taint tag indicating whether the value stored is tainted. At runtime, the taint tags are propagated by the instructions that manipulate these data. For example, we can mark all data from unsafe sources, such as the Internet, as tainted, and then keep track of taint propagation to ensure that unsafe data do not take control of the program. From the perspective of taint propagation, two types of instructions require monitoring: data movement instructions (load/store, mov, etc.) that propagate the taint status of the source operand to the destination, and arithmetic instructions that taint the destination operand if any of their sources is tainted. Since these two types of instructions contribute a significant portion of the dynamic execution, taint propagation places a burden on the communication capability between the monitored core and the monitor. At runtime, the monitor parses incoming events and maintains taint information for every memory location and register. Based on this information, the monitor is able to keep track of taint propagation. In taint tracking, we can use the local filter to reduce communication.

Results. The workload of the monitor is almost always higher than that of the monitored program, and thus the application stalls often, causing performance degradation. This is the main performance penalty evaluated in our work, as shown in Figure 2(b). With a 32K communication queue, the performance overhead is 8x on average. For some benchmarks, such as 456.hmmer and 464.h264, the overhead is as high as 10-13x. These benchmarks have a larger percentage of data-moving and arithmetic instructions, and thus impose a heavy workload on communication and taint tracking. 429.mcf, on the other hand, incurs only a 13% overhead.
This is because 429.mcf has low IPC due to cache misses, and thus the stalls caused by monitoring are relatively small. Utilizing the local filter to reduce the number of data items forwarded through the communication queue has a significant performance impact, as shown by the with-local-filter bars in Figure 2(b). With a 32-entry local filter, all benchmarks benefit significantly; on average, we achieve a 20% performance improvement. For 403.gcc, the performance improvement is as much as 46%.
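The two propagation rules used in Section 5.2 (data movement copies taint; arithmetic taints the destination if any source is tainted) can be sketched as follows. The event encoding and the `jmp_indirect` check are illustrative assumptions; the real monitor shadows every register and memory location.

```python
def propagate_taint(events, taint=None):
    """Toy taint tracker. Each event is (op, dst, srcs):
    'mov' and 'alu' taint dst iff any source is tainted; an indirect
    control transfer through tainted data raises an alarm."""
    taint = set(taint or ())
    alarms = []
    for op, dst, srcs in events:
        if op in ("mov", "alu"):
            if any(s in taint for s in srcs):
                taint.add(dst)
            else:
                taint.discard(dst)   # overwriting with clean data untaints
        elif op == "jmp_indirect":
            # tainted data must never take control of program execution
            if any(s in taint for s in srcs):
                alarms.append(("tainted-control-transfer", srcs))
    return taint, alarms

# r1 holds network input (tainted); taint flows r1 -> r2 -> r3, and the
# indirect jump through r3 raises an alarm.
taint, alarms = propagate_taint(
    [("mov", "r2", ["r1"]),
     ("alu", "r3", ["r2", "r4"]),
     ("jmp_indirect", None, ["r3"])],
    taint={"r1"})
```

Because nearly every dynamic instruction is a mov or ALU operation, every iteration of this loop corresponds to a forwarded event, which is why taint tracking stresses the communication queue far more than memory-bug detection.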
6 Conclusions
Our central hypothesis is that by leveraging the rapid emergence of multi-core processor architectures, we can achieve a non-intrusive, predictable, fine-grained, and highly flexible general-purpose monitoring framework through monitoring-aware compilers coupled with novel architectural enhancements to multi-core architectures. In this paper we describe our initial steps in this direction and provide some preliminary performance results achieved with this new multi-core architecture. We use separate cores for the execution of the application to be monitored and the monitors. We augment each core with identical programmable extraction logic that can observe an application executing on the core as its program state changes. We experimented with our hardware support using two well-known monitors, memory bug detection and taint propagation, and found that with adequate hardware support, the performance penalty can be reduced significantly compared to instrumentation-based approaches.
References

1. AMD Corporation: Leading the industry: Multi-core technology & dual-core processors from AMD (2005), http://multicore.amd.com/en/Technology/
2. Austin, T.M., Breach, S.E., Sohi, G.S.: Efficient detection of all pointer and array access errors. In: ACM SIGPLAN 1994 Conference on Programming Language Design and Implementation (PLDI 1994) (1994)
3. Chen, S., Falsafi, B., Gibbons, P.B., Kozuch, M., Mowry, T.C., Teodorescu, R., Ailamaki, A., Fix, L., Ganger, G.R., Lin, B., Schlosser, S.W.: Log-based architectures for general-purpose monitoring of deployed code. In: ASID 2006: Proceedings of the 1st Workshop on Architectural and System Support for Improving Software Dependability, pp. 63–65. ACM, New York (2006)
4. Chen, S., Kozuch, M., Strigkos, T., Falsafi, B., Gibbons, P.B., Mowry, T.C., Ramachandran, V., Ruwase, O., Ryan, M., Vlachos, E.: Flexible hardware acceleration for instruction-grain program monitoring. In: 35th Annual International Symposium on Computer Architecture (ISCA 2008) (June 2008)
5. Chilenski, J.J., Miller, S.P.: Applicability of modified condition/decision coverage to software testing. Software Engineering Journal 9, 193–200 (1994)
6. Crandall, J.R., Chong, F.T.: Minos: Control data attack prevention orthogonal to memory model. In: MICRO 37: Proceedings of the 37th Annual IEEE/ACM International Symposium on Microarchitecture, pp. 221–232. IEEE Computer Society Press, Los Alamitos (2004)
7. Dalton, M., Kannan, H., Kozyrakis, C.: Raksha: A flexible information flow architecture for software security. In: 34th Annual International Symposium on Computer Architecture (ISCA 2007) (2007)
8. Esterel Technologies: SCADE Suite product description (2004), http://www.esterel-technologies.com/v2/scadeSuiteForSafetyCriticalSoftwareDevelopment/index.html
9. Magnusson, P.S., et al.: Simics: A full system simulation platform. IEEE Computer 35(2), 50–58 (2002)
10. Friedrich, J., McCredie, B., James, N., Huott, B., Curran, B., Fluhr, E., Mittal, G., Chan, E., Chan, Y., Plass, D., Chu, S., Le, H., Clark, L., Ripley, J., Taylor, S., Dilullo, J., Lanzerotti, M.: Design of the POWER6 microprocessor. In: 2007 IEEE International Solid-State Circuits Conference (February 2007)
11. Harel, D., Lachover, H., Naamad, A., Pnueli, A., Politi, M., Sherman, R., Shtull-Trauring, A., Trakhtenbrot, M.: Statemate: A working environment for the development of complex reactive systems. IEEE Transactions on Software Engineering 16(4), 403–414 (1990)
12. Hastings, R., Joyce, B.: Purify: Fast detection of memory leaks and access errors. In: The Winter 1992 USENIX Conference, San Francisco, California, pp. 125–138 (1991)
13. He, G., Zhai, A., Yew, P.-C.: Ex-Mon: An architectural framework for dynamic program monitoring on multicore processors. In: The Twelfth Workshop on Interaction between Compilers and Computer Architectures (Interact-12) (2008)
14. Intel Corporation: Intel's dual-core processor for desktop PCs (2005), http://www.intel.com/personal/desktopcomputer/dual_core/index.htm
15. Intel Corporation: Intel Itanium architecture software developer's manual, revision 2.2 (2006), http://www.intel.com/design/itanium/manuals/iiasdmanual.htm
16. Luk, C.-K., Cohn, R., Muth, R., Patil, H., Klauser, A., Lowney, G., Wallace, S., Reddi, V.J., Hazelwood, K.: Pin: Building customized program analysis tools with dynamic instrumentation. In: ACM SIGPLAN 2005 Conference on Programming Language Design and Implementation (PLDI 2005) (June 2005)
17. Martin, M.M.K., Sorin, D.J., Beckmann, B.M., Marty, M.R., Xu, M., Alameldeen, A.R., Moore, K.E., Hill, M.D., Wood, D.A.: Multifacet's general execution-driven multiprocessor simulator (GEMS) toolset. Computer Architecture News (2005)
18. Mathworks Inc.: Simulink product web site, http://www.mathworks.com/products/simulink
19. Mathworks Inc.: Stateflow product web site, http://www.mathworks.com
20. Miller, S.P., Tribble, A.C., Whalen, M.W., Heimdahl, M.P.E.: Proving the shalls: Early validation of requirements through formal methods. Int. J. Softw. Tools Technol. Transf. 8(4), 303–319 (2006)
21. Mitchell, N., Sevitsky, G.: LeakBot: An automated and lightweight tool for diagnosing memory leaks in large Java applications. In: Cardelli, L. (ed.) ECOOP 2003. LNCS, vol. 2743. Springer, Heidelberg (2003)
22. Nethercote, N., Seward, J.: How to shadow every byte of memory used by a program. In: The Third International ACM SIGPLAN/SIGOPS Conference on Virtual Execution Environments (VEE 2007), San Diego, California, USA (June 2007)
23. Nethercote, N., Seward, J.: Valgrind: A framework for heavyweight dynamic binary instrumentation. In: ACM SIGPLAN 2007 Conference on Programming Language Design and Implementation (PLDI 2007), San Diego, California, USA (June 2007)
24. The NuSMV Toolset (2005), http://nusmv.irst.itc.it/
25. Pezze, M., Young, M.: Software Test and Analysis: Process, Principles, and Techniques. John Wiley and Sons, Chichester (2006)
26. Qin, F., Lu, S., Zhou, Y.: SafeMem: Exploiting ECC-memory for detecting memory leaks and memory corruption during production runs. In: 11th International Symposium on High-Performance Computer Architecture (HPCA-11) (February 2005)
27. Qin, F., Wang, C., Li, Z., Kim, H.-s., Zhou, Y., Wu, Y.: LIFT: A low-overhead practical information flow tracking system for detecting security attacks. In: 39th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO-39), pp. 135–148 (December 2006)
28. RTCA: DO-178B: Software Considerations in Airborne Systems and Equipment Certification. RTCA (1992)
29. Seward, J., Nethercote, N.: Using Valgrind to detect undefined value errors with bit-precision. In: The USENIX 2005 Annual Technical Conference, Anaheim, California, USA (April 2005)
30. Shetty, R., Kharbutli, M., Solihin, Y., Prvulovic, M.: HeapMon: A helper-thread approach to programmable, automatic, and low-overhead memory bug detection. IBM J. Res. Dev. 50(2/3), 261–275 (2006)
31. Standard Performance Evaluation Corporation: The SPEC Benchmark Suite, http://www.specbench.org
32. Suh, G.E., Lee, J.W., Zhang, D., Devadas, S.: Secure program execution via dynamic information flow tracking. In: ASPLOS-XI: Proceedings of the 11th International Conference on Architectural Support for Programming Languages and Operating Systems, pp. 85–96. ACM, New York (2004)
33. Sun Corporation: Throughput computing: Niagara (2005), http://www.sun.com/processors/throughput/
34. Venkataramani, G., Doudalis, I., Solihin, Y., Prvulovic, M.: FlexiTaint: A programmable accelerator for dynamic taint propagation. In: 14th International Symposium on High-Performance Computer Architecture (HPCA-14) (February 2008)
35. Venkataramani, G., Roemer, B., Solihin, Y., Prvulovic, M.: MemTracker: Efficient and programmable support for memory access monitoring and debugging. In: 13th International Symposium on High-Performance Computer Architecture (HPCA-13) (February 2007)
36. Zhou, P., Qin, F., Liu, W., Zhou, Y., Torrellas, J.: iWatcher: Simple, general architectural support for software debugging. In: 31st Annual International Symposium on Computer Architecture (ISCA 2004) (2004)
DMaC: Distributed Monitoring and Checking

Wenchao Zhou, Oleg Sokolsky, Boon Thau Loo, and Insup Lee

Department of Computer and Information Science, University of Pennsylvania, 3330 Walnut Street, Philadelphia, PA 19104-6389
{wenchaoz,sokolsky,boonloo,lee}@cis.upenn.edu
Abstract. We consider monitoring and checking formally specified properties in a network. We address the problem of deploying checkers on different network nodes so as to provide correct and efficient checking. We present the DMaC system, which builds upon two bodies of work: the Monitoring and Checking (MaC) framework, which provides means to monitor and check running systems against formally specified requirements, and declarative networking, a declarative domain-specific approach for specifying and implementing distributed network protocols. DMaC uses a declarative networking system both for specifying network protocols and for performing checker execution. High-level properties are automatically translated from safety property specifications in the MaC framework into declarative networking queries and integrated into the rest of the network for monitoring the safety properties. We evaluate the flexibility and efficiency of DMaC using simple but realistic network protocols and their properties.
1 Introduction
In recent years, we have witnessed a proliferation of new overlay networks that use the existing Internet to enable deployable network evolution and introduce new services. Concurrently, new Internet architectures and policy mechanisms have been proposed to address challenges related to route oscillation and slow convergence of inter-domain routing. Within the networking community, there is a growing interest in formal tools and programming frameworks that can facilitate the design, implementation, and verification of new protocols. One of the most commonly proposed approaches is based on runtime verification [12,17], which provides debugging platforms for verifying properties of protocols at runtime. This approach typically works by providing programming hooks that enable developers to check properties of an executing distributed system at runtime. Existing approaches are often platform dependent and hard to generalize. The runtime checks are tightly coupled with the implementation and, as a result, cannot be easily reused across different execution environments, or be used to compare different implementations of the same protocol written in different programming languages. Moreover, given that the properties are specified at the implementation level, formal reasoning and analysis are not possible.

S. Bensalem and D. Peled (Eds.): RV 2009, LNCS 5779, pp. 184–201, 2009.
© Springer-Verlag Berlin Heidelberg 2009
To address the above shortcomings, we present DMaC, a distributed monitoring and checking platform. DMaC builds upon two bodies of work: (1) the Monitoring and Checking (MaC) framework [11], which provides means to monitor and check running systems against formally specified requirements, and (2) declarative networking [14,13], a declarative domain-specific approach for specifying and implementing distributed network protocols. The original MaC framework was conceptually designed as a centralized monitoring system. DMaC achieves its distributed capability via the use of declarative networking, where network protocols are specified using a declarative logic-based query language called Network Datalog (NDlog). In prior work, it has been shown that traditional routing protocols can be specified in a few lines of declarative code [14], and that complex protocols require orders of magnitude less code [13] than traditional imperative implementations. The compact and high-level specification enables rapid prototype development, ease of customization, optimizability, and the potential for protocol verification. In DMaC, the safety properties of a distributed system are first specified using a platform-independent formal specification language called MEDL. These property specifications are then compiled into declarative networking code for execution. Since declarative networks utilize a distributed query engine to execute their protocols, these checks can be expressed as distributed monitoring queries in NDlog. This paper makes the following three contributions:
• DMaC platform: The system allows us to specify desired properties of protocols independent of their implementation, abstracting away the physical distribution; to generate run-time checkers for the properties; and to deploy them across the network, seamlessly integrated within the NDlog engine.
• Formal specifications to declarative networks: We show that formal specifications can be automatically compiled into distributed queries. Moreover, given the query-based approach used in declarative networks, we illustrate the potential of applying existing database query optimizations in DMaC for efficient plan generation and dynamic reoptimization.
• Implementation and evaluation: We have evaluated DMaC on several representative examples deployed over a cluster. The results demonstrate the feasibility of the approach, in terms of both the performance overhead of monitoring queries and the functionality of the property language.
2 Background

2.1 Declarative Networking
Declarative query languages such as Network Datalog (NDlog) are a natural and compact way to implement a variety of routing protocols and overlay networks. For example, some traditional routing protocols can be expressed in a few lines of code [14], and the Chord DHT in 47 lines of code [13]. When compiled and executed, these declarative networks perform efficiently relative to imperative implementations, while achieving orders of magnitude reduction in code size.
The compact specifications enable ease of customization and adaptation, where protocols can be adaptively selected and composed based on policies and the changing network environment. An NDlog program is a distributed variant of Datalog which consists of a set of declarative rules. Each rule has the form p :- q1, q2, ..., qn., which can be read informally as "q1 and q2 and ... and qn implies p". Here, p is the head of the rule, and q1, q2, ..., qn is a list of literals that constitute the body of the rule. Literals are either predicates with attributes (which are bound to variables or constants by the query), or boolean expressions that involve function symbols (including arithmetic) applied to attributes. In Datalog, rule predicates can be defined with other predicates in a cyclic fashion to express recursion. The order in which the rules are presented in a program is semantically immaterial; likewise, the order of predicates appearing in a rule is not semantically meaningful. Commas are interpreted as logical conjunctions (AND). The names of predicates, function symbols, and constants begin with a lowercase letter, while variable names begin with an uppercase letter. As a running example throughout the paper, the following four NDlog rules implement the pathVector protocol that computes the shortest paths between all pairs of nodes in a network.

materialize(link,keys(1,2),infinity).
materialize(path,keys(3),infinity).
materialize(bestCost,keys(1,2),infinity).
materialize(route,keys(1,2),infinity).

p1 path(@S,D,P,C) :- link(@S,D,C), P=f_initPath(S,D).
p2 path(@S,D,P1,C1+C2) :- link(@S,Z,C1), route(@Z,D,P,C2), f_memberOf(S,P)=false, P1=f_concat(S,P).
p3 bestCost(@S,D,MIN) :- path(@S,D,C,P).
p4 route(@S,D,P,C) :- path(@S,D,P,C), bestCost(@S,D,C).
In NDlog, each predicate contains a location specifier, expressed with an @ symbol followed by an attribute; this attribute denotes the location at which each corresponding tuple is stored. For instance, all link, path, and route tuples are stored based on the @S address field. The above NDlog program is executed as distributed stream computations in a recursive fashion. Rule p1 takes link(@S,D,C) tuples and computes all the single-hop paths path(@S,D,P,C), where the user-defined function f_initPath initializes the path vector as P=[S,D]. Rule p2 computes multi-hop paths. Informally, it states that if there is a link between S and Z with cost C1, and the route between Z and D is P with cost C2, then there is a path between S and D with cost C1+C2. Additional predicates are used for computing the path vector: the built-in function f_memberOf is used to drop any paths with loops; if no loop is found, a new path P1 (from S to D via intermediate node Z) is created by the function f_concat. In rule p3, bestCost is defined as the minimum cost among all paths. Finally, rule p4 is a local Datalog rule that computes the shortest route, i.e., the path with the lowest cost, based on the local bestCost table.
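The behavior of rules p1–p4 can be approximated by a centralized fixpoint computation. The Python sketch below is illustrative only: NDlog executes these rules as distributed stream computations, whereas this simulation runs on one machine; the helper functions mirror f_initPath, f_memberOf, and f_concat, and the example topology is invented.

```python
# Minimal centralized sketch of the pathVector rules p1-p4.
# A path tuple is (src, dst, path, cost); a link is (src, dst, cost).
links = {("a", "b", 1), ("b", "c", 2), ("a", "c", 5)}
links |= {(d, s, c) for (s, d, c) in links}   # bidirectional links

def f_init_path(s, d):            # rule p1: P = [S, D]
    return (s, d)

def f_concat(s, p):               # rule p2: prepend S to an existing path
    return (s,) + p

paths = {(s, d, f_init_path(s, d), c) for (s, d, c) in links}   # p1
changed = True
while changed:                    # recursive rule p2, run to fixpoint
    changed = False
    best = {}                     # rules p3/p4: lowest-cost route per (S, D)
    for (s, d, p, c) in paths:
        if (s, d) not in best or c < best[(s, d)][1]:
            best[(s, d)] = (p, c)
    for (s, z, c1) in links:      # p2: join link with the current routes
        for (zz, d), (p, c2) in list(best.items()):
            if zz == z and s not in p:        # f_memberOf loop check
                t = (s, d, f_concat(s, p), c1 + c2)
                if t not in paths:
                    paths.add(t)
                    changed = True

route = dict(best)
print(route[("a", "c")])          # best path and cost from a to c
```

On this toy topology the loop converges after two iterations, preferring the two-hop path a→b→c (cost 3) over the direct link (cost 5).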
DMaC: Distributed Monitoring and Checking
Declarative networking also incorporates support for soft-state derivations, which are commonly used in networks. In the soft-state storage model, all data (input and derivations) have an explicit "time to live" (TTL) or lifetime, and all tuples must be explicitly reinserted with their latest values and a new TTL, or they are deleted. To support this feature, an additional language construct is added to NDlog, in the form of materialize statements at the beginning of each NDlog program that specify the TTLs of predicates. For example, the definition materialize(link,keys(1,2),10) specifies that the link table has its primary key set to the first and second attributes (denoted by keys(1,2))¹, and that each link tuple has a lifetime of 10 seconds. If the TTL is set to infinity, the predicate is treated as hard-state.

The soft-state storage semantics are as follows. When a tuple is derived and there exists another tuple with the same primary key but different values for the other attributes, an update occurs, in which the new tuple replaces the previous one. If, on the other hand, the two tuples are identical, a refresh occurs, in which the TTL of the existing tuple is extended. A predicate with no materialize declaration is treated as an event predicate with zero lifetime. Since events are not stored, they are primarily used to trigger rules periodically or in response to network events. Soft-state tuples are deleted upon expiration. In contrast, hard-state tuples are deleted via cascaded deletions. For instance, when existing links are deleted, the deletions have to be cascaded, resulting in deletions of previously derived path tuples and updates to bestCost tuples. NDlog also generates events when table insertions and deletions occur: for instance, link_ins and link_del are generated whenever link tuples are inserted or deleted, respectively.
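The update/refresh/expiry semantics described above can be captured in a few lines. The following sketch is a simplification of the table behavior, with class and method names chosen for illustration; it is not P2 code.

```python
import time

class SoftStateTable:
    """Toy table with primary-key columns and a per-tuple TTL (seconds)."""
    def __init__(self, key_cols, ttl):
        self.key_cols, self.ttl = key_cols, ttl
        self.rows = {}                           # key -> (tuple, expiry time)

    def insert(self, tup, now=None):
        now = time.time() if now is None else now
        key = tuple(tup[i] for i in self.key_cols)
        old = self.rows.get(key)
        self.rows[key] = (tup, now + self.ttl)   # (re)set the TTL
        if old is None:
            return "insert"
        # Same key, different attributes -> update; identical tuple -> refresh.
        return "refresh" if old[0] == tup else "update"

    def expire(self, now=None):
        now = time.time() if now is None else now
        self.rows = {k: v for k, v in self.rows.items() if v[1] > now}

# materialize(link, keys(1,2), 10): key on the first two attributes.
link = SoftStateTable(key_cols=(0, 1), ttl=10)
print(link.insert(("s", "d", 3), now=0))    # insert
print(link.insert(("s", "d", 5), now=1))    # update (same key, new cost)
print(link.insert(("s", "d", 5), now=2))    # refresh (identical tuple)
link.expire(now=13)                         # TTL 10 from t=2 expired at t=12
print(len(link.rows))                       # 0
```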
2.2 Runtime Monitoring and Checking
MaC Overview. Continuous monitoring and verification of the run-time behavior of a system can improve our confidence in the system by ensuring that the current execution is consistent with its requirements at run time [9,18,6,15,19,7]. We have developed a Monitoring and Checking (MaC) framework for run-time monitoring of software systems [11], which allows us to specify high-level properties, implement checkers for these properties, extract relevant low-level information from the system execution, and abstract these low-level observations to match the level of the property specification. The MaC framework includes two languages: MEDL and PEDL. The Meta-Event Definition Language (MEDL) is used to express properties. Its formal semantics are similar to those of a past-time linear-time temporal logic. It can be used to express a large subset of safety properties of systems,
¹ Tables are maintained in P2 following set semantics, where the primary key uniquely identifies a tuple stored in a table. Upon receiving a tuple with the same primary key as an existing tuple, the table is updated by replacing the old tuple with the more recent one.
including timing properties. We use events and conditions to capture and reason about the temporal behavior and data behavior of the target program execution; events are abstract representations of time progress, and conditions are abstract representations of data. The Primitive Event Definition Language (PEDL) describes the primitive high-level events and conditions used in MEDL properties in terms of system objects. PEDL defines what information is sent from the filter to the event recognizer, and how the event recognizer transforms it into the events used in the high-level specification. Unlike MEDL, PEDL depends on the target system, since it has to refer to observations made directly on the system. The framework includes two main phases: a static phase and a dynamic phase. During the static phase, i.e., before the target program is executed, properties are specified in MEDL, along with the respective PEDL mapping. Then, also in the static phase, run-time components such as a filter, an event recognizer, and a run-time checker are generated. During the dynamic phase, the target program, instrumented with the observation filter, is executed while being monitored and checked with respect to the properties. The filter tracks changes of monitored objects and sends observations to the event recognizer. The event recognizer abstracts low-level observations into high-level events according to the supplied mapping. Recognized events are sent to the run-time checker. Although the event recognizer could be incorporated into the checker, we separate them to clearly distinguish high-level properties, which are independent of a particular implementation, from low-level information extraction, which by necessity applies to a given implementation. The run-time checker determines whether or not the current execution history satisfies the properties. Checking is performed incrementally, without storing unnecessary history information.
We have implemented a prototype of the MaC framework for Java programs, called Java-MaC [10]. Java-MaC targets Java bytecode, providing automatic instrumentation of the target program and automatic generation of the checker, event recognizer, and filter. Although the existing implementation can be easily extended to monitor other kinds of target systems by providing a different filter, it is difficult to apply it to distributed systems.

MEDL Specification Language. The language MEDL is built around a two-sorted logic that describes events and conditions. In this paper, we use a parametric variant of MEDL described in [23], but without explicit quantification. We present MEDL syntax and semantics using examples; for the formal presentation, we refer the reader to [10,23]. In MEDL, events are instantaneous observations. Primitive events are received from the monitored system, while composite events are derived in the checker. Events can carry attributes that give additional information about the occurrence. For example, event route(s, d, p) is observed when node s enters the route to node d into its routing table, where the route follows the path p; these three values are supplied as attributes of the event. In addition, each event has the timestamp of its occurrence, interpreted by the checker clock. Unlike events, conditions have durations: once a condition becomes true, it is
true continuously until it becomes false. Primitive conditions are defined within the monitored system, which notifies the checker about changes to the condition value. Conditions can be parametrized by attributes of the events that are used in the definition of a condition, as described below.

Primitive events are defined using the import declaration: import event e(X̄) specifies that event e can be observed directly from the monitored system and that every occurrence of e carries attributes X̄ = (x1, ..., xk). A conjunction of two events e(X̄) = e1(X̄1) ∧ e2(X̄2) is an event that occurs when both e1 and e2 occur simultaneously. We require that var(X̄) ⊆ var(X̄1) ∪ var(X̄2)²; that is, attributes of the composite event can come only from the events and conditions used in its definition. For example, event e(x, y) = e1(p, x) ∧ e2(x, y, 2) occurs at time t only if both e1 and e2 occur at time t, such that the last attribute of e2 is the constant 2 and the common attribute x has the same value in both occurrences. Disjunction of events is defined similarly. Given a condition c, event (e when c) occurs if e occurs while the condition c is true. As an example, consider ec(x1, x2) = e(x1) when c[x1, x2]. Let conditions c[1, 1], c[1, 2], and c[2, 1] be true and c[1, 3] be false. When event e(1) occurs, it will cause simultaneous occurrences of events ec(1, 1) and ec(1, 2).

Each condition c[X̄] is associated with two events, start_c and end_c, both of which have X̄ as attributes. Event start_c occurs when c becomes true, while end_c occurs when c becomes false. A primitive condition c[x1, ..., xn] represents the directly observed state of the monitored system. Whenever a ground instance c(p1, ..., pn) changes its value, the checker receives an occurrence of start_c(p1, ..., pn) or end_c(p1, ..., pn), corresponding to the new value of the ground condition, from the event recognizer.
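The attribute-matching behavior of when can be made concrete. The sketch below is a hypothetical checker fragment (not Java-MaC code) in which a condition is represented by the set of its currently true ground instances; it reproduces the ec example above.

```python
# Ground instances of condition c[x1, x2] that are currently true.
c_true = {(1, 1), (1, 2), (2, 1)}           # c[1, 3] is false

def when(e_attrs, cond_true):
    """e_c(x1, x2) = e(x1) when c[x1, x2]: join e's attribute x1 against
    the true instances of c, producing one occurrence per match."""
    (x1,) = e_attrs
    return sorted((a, b) for (a, b) in cond_true if a == x1)

# An occurrence of e(1) yields e_c(1, 1) and e_c(1, 2) simultaneously.
print(when((1,), c_true))    # [(1, 1), (1, 2)]
```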
Conjunction, disjunction, and negation of conditions are defined in the expected way. Any two events e1(X̄1), e2(X̄2) define a condition [e1, e2), which is parametrized with var(X̄1) ∪ var(X̄2). This condition is true if there has been an occurrence of e1 in the past and, since then, there has been no occurrence of e2 with matching attributes. A time-bounded variant of this expression, [e1, e2)≤t, additionally requires that the occurrence of e1 happened at most t time units ago.
3 DMaC Architecture
Figure 1 shows the architecture of DMaC. The monitored network is specified using NDlog as a collection of rules. The system can be used
² var(X̄) denotes the variables in X̄.
Fig. 1. Architectural Overview of DMaC
for monitoring existing legacy networks (where the network state is exposed as distributed tables that one can query), or declarative networks themselves. The latter provides more opportunity for fine-grained analysis, since the protocols and the monitoring queries are based on the same declarative query language. Separately, safety properties of the protocols are described in MEDL. The MEDL specifications are location-agnostic. PEDL scripts are then used to specify the distributed nature of the monitored property, in the form of an event recognizer that takes as input the events that arrive at a node, and a locationer that determines the locations to which events generated by MEDL are to be sent. MEDL properties are translated into NDlog rules without location information. The planner module then uses the PEDL specifications, in combination with query optimization techniques, to decide on the allocation of the generated rules and to annotate the NDlog rules with location information. The generated NDlog rules are then deployed in the network as monitoring queries.
3.1 Example Network Property: Route Persistence
To illustrate the use of DMaC, we consider the route persistence property [16], which tracks how long each computed route persists without changing. We introduce more complex examples in Section 3.3. Under normal conditions, we expect routes, once established, to remain stable for some time before changing again. Therefore, upon each route update message, we raise persistenceAlarm when a change occurs too quickly (e.g., less than 10 seconds) after the previous update. In the context of deployed network protocols, rapid changes to computed routes could be a symptom of more serious issues, such as a lack of convergence in the protocol.
import event rtMsg(s, d, p)
var routeStored[s, d]
var timeStored[s, d]
event newRoute(s, d, p) = rtMsg(s, d, p) when routeStored[s, d] != p
event persistenceAlarm(s, d) = newRoute(s, d, p)
    when time(newRoute(s, d, p)) - timeStored[s, d] < 10
newRoute(s, d, p) -> { routeStored[s, d] := p;
                       timeStored[s, d] := time(newRoute(s, d, p)); }
The above MEDL program uses the time(newRoute) construct to denote the time of the newRoute event. All events have an associated timestamp. To avoid clock synchronization issues, DMaC takes the convention that events are assigned a receiver-based timestamp, and this timestamp is retrievable via the time construct above. The translation of MEDL into NDlog is as follows:

sm1 newRoute(@S,D,P,T) :- rtMsg(@S,D,P), routeStored(@S,D,P1),
                          P1 != P, T := f_now().
sm2 routeStored(@S,D,P) :- newRoute(@S,D,P,T).
sm3 timeStored(@S,D,T)  :- newRoute(@S,D,P,T).
sm4 persistenceAlarm(@S,D) :- timeStored(@S,D,T1), newRoute(@S,D,P,T),
                              T - T1 < 10.
In the above program, f_now is a user-defined function that returns the current local time of the node where the rule is triggered. In order to check the property over the path vector protocol, we need to connect the two sets of NDlog rules, specifying how the imported event rtMsg is produced by the protocol specification. For this, we add the following PEDL statement:

export event rtMsg: rtMsg(@S,D,P) :- route(@S,D,P,C).
This statement serves several purposes. First, it specifies that rtMsg originates from changes to the route table of node S in the protocol specification; second, the rtMsg event is raised at the same node S where the route table is stored. Finally, it projects away irrelevant attributes of the table, in this case C. PEDL rules can also be used to insert additional attributes into the high-level event. For example, if rtMsg were to be processed at a node different from S, we might choose to insert the origination timestamp as an additional attribute of rtMsg in order to support reasoning about propagation delay in property specifications. The PEDL script also contains another statement:

export persistenceAlarm(@S,D)
which specifies that the persistenceAlarm event is exported; that is, it can be used in other MEDL properties and also in other network protocols. For exported events, we specify the location where the event is raised. From the locations assigned to the imported and exported events, the planner module decides on the deployment of MEDL rules to network nodes. In this simple example, where both exported and imported events are local to the node
Fig. 2. MEDL and PEDL specifications for the Persistence Alarm
S, it is clear that the checker should also be located at S. In more complicated cases, intermediate results of MEDL evaluation may be placed at different nodes. We use known techniques from query optimization to compute this placement. More details are given in Section 4.3.
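Putting the pieces of this section together, the net effect of rules sm1–sm4 at a single node can be rendered directly in Python. The sketch is illustrative only (function and variable names are invented; the real rules execute inside the NDlog engine):

```python
route_stored, time_stored = {}, {}          # keyed by (S, D), as in sm2/sm3
alarms = []

def on_rt_msg(s, d, p, now):
    """sm1-sm4: raise persistenceAlarm when the route to d changes
    less than 10 seconds after the previous change."""
    key = (s, d)
    if key in route_stored and route_stored[key] != p:   # sm1: newRoute
        if now - time_stored[key] < 10:                  # sm4: too soon
            alarms.append((s, d, now))
    if key not in route_stored or route_stored[key] != p:
        route_stored[key] = p                            # sm2
        time_stored[key] = now                           # sm3

on_rt_msg("a", "c", ("a", "c"), now=0)        # first route: no alarm
on_rt_msg("a", "c", ("a", "b", "c"), now=25)  # stable for 25s: no alarm
on_rt_msg("a", "c", ("a", "c"), now=28)       # changed after 3s: alarm
print(alarms)    # [('a', 'c', 28)]
```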
3.2 MEDL Extensions for Sliding Window Event Correlation
In practice, several monitoring scenarios in the networking domain require the ability to reason about event occurrences that arrive within a given time window. We provide an example in Section 3.3, but briefly outline the language extensions to MEDL that support such correlation. First, to make reasoning about event correlation easier, we define a time-bounded conjunction of events e1 ∧≤t e2, which occurs when matching occurrences of e1 and e2 happen within t time units of each other. Second, we introduce sliding-window aggregation over event attributes: an expression of the form agg[e(x, p)]t applies the aggregation operator agg (e.g., sum, count, max) to the attribute p of the occurrences of e within the last t time units. For example, max[e(x, p)]5 > 10 asserts that the maximum value of the attribute p in the events that occurred in the last 5 time units should be larger than 10. Note that such an expression effectively defines a separate sliding window for each value of x encountered so far.
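A possible realization of the time-bounded conjunction, mirroring the soft-state translation used later in Section 4.2, is sketched below. The names are illustrative, and per-key buffers stand in for the soft-state tables:

```python
from collections import deque

class BoundedConjunction:
    """e = e1 AND_{<=t} e2: fire when occurrences of e1 and e2 with
    matching join attributes fall within t time units of each other."""
    def __init__(self, t):
        self.t = t
        self.buf = {"e1": deque(), "e2": deque()}   # buffered (key, time)

    def occur(self, name, key, now):
        other = "e2" if name == "e1" else "e1"
        # Drop occurrences older than the window (soft-state expiry).
        for q in self.buf.values():
            while q and now - q[0][1] > self.t:
                q.popleft()
        self.buf[name].append((key, now))
        # Return the matching occurrences of the other event, if any.
        return [(key, ts) for (k, ts) in self.buf[other] if k == key]

corr = BoundedConjunction(t=5)
corr.occur("e1", key="x", now=0)
print(corr.occur("e2", key="x", now=3))    # within the window: [('x', 0)]
print(corr.occur("e2", key="x", now=9))    # e1 has expired: []
```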
3.3 Other Examples
We consider additional network properties to illustrate the flexibility of DMaC, as well as demonstrate intuitively the translation of MEDL into NDlog. The
examples are representative of typical queries that involve aggregation and event correlation. Other examples unrelated to network routing (e.g., ensuring the correctness of distributed hash table routing, multicast trees, etc.) are also possible within our framework.

Data Plane Monitoring. In the previous example, we considered a property that, when violated, indicates a problem with the control plane of the network. Here, we consider a property that indicates a problem with the data plane, that is, the transmission of packets over established routes. Often, sudden changes in the flow rate of data across the network indicate a problem. We define the flow rate as the average number of packets transmitted in a period of time (for example, during the last 60 seconds), and raise rateAlarm when the flow rate between two nodes exceeds a threshold (say, 100KB). To calculate the flow rate, we use the new sliding-window feature of MEDL.

import event package(s, d, size)
event rateAlarm(s, d) = start(size[package(s, d, size)]60 > 100000)
As the result of the translation to NDlog, we obtain the following rules:

materialize(package,keys(1,2),60).
ct1 packageSum(@S,D,SUM<Size>) :- package(@S,D,Size).
ct2 rateAlarm(@S,D) :- packageSum_ins(@S,D,Sum), Sum > 100000.
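The windowed sum maintained by ct1/ct2 can be emulated with a per-(s, d) buffer of recent package tuples. This is an illustrative sketch with invented names, not the generated NDlog execution:

```python
from collections import defaultdict, deque

WINDOW, THRESHOLD = 60, 100_000     # 60-second window, 100KB threshold

windows = defaultdict(deque)        # (s, d) -> deque of (time, size)

def on_package(s, d, size, now):
    """ct1/ct2: keep 60s of package sizes per (s, d); alarm when the
    windowed sum exceeds the threshold."""
    q = windows[(s, d)]
    q.append((now, size))
    while q and now - q[0][0] > WINDOW:      # evict expired tuples
        q.popleft()
    total = sum(sz for (_, sz) in q)
    return total > THRESHOLD                 # raise rateAlarm?

print(on_package("a", "b", 60_000, now=0))    # False
print(on_package("a", "b", 50_000, now=30))   # True: 110KB within 60s
print(on_package("a", "b", 20_000, now=70))   # False: first tuple expired
```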
Distributed Time-based Event Correlation. Often, it is necessary to correlate events that are raised by different nodes in the network. In particular, simultaneous problems in the control and data planes of the network often indicate an attack on the network. In this example, we consider a set of gateways, each of which is responsible for the health of a set of sensor nodes. The gateways need to correlate persistence alarms (pAlarms) and rate alarms (rAlarms) sent from different nodes. We revisit this example in Section 4.3 to motivate our approach to location assignment for MEDL rules. The property utilizes the node-to-gateway mapping as an imported condition, allowing the mapping to change dynamically.

import condition gateway(s, m)
event gatewayAlarm(m, z) = persistenceAlarm(s, z) ∧<5 rateAlarm(z, d)
                           when gateway(s, m)
alarm attackAlarm(m) = start(#[gatewayAlarm(m, z)]3600 > 4)
The translation to NDlog is as follows:

materialize(gateway,keys(1),infinity).
materialize(pAlarms,keys(1,2),5).
materialize(rAlarms,keys(1,2),5).
materialize(gatewayAlarm,keys(1,2),3600).
cm1 pAlarms(@M,S,D) :- persistenceAlarm(@S,D,Stat), gateway(@S,M).
cm2 rAlarms(@M,S,D) :- rateAlarm(@S,D,Stat), gateway(@S,M).
cm3 gatewayAlarm(@M,Z) :- pAlarms(@M,S,Z), rAlarms(@M,Z,D).
cm4 alarmNum(@M,COUNT<>) :- gatewayAlarm(@M,Z).
cm5 attackAlarm(@M,Count) :- alarmNum_ins(@M,Count), Count > 4.
4 Translating MEDL into NDlog
In this section, we present a general algorithm that translates MEDL rules into corresponding NDlog programs. Figure 3 shows the steps involved in the translation: MEDL normalization, Datalog generation, and optimized NDlog generation. We focus on the first two steps and defer the discussion of the third step to Section 4.3.
4.1 MEDL Normalization
In the first step, each MEDL rule is rewritten into a normalized MEDL expression, in which each event and condition is defined by an application of exactly one operator to events, conditions, or constants. For example,

e(x, y) = (e1(x, z) ∧≤t e2(y, z)) when (c1[x, y] ∧ c2[x, z])

would be represented as the following three rules:

e(x, y)      = e12(x, y, z) when c12[x, y, z]
e12(x, y, z) = e1(x, z) ∧≤t e2(y, z)
c12[x, y, z] = c1[x, y] ∧ c2[x, z]
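Normalization is essentially a recursive flattening of the expression tree, introducing a fresh name for each nested subexpression. The toy sketch below illustrates this; the tuple encoding of MEDL expressions and the generated names are invented for illustration:

```python
from itertools import count

fresh = count(1)    # generator of fresh intermediate names t1, t2, ...

def normalize(name, expr, out):
    """Flatten a nested (op, args...) expression tree so that every
    derived definition applies exactly one operator."""
    args = []
    for a in expr[1:]:
        if isinstance(a, tuple):            # nested sub-expression
            sub = f"t{next(fresh)}"
            normalize(sub, a, out)          # recurse, emitting its rule
            args.append(sub)
        else:
            args.append(a)
    out.append((name, (expr[0],) + tuple(args)))

# e = (e1 AND<=t e2) when (c1 AND c2)
expr = ("when", ("and_t", "e1", "e2"), ("and", "c1", "c2"))
rules = []
normalize("e", expr, rules)
for lhs, rhs in rules:
    print(lhs, "=", rhs)
```

Each emitted rule applies exactly one operator, matching the three-rule normal form shown above.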
We also require that each guarded command update exactly one variable, which can be achieved by splitting the update block into individual statements and creating a separate guarded command for each.

Events, conditions, and auxiliary variables, the basic components of MEDL, are translated to tuples in NDlog. An event e(X̄) is translated to the tuple e(X1,...,Xn), and the presence of the tuple indicates that the event has occurred. The translation of a condition c[X̄] is c(X1,...,Xn), where the presence of the tuple indicates that the condition is true and its absence means that it is false. An auxiliary variable v[X̄] is translated to the tuple v(X1,...,Xn,V), with the last attribute storing the current value of the variable. Tuples that correspond to conditions and variables are materialized using X1,...,Xn as keys. Events, on the other hand, may carry attributes that need not be used as keys. We use a simple technique to identify such attributes. An event attribute that is used in the definition of a condition or variable as its parameter is always a part of the key for the event relation; this ensures that the event is present when the rule that updates the condition or variable is evaluated. However, if an attribute of an event is used only to update the values of auxiliary variables, then the attribute need not be a part of the key. Consider event
Fig. 3. Translation process from MEDL rules to NDlog rules
MEDL rules and the corresponding Datalog rules:

e(X̄) = e1(X̄1) ∨ e2(X̄2):
    e(X1,...,Xn) :- e1(X1,1,...,X1,k).
    e(X1,...,Xn) :- e2(X2,1,...,X2,m).
e(X̄) = e1(X̄1) when c[X̄2]:
    e(X1,...,Xn) :- e1(X1,1,...,X1,k), c(X2,1,...,X2,m).
e(X̄) = e1(X̄1) ∧≤t e2(X̄2):
    materialize(e1',keys(1,2),t).  materialize(e2',keys(1,2),t).
    e1'(X1,1,...,X1,k) :- e1(X1,1,...,X1,k).
    e2'(X2,1,...,X2,m) :- e2(X2,1,...,X2,m).
    e(X1,...,Xn) :- e1(X1,1,...,X1,k), e2'(X2,1,...,X2,m).
    e(X1,...,Xn) :- e2(X2,1,...,X2,m), e1'(X1,1,...,X1,k).
e(X̄) = start(c[Ȳ]):
    e(X1,...,Xn) :- c_ins(Y1,...,Ym).
e(X̄) = end(c[Ȳ]):
    e(X1,...,Xn) :- c_del(Y1,...,Ym).
c[X̄] = c1[X̄1] ∧ c2[X̄2]:
    c(X1,...,Xn) :- c1(X1,1,...,X1,k), c2(X2,1,...,X2,m).
c[X̄] = c1[X̄1] ∨ c2[X̄2]:
    c(X1,...,Xn) :- c1(X1,1,...,X1,k).
    c(X1,...,Xn) :- c2(X2,1,...,X2,m).
c[X̄] = pred(v1[Z̄1],...,vp[Z̄p]):
    c(X1,...,Xn) :- v1(Z1,1,...,Z1,m1,Val1), ..., vp(Zp,1,...,Zp,mp,Valp),
                    pred(Val1,...,Valp).
c[X̄] = [e1(X̄1), e2(X̄2)):
    c(X1,...,Xn) :- e1(X1,1,...,X1,k).
    delete c(X1,...,Xn) :- e2(X2,1,...,X2,m), c(X1,...,Xn).
e(X̄) → {v[Z̄] := expr(v1[Z̄1],...,vp[Z̄p])}:
    v(Z1,...,Zn,Val) :- e(X1,...,Xn), v1(Z1,1,...,Z1,m1,Val1), ...,
                        vp(Zp,1,...,Zp,mp,Valp), Val := expr(Val1,...,Valp).

Fig. 4. MEDL and corresponding Datalog rules
newRoute in Figure 2. Its attribute p is not a part of the key, since the event is used in the definition of persistenceAlarm and also to update the values of variables routeStored and timeStored, none of which is parameterized by p.
4.2 Datalog Generation
The Datalog generation process rewrites normalized MEDL rules into location-agnostic Datalog rules. Figure 4 summarizes the translation algorithm by listing each MEDL rule type and the corresponding Datalog rules. We categorize normalized MEDL rules into ten types, of which six are for event generation, three for condition generation, and one for variable updates. Due to space constraints, we highlight two particularly interesting translations. The third row shows a MEDL rule for sliding-window-based event correlation (see Section 3.2) and the corresponding Datalog rules. The translation results in four Datalog rules: two that store the events e1 and e2 in soft-state tables e1' and e2', with a lifetime determined by the sliding window size of t seconds, and two that correlate the events over that time interval.
In the eighth row, a condition predicate c(X1,...,Xn) over auxiliary variables v1[Z̄1],...,vp[Z̄p] is handled by introducing the function predicate pred into the Datalog rule. The rule is triggered by the update events of each variable. As an example, the condition c[x,y] = v1[x] + v2[y] > 5 is translated to the Datalog rule c(X,Y) :- v1(X,Val1), v2(Y,Val2), Val1 + Val2 > 5. Note that the time(event) construct in MEDL requires recording the generation timestamp as additional information for an event. If the timestamp of an event is used in a MEDL program, a variable T is added to the relation, and T := f_now() is added to the rule that produces the event. This results in the use of receiver-based timestamps, where each event is timestamped based on the recipient node's time.
4.3 NDlog Program Generation and Optimization
The Filter and Locationer modules in PEDL explicitly indicate the physical locations of the imported and exported events and conditions used in DMaC rules. To deploy DMaC programs for distributed monitoring, one needs to further assign the locations where the computations in MEDL (e.g., correlation of events and conditions) should take place. In several instances, different location assignments may result in different communication overheads, and the specific data placement strategy is determined by factors such as inter-node latency, bandwidth, and the rate at which events are generated. Interestingly, given our use of declarative networking, these optimization decisions map naturally onto distributed query optimizations. Our goal in this section is to highlight some of the challenges and opportunities in this space and to set the stage for richer explorations in the future.

Motivating Example. We consider a three-node network consisting of nodes n1, n2 and n3, and the MEDL rule e3(X̄3) = e1(X̄1) ∧≤t e2(X̄2), where e1(X̄1) is located at n1, e2(X̄2) is located at n2, and e3(X̄3), the correlation result of e1 and e2, is located at n3. These three events respectively correspond to the persistenceAlarm, rateAlarm, and attackAlarm events from the event correlation example in Section 3.3. Depending on the location where the correlation is performed, the compilation of the MEDL rule may result in different sets of NDlog rules, each of which potentially has a distinct execution overhead. There are three potential execution plans for the above MEDL rule:

- Plan a (correlation at n1): Node n2 sends e2(X̄2) to n1. Node n1 correlates the received e2(X̄2) events with the local e1(X̄1) events and sends the resulting e3(X̄3) to n3.
- Plan b (correlation at n2): Node n1 sends e1(X̄1) to n2, and n2 performs the correlation and sends the resulting e3(X̄3) to n3.
- Plan c (correlation at n3): Nodes n1 and n2 send the generated events e1(X̄1) and e2(X̄2) to n3, where the correlation is performed.

Once the correlation location is decided, a MEDL rule is translated into NDlog rules automatically by the compilation process. For instance, taking node
n3 as the correlation location, the above MEDL rule is translated into the following NDlog rules:

op1 e1'(@n3,X1) :- e1(@n1,X1).
op2 e2'(@n3,X2) :- e2(@n2,X2).
op3 e3(@n3,X3)  :- e1'(@n3,X1), e2'(@n3,X2).
Rules op1 and op2 ship e1(X̄1) and e2(X̄2) from their original locations to node n3, the correlation location. Rule op3 then performs the correlation of these two events and generates e3(X̄3). Following a similar approach, one can easily write the corresponding NDlog rules for correlation at n1 or n2.

Cost-based Query Optimizations. When a MEDL rule that involves multiple events and conditions is compiled to an NDlog implementation, we choose by default to send the contributing events and conditions to the location where the computation result resides (i.e., plan c in the above example). Plan c may in fact turn out to be an inefficient strategy. Revisiting the correlation example in Section 3.3, if node n3 is connected via a high-latency, low-bandwidth link to nodes n1 and n2, and the rate at which events e1 and e2 are generated is extremely high, this plan would overwhelm node n3. A superior strategy in this case could be to correlate the events between n1 and n2 first. The savings can be tremendous, particularly if these two nodes are connected via a high-speed network and the actual generation of e3 events is infrequent. Interestingly, one can model the above tradeoffs via cost-based query optimization, where a cost is assigned to each plan and the plan with the lowest cost is selected. As an example, the bandwidth utilization of the above plans can be estimated from the rate at which events arrive and the selectivity of the correlation:

Plan a: |e2| * s_e2 + r_e1,e2 * |e1| * |e2| * s_e3
Plan b: |e1| * s_e1 + r_e1,e2 * |e1| * |e2| * s_e3
Plan c: |e1| * s_e1 + |e2| * s_e2

where |ei| represents the estimated generation rate of event ei(X̄i), s_ei represents the message size of ei(X̄i), and r_e1,e2 is the selectivity of the correlation, i.e., the likelihood that e1(X̄1) and e2(X̄2) are correlated.
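The three cost expressions can be compared directly to pick a plan. The sketch below plugs in hypothetical rates and sizes (all numbers invented for illustration) and selects the cheapest placement:

```python
def plan_costs(rate_e1, rate_e2, size_e1, size_e2, size_e3, selectivity):
    """Bandwidth cost of the three correlation placements, following the
    estimates in the text (|e_i| = event rate, s_ei = message size,
    r_e1,e2 = correlation selectivity)."""
    e3_rate = selectivity * rate_e1 * rate_e2   # expected e3 generation rate
    return {
        "a (at n1)": rate_e2 * size_e2 + e3_rate * size_e3,
        "b (at n2)": rate_e1 * size_e1 + e3_rate * size_e3,
        "c (at n3)": rate_e1 * size_e1 + rate_e2 * size_e2,
    }

# Hypothetical workload: high event rates, rare correlations.
costs = plan_costs(rate_e1=100, rate_e2=100, size_e1=1.0, size_e2=1.0,
                   size_e3=1.0, selectivity=1e-4)
best = min(costs, key=costs.get)
print(costs, "->", best)
```

Under these numbers the default plan c (cost 200) loses to correlating near the sources (cost 101), matching the tradeoff discussed in the text.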
Given a MEDL rule, exhaustive enumeration is required to search the space of potential execution plans and select the optimal plan given the cost estimates; in general, finding the optimal plan is NP-hard. To find approximate solutions that yield efficient plans, commercial database engines utilize dynamic programming [20] in combination with other plan selection heuristics, enabling the query engine to generate efficient plans at reasonable overhead. In our distributed setting, the optimal plan may change over time as the rate of events and network conditions vary during the monitoring period. In the recent database literature, there has been significant interest and progress in the area of adaptive query optimization [5], commonly used to reoptimize query plans during execution. Such runtime reoptimizations are particularly useful
198
W. Zhou et al.
in the areas of wide-area data integration and distributed processing of sensor feeds. We intend to explore the use of adaptive query optimizations in DMaC as future work.
5 Evaluation
In this section, we evaluate the DMaC system. The goals of our evaluation are two-fold: (a) to experimentally validate the correctness of the DMaC implementation, and (b) to study the additional overhead incurred by the monitoring rules. Our experiments are executed on a local cluster of 15 quad-core machines with Intel Xeon 2.33GHz CPUs and 4GB RAM, running Fedora Core 6 with kernel version 2.6.20, interconnected by high-speed Gigabit Ethernet. Our evaluation is based on the P2 declarative networking system [1]. In the experiments, we deploy networks of up to 120 nodes (eight nodes per physical machine). For the workload, we utilize the path vector protocol, which computes the shortest paths between all pairs of nodes. To evaluate the additional overhead incurred by monitoring safety properties, we deploy two versions of the path vector query: PV, which executes the pure path vector query presented in Section 2.1, and PV-DMaC, which additionally executes the DMaC rules presented in Section 3.1 to monitor the route persistence property. When a violation of the property is detected, the specified MEDL/PEDL results in the generation of persistenceAlarm events, which are exported to a centralized monitor that logs all such violations across the network. As input, each node takes a link table pre-initialized with 6 neighbors. After the path vector query computation reaches a fixpoint, i.e., the computation of all-pairs best paths has completed, we periodically inject network events to induce churn (changes to the network topology). To evaluate the performance overhead at high and low rates of network churn, at each 60-second interval we interleave high churn of 50 link updates per second with low churn of 15 link updates per second. As links are added and deleted, the path vector query recomputes the routing tables incrementally.

Figure 5 shows the number of persistenceAlarms generated per second in response to the link updates. We observe that there is a clear correlation
Figure 5 shows the number of persistenceAlarms that are generated per second in response to the link updates. We observe that there is a clear correlation

Fig. 5. Number of updates and persistence alarms over time (rate of events/alarms vs. time elapsed (s); series: link events, alarms)

Fig. 6. Per-node bandwidth (KBps) for monitoring route persistence (per-node bandwidth vs. time elapsed (s); series: PV, PV-DMaC)
DMaC: Distributed Monitoring and Checking
199
between the rate of link events and alarms: when the network is less stable (i.e., under high churn, such as during 0-60 seconds), the persistence property is more likely to be violated due to frequent route recomputations, resulting in a higher rate of persistenceAlarms; the rate of alarms drops significantly when the network is under low churn.

Figure 6 shows the per-node bandwidth overhead incurred by PV and PV-DMaC as the protocol incrementally recomputes in response to link updates. We observe that PV-DMaC incurs an additional overhead of only 11% in bandwidth utilization. The overhead is attributed primarily to the generation of persistenceAlarms, which are sent to the centralized monitor. We note that in absolute terms, the increase in bandwidth utilization is 2.5KBps, which is well within the capacity of typical broadband connections.
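The relationship between the reported relative (11%) and absolute (2.5KBps) overheads can be made explicit with a line of arithmetic. The helper below is purely illustrative; the function name and the inferred baseline are not from the paper.

```python
def bandwidth_overhead(base_kbps, monitored_kbps):
    """Absolute (KBps) and relative bandwidth overhead of monitoring,
    given the baseline (PV) and monitored (PV-DMaC) per-node bandwidth."""
    absolute = monitored_kbps - base_kbps
    relative = absolute / base_kbps
    return absolute, relative

# An 11% relative overhead with a 2.5 KBps absolute increase implies
# a PV baseline of roughly 2.5 / 0.11 ~= 22.7 KBps per node.
```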
6 Related Work
Literature on run-time monitoring and checking for networked systems can be divided into two broad categories. One category addresses general-purpose run-time verification problems. These papers are typically concerned with increasing the expressiveness of property specification languages and with developing time- and space-efficient online checking algorithms. Multiple run-time verification systems [7,21,3,4] reported in the literature are primarily oriented toward monitoring of software. While some of them are capable of collecting observations from multiple nodes in a distributed system [2], they typically assume a centralized checker. The other category of related work originates in the networking community. Here, distribution is critical both in the collection of observations and in checking. This line of work typically uses simple invariant properties and is concerned with minimizing network traffic. An important point of comparison is [8], which offers a system, H-SEND, for invariant monitoring of wireless networks.

Our work differs from both of these categories. On one hand, we use a richer language than those typically used in correctness monitoring of networks. On the other hand, we address distributed deployment of checkers in the network, an aspect typically not considered in the run-time verification literature. Finally, deployment of checkers is tightly integrated with the network deployment itself through NDlog, which is a unique feature of our approach.
7 Conclusion and Future Work
We have presented a way to integrate a run-time verification framework into a declarative networking system that is based on the language NDlog. The integration allows us to specify high-level, implementation-independent properties of network protocols and applications in the language MEDL, generate checkers for these properties, and deploy the checkers in a distributed fashion across the network. Checkers are generated by translating MEDL properties into NDlog and
200
W. Zhou et al.
are executed as distributed queries along with the protocol implementations. We use distributed query optimization techniques to derive the allocation of checkers to network nodes.

In future work, we will work to remove the restrictions on MEDL constructs introduced in this paper. The restrictions stem from the treatment of event timestamps in a distributed system. Currently, timestamps of events transmitted across the network are assigned based on the local clock of the receiving node, and the sender's timestamps may be captured as event attributes. While this is adequate for many commonly used network properties, a more general treatment is desirable. A possible approach is to introduce knowledge about the physical distribution of events through the network and to extend the notion of a timestamp along the lines of [22].
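The timestamping rule described above (receiver's clock is authoritative; the sender's clock travels only as an event attribute) can be sketched as follows. The types and names are hypothetical; actual DMaC events are NDlog tuples, not Python objects.

```python
import time
from dataclasses import dataclass

@dataclass
class Event:
    name: str                 # e.g. "persistenceAlarm"
    attrs: dict               # event attributes, including protocol data
    sender_ts: float          # sender's clock, carried as an attribute only
    recv_ts: float = 0.0      # authoritative timestamp, set on arrival

def receive(event, local_clock=time.time):
    """Assign the authoritative timestamp from the receiving node's local
    clock; the sender's timestamp remains available merely as metadata."""
    event.recv_ts = local_clock()
    return event
```

Because checkers order events by `recv_ts`, clock skew between nodes does not affect checking, at the cost of losing a globally consistent notion of when an event physically occurred.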
Acknowledgments

This work is based on work supported in part by ONR MURI N00014-07-1-0907, NSF CNS-0721845 and NSF IIS-0812270.
References

1. P2: Declarative Networking, http://p2.cs.berkeley.edu
2. Bauer, A., Leucker, M., Schallhart, C.: Monitoring of real-time properties. In: Arun-Kumar, S., Garg, N. (eds.) FSTTCS 2006. LNCS, vol. 4337, pp. 260–272. Springer, Heidelberg (2006)
3. Chen, F., Rosu, G.: MOP: An efficient and generic runtime verification framework. In: Proceedings of OOPSLA 2007, pp. 569–588 (2007)
4. Colombo, C., Pace, G., Schneider, G.: Dynamic event-based runtime monitoring of real-time and contextual properties. In: 13th International Workshop on Formal Methods for Industrial Critical Systems (FMICS 2008) (September 2008)
5. Deshpande, A., Ives, Z.G., Raman, V.: Adaptive query processing. Foundations and Trends in Databases 1(1), 1–140 (2007)
6. Diaz, M., Juanole, G., Courtiat, J.-P.: Observer – a concept for formal on-line validation of distributed systems. IEEE Transactions on Software Engineering 20(12), 900–913 (1994)
7. Havelund, K., Rosu, G.: Monitoring Java programs with Java PathExplorer. In: Proceedings of the Workshop on Runtime Verification. Electronic Notes in Theoretical Computer Science, vol. 55. Elsevier, Amsterdam (2001)
8. Herbert, D., Sundaram, V., Lu, Y.-H., Bagchi, S., Li, Z.: Adaptive correctness monitoring for wireless sensor networks using hierarchical distributed run-time invariant checking. ACM Transactions on Autonomous and Adaptive Systems 2(3) (2007)
9. Jahanian, F., Goyal, A.: A formalism for monitoring real-time constraints at run-time. In: 20th Int. Symp. on Fault-Tolerant Computing Systems (FTCS-20), pp. 148–155 (1990)
10. Kim, M., Kannan, S., Lee, I., Sokolsky, O., Viswanathan, M.: Java-MaC: a run-time assurance approach for Java programs. Formal Methods in System Design 24(2), 129–155 (2004)
11. Kim, M., Viswanathan, M., Ben-Abdallah, H., Kannan, S., Lee, I., Sokolsky, O.: Formally specified monitoring of temporal properties. In: Proceedings of the European Conf. on Real-Time Systems (ECRTS 1999), pp. 114–121 (June 1999)
12. Liu, X., Guo, Z., Wang, X., Chen, F., Lian, X., Tang, J., Wu, M., Kaashoek, M.F., Zhang, Z.: D3S: Debugging deployed distributed systems. In: NSDI (2008)
13. Loo, B.T., Condie, T., Hellerstein, J.M., Maniatis, P., Roscoe, T., Stoica, I.: Implementing declarative overlays. In: ACM SOSP (2005)
14. Loo, B.T., Hellerstein, J.M., Stoica, I., Ramakrishnan, R.: Declarative routing: Extensible routing with declarative queries. In: SIGCOMM (2005)
15. Mok, A.K., Liu, G.: Efficient run-time monitoring of timing constraints. In: IEEE Real-Time Technology and Applications Symposium (June 1997)
16. Paxson, V., Kurose, J., Partridge, C., Zegura, E.W.: End-to-end routing behavior in the Internet. IEEE/ACM Transactions on Networking, 601–615 (1996)
17. Reynolds, P., Killian, C., Wiener, J.L., Mogul, J.C., Shah, M.A., Vahdat, A.: Pip: Detecting the unexpected in distributed systems. In: NSDI (2006)
18. Sankar, S., Mandal, M.: Concurrent runtime monitoring of formally specified programs. IEEE Computer (1993)
19. Savor, T., Seviora, R.E.: Toward automatic detection of software failures. IEEE Computer, 68–74 (August 1998)
20. Selinger, P.G., Astrahan, M.M., Chamberlin, D.D., Lorie, R.A., Price, T.G.: Access path selection in a relational database management system. In: SIGMOD (1979)
21. Sen, K., Rosu, G., Agha, G.: Online efficient predictive safety analysis of multithreaded programs. In: Jensen, K., Podelski, A. (eds.) TACAS 2004. LNCS, vol. 2988, pp. 123–138. Springer, Heidelberg (2004)
22. Sen, K., Vardhan, A., Agha, G., Rosu, G.: Efficient decentralized monitoring of safety in distributed systems. In: 26th International Conference on Software Engineering (ICSE 2004), pp. 418–427 (2004)
23. Sokolsky, O., Sammapun, U., Lee, I., Kim, J.: Run-time checking of dynamic properties. In: Proceedings of the 5th International Workshop on Runtime Verification (RV 2005), Edinburgh, Scotland, UK (July 2005)
Author Index
Adler, Philipp 26
Amme, Wolfram 26
Barringer, Howard 1
Falcone, Yliès 40
Fernandez, Jean-Claude 40
Finkbeiner, Bernd 60
Groce, Alex 1
Hansen, Trevor 76
Havelund, Klaus 1
He, Guojin 168
Heimdahl, Mats P.E. 168
Heljanko, Keijo 93
Kähkönen, Kari 93
Křena, Bohuslav 101
Kuhtz, Lars 60
Lampinen, Jani 93
Lee, Insup 184
Letko, Zdeněk 101
Loo, Boon Thau 184
Lopes, Antónia 115
Mounier, Laurent 40
Niemelä, Ilkka 93
Nir-Buchbinder, Yarden 101
Nunes, Isabel 115
Rajamani, Sriram K. 25
Rosu, Grigore 132
Rydeheard, David 1
Schachte, Peter 76
Schulte, Wolfram 132
Serbănută, Traian Florin 132
Sokolsky, Oleg 184
Søndergaard, Harald 76
Tripakis, Stavros 152
Tzoref-Brill, Rachel 101
Ur, Shmuel 101
Vasconcelos, Vasco T. 115
Vojnar, Tomáš 101
Zhai, Antonia 168
Zhou, Wenchao 184